WorldWideScience

Sample records for decentralized workflow scheduling

  1. Decentralized Ground Staff Scheduling

    DEFF Research Database (Denmark)

    Sørensen, M. D.; Clausen, Jens

    2002-01-01

    scheduling is investigated. The airport terminal is divided into zones, where each zone consists of a set of stands geographically next to each other. Staff are assigned to work in only one zone, and the staff scheduling is planned decentrally for each zone. The advantage of this approach is that the staff...... work in a smaller area of the terminal and thus spend less time walking between stands. When planning decentrally, the allocation of stands to flights influences the staff scheduling, since the workload in a zone depends on which flights are allocated to stands in the zone. Hence solving the problem...... depends on the actual stand allocation but also on the number of zones and the layout of these. A mathematical model of the problem is proposed, which integrates the stand allocation and the staff scheduling. A heuristic solution method is developed and applied to a real case from British Airways, London...

  2. Schedule-Aware Workflow Management Systems

    Science.gov (United States)

    Mans, Ronny S.; Russell, Nick C.; van der Aalst, Wil M. P.; Moleman, Arnold J.; Bakker, Piet J. M.

    Contemporary workflow management systems offer work-items to users through specific work-lists. Users select the work-items they will perform without having a specific schedule in mind. However, in many environments work needs to be scheduled and performed at particular times. For example, in hospitals many work-items are linked to appointments, e.g., a doctor cannot perform surgery without reserving an operating theater and making sure that the patient is present. One of the problems when applying workflow technology in such domains is the lack of calendar-based scheduling support. In this paper, we present an approach that supports the seamless integration of unscheduled (flow) and scheduled (schedule) tasks. Using CPN Tools we have developed a specification and simulation model for schedule-aware workflow management systems. Based on this a system has been realized that uses YAWL, Microsoft Exchange Server 2007, Outlook, and a dedicated scheduling service. The approach is illustrated using a real-life case study at the AMC hospital in the Netherlands. In addition, we elaborate on the experiences obtained when developing and implementing a system of this scale using formal techniques.

  3. A three-level atomicity model for decentralized workflow management systems

    Science.gov (United States)

    Ben-Shaul, Israel Z.; Heineman, George T.

    1996-12-01

    A workflow management system (WFMS) employs a workflow manager (WM) to execute and automate the various activities within a workflow. To protect the consistency of data, the WM encapsulates each activity with a transaction; a transaction manager (TM) then guarantees the atomicity of activities. Since workflows often group several activities together, the TM is responsible for guaranteeing the atomicity of these units. There are scalability issues, however, with centralized WFMSs. Decentralized WFMSs provide an architecture for multiple autonomous WFMSs to interoperate, thus accommodating multiple workflows and geographically-dispersed teams. When atomic units are composed of activities spread across multiple WFMSs, however, there is a conflict between global atomicity and local autonomy of each WFMS. This paper describes a decentralized atomicity model that enables workflow administrators to specify the scope of multi-site atomicity based upon the desired semantics of multi-site tasks in the decentralized WFMS. We describe an architecture that realizes our model and execution paradigm.

  4. Requirements for Secure Logging of Decentralized Cross-Organizational Workflow Executions

    NARCIS (Netherlands)

    Wombacher, Andreas; Wieringa, Roelf J.; Jonker, Willem; Knezevic, P.; Pokraev, S.; Meersman, R.; Tari, Z.; Herrero, P.; Méndez, G.; Cavedon, L.; Martin, D.; Hinze, A.; Buchanan, G.

    2005-01-01

    The control of actions performed by parties involved in a decentralized cross-organizational workflow is exercised by several independent workflow engines. Due to the lack of centralized coordination control, auditing is required that supports reliable and secure detection of malicious actions

  5. Decentralized Utilitarian Mechanisms for Scheduling Games

    NARCIS (Netherlands)

    Cole, R.; Correa, J.; Gkatzelis, V.; Mirrokni, V.; Olver, N.K.

    2015-01-01

    Game Theory and Mechanism Design are by now standard tools for studying and designing massive decentralized systems. Unfortunately, designing mechanisms that induce socially efficient outcomes often requires full information and prohibitively large computational resources. In this work we study

  6. Decentralized Consistency Checking in Cross-organizational Workflows

    NARCIS (Netherlands)

    Wombacher, Andreas

    Service Oriented Architectures facilitate loosely coupled composed services, which are established in a decentralized way. One challenge for such composed services is to guarantee consistency, i.e., deadlock-freeness. This paper presents a decentralized approach to consistency checking, which

  7. Load scheduling for decentralized CHP plants

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Madsen, Henrik; Nielsen, Torben Skov

    ) an interactive decision support tool by which optimal schedules can be found given the forecasts or user-defined modifications of the forecasts, and (iii) an automatic on-line system for monitoring when conditions have changed so that rescheduling is appropriate. In this report the focus is on methods applicable...... be obtained. Furthermore, we believe that all relevant forecasting methods are far too complicated to allow for this integration; both uncertainties originating from the dependence of heat load on climate and from meteorological forecasts need to be taken into account. Instead we suggest that the decision....... By letting the system find optimal schedules for each of these realizations the operator can gain some insight into the importance of the uncertainties. It is shown that with modern personal computers (e.g. 1 GHz Pentium III), operating systems (e.g. RedHat Linux 6.0), and compilers (e.g. GNU C 2...

  8. The robust schedule - A link to improved workflow

    DEFF Research Database (Denmark)

    Lindhard, Søren; Wandahl, Søren

    2012-01-01

    -down the contractors, and force them to rigorously adhere to the initial schedule. If delayed, the work-pace or manpower has to be increased to observe the schedule. In an attempt to improve productivity, three independent site-managers have been interviewed about time-scheduling. Their experiences and opinions have been...... analyzed and weaknesses in existing time scheduling have been found. The findings showed a negative side effect of keeping the schedule too tight. A too tight schedule is inflexible and cannot absorb variability in production. Flexibility is necessary because of the contractors interacting and dependable....... The result is a chaotic, complex and uncontrolled construction site. Furthermore, strict time limits entail the workflow being optimized under non-optimal conditions. Even though productivity seems to be increasing, productivity per man-hour is decreasing, resulting in increased cost. To increase productivity...

  9. Elastic Scheduling of Scientific Workflows under Deadline Constraints in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Nazia Anwar

    2018-01-01

    Scientific workflow applications are collections of several structured activities and fine-grained computational tasks. Scientific workflow scheduling in cloud computing is a challenging research topic due to its distinctive features. In cloud environments, it has become critical to perform efficient task scheduling that reduces scheduling overhead, minimizes cost and maximizes resource utilization while still meeting the user-specified overall deadline. This paper proposes a strategy, Dynamic Scheduling of Bag of Tasks based workflows (DSB), for scheduling scientific workflows with the aim of minimizing the financial cost of leasing Virtual Machines (VMs) under a user-defined deadline constraint. The proposed model groups the workflow into Bags of Tasks (BoTs) based on data dependency and priority constraints and thereafter optimizes the allocation and scheduling of BoTs on elastic, heterogeneous and dynamically provisioned cloud resources (VMs) in order to attain the method's objectives. The proposed approach considers pay-as-you-go Infrastructure as a Service (IaaS) clouds with inherent features such as elasticity, abundance, heterogeneity and VM provisioning delays. A trace-based simulation using benchmark scientific workflows representing real-world applications demonstrates a significant reduction in workflow computation cost while the workflow deadline is met. The results validate that the proposed model produces better success rates in meeting deadlines and better cost efficiency in comparison to adapted state-of-the-art algorithms for similar problems.
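
The BoT grouping step can be illustrated as a topological leveling of the workflow DAG, where tasks at the same dependency depth form one bag. This is a minimal sketch under assumed inputs (task IDs plus parent-child edge pairs), not the DSB implementation:

```python
from collections import defaultdict, deque

def group_into_bots(tasks, deps):
    """Group workflow tasks into Bags of Tasks (BoTs) by topological level,
    so tasks within one bag have no dependencies among each other."""
    indeg = {t: 0 for t in tasks}
    children = defaultdict(list)
    for parent, child in deps:
        children[parent].append(child)
        indeg[child] += 1
    level = {t: 0 for t in tasks}
    q = deque(t for t in tasks if indeg[t] == 0)
    while q:  # Kahn's algorithm, tracking the longest path to each task
        t = q.popleft()
        for c in children[t]:
            level[c] = max(level[c], level[t] + 1)
            indeg[c] -= 1
            if indeg[c] == 0:
                q.append(c)
    bots = defaultdict(list)
    for t, lv in level.items():
        bots[lv].append(t)
    return [bots[lv] for lv in sorted(bots)]
```

For a diamond-shaped workflow a->(b,c)->d this yields three bags: [a], [b, c], [d]; each bag can then be mapped to VMs independently.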

  10. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.

    Science.gov (United States)

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2014-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage the various workflows, virtual machines (VMs) and workflow executions on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their execution in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies.

  12. Decentralized vs. centralized scheduling in wireless sensor networks for data fusion

    OpenAIRE

    Mitici, M.A.; Goseling, Jasper; de Graaf, Maurits; Boucherie, Richardus J.

    2014-01-01

    We consider the problem of data estimation in a sensor wireless network where sensors transmit their observations according to decentralized and centralized transmission schedules. A data collector is interested in achieving a data estimation using several sensor observations such that the variance of the estimation is below a targeted threshold. We analyze the waiting time for a collector to receive sufficient sensor observations. We show that, for sufficiently large sensor sets, the decentr...

  13. An extended Intelligent Water Drops algorithm for workflow scheduling in cloud computing environment

    Directory of Open Access Journals (Sweden)

    Shaymaa Elsherbiny

    2018-03-01

    Cloud computing is emerging as a high-performance computing environment with a large-scale, heterogeneous collection of autonomous systems and a flexible computational architecture. Many resource management methods may enhance the efficiency of the whole cloud computing system. A key part of cloud computing resource management is resource scheduling. Optimized scheduling of tasks on the cloud's virtual machines is an NP-hard problem, and many algorithms have been presented to solve it. The variations among these schedulers are due to the fact that their scheduling strategies are adapted to the changing environment and the types of tasks. The focus of this paper is on workflow scheduling in cloud computing, which has been gaining attention because workflows have emerged as a paradigm for representing complex computing problems. We propose a novel algorithm extending the nature-inspired Intelligent Water Drops (IWD) algorithm that optimizes the scheduling of workflows on the cloud. The proposed algorithm is implemented and embedded within a workflow simulation toolkit and tested in different simulated cloud environments with different cost models. Our algorithm showed noticeable enhancements over the classical workflow scheduling algorithms. We compared the proposed IWD-based algorithm with other well-known scheduling algorithms, including MIN-MIN, MAX-MIN, Round Robin, FCFS, MCT, PSO and C-PSO; the proposed algorithm yielded noticeable enhancements in performance and cost in most situations.
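
The core IWD move, picking the next edge with probability inversely related to the "soil" deposited on it, can be sketched as follows; the function name and the soil-table format are illustrative assumptions, not taken from the paper:

```python
import random

def iwd_choose_next(current, candidates, soil, eps=0.01):
    """One Intelligent Water Drops step: select the next node with
    probability inversely proportional to the soil on the connecting edge."""
    min_soil = min(soil[(current, j)] for j in candidates)

    def g(j):
        s = soil[(current, j)]
        # shift soils so the transformed value is non-negative
        return s if min_soil >= 0 else s - min_soil

    weights = [1.0 / (eps + g(j)) for j in candidates]
    # roulette-wheel selection over the inverse-soil weights
    r = random.random() * sum(weights)
    acc = 0.0
    for j, w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return j
    return candidates[-1]
```

Edges with little soil (good past choices) are selected far more often, which is what lets the colony of drops converge on low-cost task-to-VM mappings.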

  14. Deadline-constrained workflow scheduling algorithms for Infrastructure as a Service Clouds

    NARCIS (Netherlands)

    Abrishami, S.; Naghibzadeh, M.; Epema, D.H.J.

    2013-01-01

    The advent of Cloud computing as a new model of service provisioning in distributed systems encourages researchers to investigate its benefits and drawbacks on executing scientific applications such as workflows. One of the most challenging problems in Clouds is workflow scheduling, i.e., the

  15. A decentralized scheduling algorithm for time synchronized channel hopping

    Directory of Open Access Journals (Sweden)

    Andrew Tinka

    2011-09-01

    Time Synchronized Channel Hopping (TSCH) is an existing Medium Access Control scheme which enables robust communication through channel hopping and high data rates through synchronization. It is based on a time-slotted architecture, and its correct functioning depends on a schedule which is typically computed by a central node. This paper presents, to our knowledge, the first scheduling algorithm for TSCH networks which is both distributed and able to cope with mobile nodes. Two variations on scheduling algorithms are presented. Aloha-based scheduling allocates one channel for broadcasting advertisements for new neighbors. Reservation-based scheduling augments Aloha-based scheduling with a dedicated timeslot for targeted advertisements based on gossip information. A mobile ad hoc motorized sensor network with frequent connectivity changes is studied, and the performance of the two proposed algorithms is assessed. This performance analysis uses both simulation results and the results of a field deployment of floating wireless sensors in an estuarial canal environment. Reservation-based scheduling performs significantly better than Aloha-based scheduling, suggesting that the improved network reactivity is worth the increased algorithmic complexity and resource consumption.
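
A TSCH schedule is a slotframe matrix of (timeslot, channel offset) cells; the Aloha-based variant reserves one cell for broadcast advertisements. The cell-assignment policy below (round-robin over directed links) is a hypothetical simplification for illustration, not the deployed algorithm:

```python
def build_slotframe(num_slots, num_channels, links, advert_slot=0):
    """Sketch of an Aloha-style TSCH slotframe: cell (advert_slot, 0) is
    reserved for broadcast advertisements; the remaining (slot, channel)
    cells are handed out to directed links in order."""
    schedule = {(advert_slot, 0): "ADVERTISEMENT"}
    cells = [(s, c) for s in range(num_slots) for c in range(num_channels)
             if (s, c) not in schedule]
    for cell, link in zip(cells, links):
        schedule[cell] = link
    return schedule
```

A reservation-based variant would additionally pin a dedicated timeslot for targeted advertisements driven by gossip information.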

  18. Flexible Data-Aware Scheduling for Workflows over an In-Memory Object Store

    Energy Technology Data Exchange (ETDEWEB)

    Duro, Francisco Rodrigo; Garcia Blas, Javier; Isaila, Florin; Wozniak, Justin M.; Carretero, Jesus; Ross, Rob

    2016-01-01

    This paper explores novel techniques for improving the performance of many-task workflows based on the Swift scripting language. We propose novel programmer options for automated distributed data placement and task scheduling. These options trigger a data placement mechanism used for distributing intermediate workflow data over the servers of Hercules, a distributed key-value store that can be used to cache file system data. We demonstrate that these new mechanisms can significantly improve the aggregated throughput of many-task workflows by up to 86x, reduce contention on the shared file system, exploit data locality, and trade off locality and load balance.
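
The locality-versus-load-balance trade-off can be sketched as a simple placement rule: run a task where its input is cached unless that node is clearly overloaded. The threshold and data structures here are assumptions for illustration, not the Swift/Hercules implementation:

```python
def schedule_data_aware(tasks, data_location, node_load, balance_limit=2):
    """Locality-aware placement sketch: prefer the node that caches a task's
    input (as a key-value store server would), but fall back to the
    least-loaded node once the imbalance exceeds balance_limit."""
    assignment = {}
    for task, input_key in tasks:
        preferred = data_location[input_key]
        least = min(node_load, key=node_load.get)
        if node_load[preferred] - node_load[least] >= balance_limit:
            chosen = least  # sacrifice locality to rebalance load
        else:
            chosen = preferred  # keep the computation next to its data
        node_load[chosen] += 1
        assignment[task] = chosen
    return assignment
```

With all inputs cached on one server, the rule keeps the first tasks local and spills later ones to idle nodes, which is the trade-off the paper's mechanisms automate.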

  19. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    Science.gov (United States)

    Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and to present a hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different voltage supply levels by sacrificing clock frequency. Operating at multiple voltages involves a compromise between the quality of schedules and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
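
The DVFS argument rests on dynamic power scaling as P = C·V²·f: a task of fixed cycle count run at a lower voltage-frequency pair consumes quadratically less energy in V while taking longer. A minimal sketch (the capacitance value is a placeholder, not a measured constant):

```python
def dynamic_energy(cycles, voltage, frequency, capacitance=1e-9):
    """Dynamic energy under DVFS: power P = C * V^2 * f, and a task of
    `cycles` cycles runs for t = cycles / f, so E = C * V^2 * cycles."""
    runtime = cycles / frequency
    power = capacitance * voltage ** 2 * frequency
    return power * runtime  # equals capacitance * voltage**2 * cycles
```

For 1e9 cycles, dropping from (1.2 V, 2 GHz) to (0.9 V, 1 GHz) cuts energy from 1.44 J to 0.81 J while doubling runtime, which is exactly the schedule-quality/energy compromise the abstract describes.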

  1. Decentralization and mechanism design for online machine scheduling

    NARCIS (Netherlands)

    Arge, Lars; Heydenreich, Birgit; Müller, Rudolf; Freivalds, Rusins; Uetz, Marc Jochen

    We study the online version of the classical parallel machine scheduling problem to minimize the total weighted completion time from a new perspective: We assume that the data of each job, namely its release date $r_j$, its processing time $p_j$ and its weight $w_j$ is only known to the job itself,

  2. Scheduling Multilevel Deadline-Constrained Scientific Workflows on Clouds Based on Cost Optimization

    Directory of Open Access Journals (Sweden)

    Maciej Malawski

    2015-01-01

    This paper presents a cost optimization model for scheduling scientific workflows on IaaS clouds such as Amazon EC2 or RackSpace. We assume multiple IaaS clouds with heterogeneous virtual machine instances, with a limited number of instances per cloud and hourly billing. Input and output data are stored on a cloud object store such as Amazon S3. Applications are scientific workflows modeled as DAGs, as in the Pegasus Workflow Management System. We assume that tasks in the workflows are grouped into levels of identical tasks. Our model is specified using mathematical programming languages (AMPL and CMPL) and allows us to minimize the cost of workflow execution under deadline constraints. We present results obtained using our model and benchmark workflows representing real scientific applications in a variety of domains. The data used for evaluation come from synthetic workflows and from general-purpose cloud benchmarks, as well as from measurements in our own experiments with Montage, an astronomical application, executed on the Amazon EC2 cloud. We indicate how this model can be used for scenarios that require resource planning for scientific workflows and their ensembles.
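
The level-wise cost minimization can be illustrated with a greedy simplification: split the deadline across levels in proportion to their work, then pick the cheapest VM type per level that meets its share under hourly billing. This is a hypothetical stand-in for the AMPL/CMPL model, not the model itself:

```python
import math

def plan_levels(levels, vm_types, deadline):
    """Greedy deadline-constrained planning sketch.
    levels: work per level in hours on a baseline VM.
    vm_types: {name: (price_per_hour, speedup)}; billing rounds hours up."""
    total_work = sum(levels)
    plan, total_cost = [], 0.0
    for work in levels:
        share = deadline * work / total_work  # this level's deadline slice
        best = None
        for name, (price, speedup) in vm_types.items():
            runtime = work / speedup
            if runtime <= share:
                cost = price * math.ceil(runtime)
                if best is None or cost < best[1]:
                    best = (name, cost)
        if best is None:
            raise ValueError("deadline infeasible even at full speed")
        plan.append(best[0])
        total_cost += best[1]
    return plan, total_cost
```

Tightening the deadline pushes levels from cheap slow instances onto expensive fast ones, reproducing the cost/deadline trade-off the model optimizes exactly.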

  3. REPNET: project scheduling and workflow optimization for Construction Projects

    Directory of Open Access Journals (Sweden)

    Marco Alvise Bragadin

    2013-10-01

    Project planning and control are core processes of construction management. In practice, project planning is achieved with network-based techniques like the Precedence Diagramming Method (PDM). Indeed, many researchers and practitioners claim that networking techniques as such do not provide a suitable model for construction projects. Construction process modeling should account for the specific features of resource flows through project activities. Therefore, an improved resource scheduling method for construction is developed, called REPNET, based on a precedence network plotted on a resource-space chart and presented with a flow-line chart. The heuristics of REPNET are used to carry out resource timing while optimizing process flows and resource usage. The method has been tested on a sample project.

  4. A Hybrid Task Graph Scheduler for High Performance Image Processing Workflows.

    Science.gov (United States)

    Blattner, Timothy; Keyrouz, Walid; Bhattacharyya, Shuvra S; Halem, Milton; Brady, Mary

    2017-12-01

    Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that increases programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈ 43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both of the HTGS-based implementations show good performance. In image stitching, the HTGS implementation achieves performance similar to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3× and 1.8× speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k matrices, respectively.

  5. MaGate Simulator: A Simulation Environment for a Decentralized Grid Scheduler

    Science.gov (United States)

    Huang, Ye; Brocco, Amos; Courant, Michele; Hirsbrunner, Beat; Kuonen, Pierre

    This paper presents a simulator for a decentralized modular grid scheduler named MaGate. MaGate's design emphasizes scheduler interoperability by providing intelligent scheduling that serves the grid community as a whole. Each MaGate scheduler instance is able to deal with dynamic scheduling conditions, with continuously arriving grid jobs. Received jobs are either allocated on local resources or delegated to other MaGates for remote execution. The proposed MaGate simulator is based on the GridSim toolkit and the Alea simulator, and abstracts the features and behaviors of complex fundamental grid elements, such as grid jobs, grid resources, and grid users. Simulation of scheduling tasks is supported by a grid network overlay simulator executing distributed ant-based swarm intelligence algorithms to provide services such as group communication and resource discovery. For evaluation, a comparison of the behaviors of different collaborative policies among a community of MaGates is provided. The results support the use of the proposed approach as a functionally ready grid scheduler simulator.

  6. A Hybrid Metaheuristic for Multi-Objective Scientific Workflow Scheduling in a Cloud Environment

    Directory of Open Access Journals (Sweden)

    Nazia Anwar

    2018-03-01

    Cloud computing has emerged as a high-performance computing environment with a large pool of abstracted, virtualized, flexible, and on-demand resources and services. Scheduling of scientific workflows in a distributed environment is a well-known NP-complete problem and therefore intractable with exact solutions. It becomes even more challenging in the cloud computing platform due to its dynamic and heterogeneous nature. The aim of this study is to optimize multi-objective scheduling of scientific workflows in a cloud computing environment based on the proposed metaheuristic-based algorithm, Hybrid Bio-inspired Metaheuristic for Multi-objective Optimization (HBMMO). The strong global exploration ability of the nature-inspired metaheuristic Symbiotic Organisms Search (SOS) is enhanced by involving an efficient list-scheduling heuristic, Predict Earliest Finish Time (PEFT), in the proposed algorithm to obtain better convergence and diversity of the approximate Pareto front in terms of reduced makespan, minimized cost, and efficient load balance of the Virtual Machines (VMs). The experiments using different scientific workflow applications highlight the effectiveness, practicality, and better performance of the proposed algorithm.

  7. Optimization of workflow scheduling in Utility Management System with hierarchical neural network

    Directory of Open Access Journals (Sweden)

    Srdjan Vukmirovic

    2011-08-01

    Grid computing could be the future computing paradigm for enterprise applications, one of its benefits being that it can be used for executing large-scale applications. Utility Management Systems execute very large numbers of workflows with very high resource requirements. This paper proposes an architecture for a new scheduling mechanism that dynamically executes a scheduling algorithm using feedback about the current status of Grid nodes. Two Artificial Neural Networks were created in order to solve the scheduling problem. A case study is presented for the Meter Data Management system with measurements from the Smart Metering system for the city of Novi Sad, Serbia. Performance tests show that a significant improvement of overall execution time can be achieved by Hierarchical Artificial Neural Networks.

  8. Distributed late-binding micro-scheduling and data caching for data-intensive workflows

    International Nuclear Information System (INIS)

    Delgado Peris, A.

    2015-01-01

    Today's world is flooded with vast amounts of digital information coming from innumerable sources. Moreover, it seems clear that this trend will only intensify in the future. Industry, society and, notably, science are not indifferent to this fact. On the contrary, they are struggling to get the most out of this data, which means that they need to capture, transfer, store and process it in a timely and efficient manner, using a wide range of computational resources. And this task is not always simple. A very representative example of the challenges posed by the management and processing of large quantities of data is that of the Large Hadron Collider experiments, which handle tens of petabytes of physics information every year. Based on the experience of one of these collaborations, we have studied the main issues involved in the management of huge volumes of data and in the completion of sizeable workflows that consume it. In this context, we have developed a general-purpose architecture for the scheduling and execution of workflows with heavy data requirements: the Task Queue. This new system builds on the late-binding overlay model, which has helped experiments successfully overcome the problems associated with the heterogeneity and complexity of large computational grids. Our proposal introduces several enhancements to the existing systems. The execution agents of the Task Queue architecture share a Distributed Hash Table (DHT) and perform job matching and assignment cooperatively. In this way, the scalability problems of centralized matching algorithms are avoided and workflow execution times are improved. Scalability makes fine-grained micro-scheduling possible and enables new functionalities, like the implementation of a distributed data cache on the execution nodes and the integration of data location information in the scheduling decisions... (Author)
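
Cooperative, decentralized matching over a shared DHT can be illustrated with consistent hashing: each execution agent owns arcs of a hash ring and claims the jobs whose identifiers hash into its arcs, so every agent computes the same assignment without a central matchmaker. A sketch under these assumptions, not the Task Queue implementation:

```python
import hashlib
from bisect import bisect_right

def ring_positions(agents, replicas=3):
    """Place each agent at several pseudo-random positions on a hash ring
    (virtual nodes smooth out the load across agents)."""
    ring = []
    for agent in agents:
        for r in range(replicas):
            h = int(hashlib.sha256(f"{agent}:{r}".encode()).hexdigest(), 16)
            ring.append((h, agent))
    ring.sort()
    return ring

def match_job(ring, job_id):
    """A job is matched by the first agent clockwise from the job's hash;
    any agent holding the same ring reaches the same answer."""
    h = int(hashlib.sha256(job_id.encode()).hexdigest(), 16)
    idx = bisect_right(ring, (h, chr(0x10FFFF)))
    return ring[idx % len(ring)][1]
```

Because the mapping is a pure function of the shared ring, agents can match jobs independently yet consistently, and adding or removing an agent only remaps the jobs on its own arcs.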

  9. Low Latency Workflow Scheduling and an Application of Hyperspectral Brightness Temperatures

    Science.gov (United States)

    Nguyen, P. T.; Chapman, D. R.; Halem, M.

    2012-12-01

    New system analytics for Big Data computing holds the promise of major scientific breakthroughs and discoveries from the exploration and mining of the massive data sets becoming available to the science community. However, such data-intensive scientific applications face severe challenges in accessing, managing and analyzing petabytes of data. While the Hadoop MapReduce environment has been successfully applied to data-intensive problems arising in business, there are still many scientific problem domains where limitations in the functionality of MapReduce systems prevent its wide adoption by those communities. This is mainly because MapReduce does not readily support unique science-discipline needs such as special science data formats, graphic and computational data analysis tools, maintaining high degrees of computational accuracy, and interfacing with an application's existing components across heterogeneous computing processors. We address some of these limitations by exploiting the MapReduce programming model for satellite data-intensive scientific problems and address scalability, reliability, scheduling, and data management issues when dealing with climate data records and their complex observational challenges. In addition, we present techniques to support unique Earth science discipline needs such as dealing with special science data formats (HDF and NetCDF). We have developed a Hadoop task scheduling algorithm that improves latency by 2x for a scientific workflow including the gridding of the EOS AIRS hyperspectral Brightness Temperatures (BT). This workflow processing algorithm has been tested at the Multicore Computing Center's private Hadoop-based Intel Nehalem cluster, as well as in a virtual mode under the open-source Eucalyptus cloud. The 55 TB AIRS hyperspectral L1b Brightness Temperature record has been gridded at a resolution of 0.5x1.0 degrees, and we have computed a 0.9 annual anti-correlation to the El Niño Southern Oscillation in

  10. Task Balanced Workflow Scheduling Technique considering Task Processing Rate in Spot Market

    Directory of Open Access Journals (Sweden)

    Daeyong Jung

    2014-01-01

    Full Text Available Cloud computing is a computing paradigm that evolved from distributed computing and provides acquired computing resources in a pay-as-you-go manner. For example, Amazon EC2 offers Infrastructure-as-a-Service (IaaS) instances in three different ways, with different prices, reliability, and performance. Our study is based on an environment using spot instances. Spot instances can significantly decrease costs compared to reserved and on-demand instances; however, they provide a less reliable environment than the other instance types. In this paper, we propose a workflow scheduling scheme that reduces the out-of-bid situation and, consequently, decreases the total task completion time. The simulation results reveal that, compared to various instance types, our scheme achieves a performance improvement of 12.76% in an average combined metric over a workflow scheme that does not consider the processing rate. However, the cost of our scheme is higher than that of an instance with low performance and lower than that of an instance with high performance.
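The out-of-bid problem the abstract addresses can be illustrated with a small sketch (not the paper's algorithm): pick the cheapest bid whose estimated out-of-bid probability, taken from a spot-price history, stays below a risk threshold. All names and numbers here are hypothetical.

```python
# Illustrative sketch only: estimate out-of-bid risk from observed spot
# prices and choose the cheapest acceptable bid.

def out_of_bid_probability(price_history, bid):
    """Fraction of observed spot prices that exceeded the bid."""
    return sum(1 for p in price_history if p > bid) / len(price_history)

def choose_bid(price_history, candidate_bids, max_risk=0.05):
    """Return the cheapest candidate bid with acceptable out-of-bid risk."""
    for bid in sorted(candidate_bids):
        if out_of_bid_probability(price_history, bid) <= max_risk:
            return bid
    return max(candidate_bids)  # fall back to the highest bid

history = [0.031, 0.029, 0.035, 0.030, 0.028, 0.052, 0.033, 0.030]
bid = choose_bid(history, [0.030, 0.040, 0.060])  # -> 0.06 for this history
```

A real scheme would also weigh checkpointing cost and task processing rate, as the abstract indicates.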

  11. A Chaotic Particle Swarm Optimization-Based Heuristic for Market-Oriented Task-Level Scheduling in Cloud Workflow Systems.

    Science.gov (United States)

    Li, Xuejun; Xu, Jia; Yang, Yun

    2015-01-01

    A cloud workflow system is a kind of platform service based on cloud computing; it facilitates the automation of workflow applications. Among the factors distinguishing cloud workflow systems from their counterparts, the market-oriented business model is one of the most prominent. The optimization of task-level scheduling in cloud workflow systems is a hot topic. As the scheduling problem is NP-hard, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, they suffer from premature convergence during optimization and therefore cannot effectively reduce the cost. To solve these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The chaotic sequence, with its high randomness, improves the diversity of solutions, and its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated cost and lets the scheduling avoid premature convergence by properly balancing global and local exploration. Experimental simulation shows that the cost obtained by our scheduling is always lower than that of the two representative counterparts.

  12. A Chaotic Particle Swarm Optimization-Based Heuristic for Market-Oriented Task-Level Scheduling in Cloud Workflow Systems

    Directory of Open Access Journals (Sweden)

    Xuejun Li

    2015-01-01

    Full Text Available A cloud workflow system is a kind of platform service based on cloud computing; it facilitates the automation of workflow applications. Among the factors distinguishing cloud workflow systems from their counterparts, the market-oriented business model is one of the most prominent. The optimization of task-level scheduling in cloud workflow systems is a hot topic. As the scheduling problem is NP-hard, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, they suffer from premature convergence during optimization and therefore cannot effectively reduce the cost. To solve these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The chaotic sequence, with its high randomness, improves the diversity of solutions, and its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated cost and lets the scheduling avoid premature convergence by properly balancing global and local exploration. Experimental simulation shows that the cost obtained by our scheduling is always lower than that of the two representative counterparts.
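The two CPSO ingredients the abstract names can be sketched in isolation, under our own assumptions (the logistic map is a common choice for the chaotic sequence, and the inertia-weight rule below is an invented illustration, not the paper's exact formula):

```python
# Sketch of CPSO building blocks: a chaotic sequence and an adaptive
# inertia weight. Parameter choices are illustrative assumptions.

def logistic_map(x0, n, r=4.0):
    """Generate n chaotic values in (0, 1] via the logistic map."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def adaptive_inertia(cost, avg_cost, w_min=0.4, w_max=0.9):
    """Lower inertia (finer local search) for particles cheaper than the
    swarm average; full inertia (more exploration) otherwise."""
    if cost <= avg_cost:
        return w_min + (w_max - w_min) * cost / avg_cost
    return w_max

seq = logistic_map(0.376, 5)  # chaotic perturbations for particle updates
```

In a full CPSO, such chaotic values would perturb velocities or reinitialize stagnant particles, while the adaptive weight steers each particle between global and local search.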

  13. Decentralized Job Scheduling in the Cloud Based on a Spatially Generalized Prisoner’s Dilemma Game

    Directory of Open Access Journals (Sweden)

    Gąsior Jakub

    2015-12-01

    Full Text Available We present in this paper a novel distributed solution to a security-aware job scheduling problem in cloud computing infrastructures. We assume that the assignment of the available resources is governed exclusively by specialized brokers assigned to the individual users submitting their jobs to the system. The goal of this scheme is to allocate a limited quantity of resources to a specific number of jobs, minimizing their execution failure probability and total completion time. Our approach is based on the Pareto dominance relationship and is implemented at the individual user level. To select the best scheduling strategies from the resulting Pareto frontiers and construct a global scheduling solution, we developed a decision-making mechanism based on the game-theoretic model of the Spatial Prisoner's Dilemma, realized by selfish agents operating in a two-dimensional cellular automata space. Their behavior is conditioned by the objectives of the various entities involved in the scheduling process and driven towards a Nash equilibrium solution by the employed social welfare criteria. The performance of the applied scheduler is verified by a number of numerical experiments. The results show the effectiveness and scalability of the scheme in the presence of a large number of jobs and resources involved in the scheduling process.
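The Pareto dominance relation over the two objectives the abstract minimizes (execution failure probability, total completion time) can be written out directly; the candidate values below are invented for illustration.

```python
# Hypothetical illustration of Pareto dominance over scheduling strategies,
# each represented as (failure_probability, completion_time); both minimized.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_frontier(strategies):
    """Strategies not dominated by any other strategy."""
    return [s for s in strategies
            if not any(dominates(o, s) for o in strategies if o != s)]

candidates = [(0.10, 40.0), (0.05, 55.0), (0.12, 60.0), (0.05, 50.0)]
frontier = pareto_frontier(candidates)  # -> [(0.10, 40.0), (0.05, 50.0)]
```

The paper's game-theoretic step then selects one strategy per user from such frontiers; that mechanism is not reproduced here.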

  14. Workflow Scheduling Using Hybrid GA-PSO Algorithm in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Ahmad M. Manasrah

    2018-01-01

    Full Text Available Cloud computing environments provide several on-demand services and resource sharing for clients. Business processes are managed over the cloud using workflow technology, where the dependencies between tasks make efficient resource use a challenge. In this paper, a Hybrid GA-PSO algorithm is proposed to allocate tasks to the resources efficiently. The Hybrid GA-PSO algorithm aims to reduce the makespan and the cost and to balance the load of the dependent tasks over the heterogeneous resources in cloud computing environments. The experimental results show that the GA-PSO algorithm decreases the total execution time of the workflow tasks in comparison with the GA, PSO, HSGA, WSGA, and MTCT algorithms. Furthermore, it reduces the execution cost. In addition, it improves the load balancing of the workflow application over the available resources. Finally, the obtained results also show that the proposed algorithm converges to optimal solutions faster and with higher quality compared to the other algorithms.

  15. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

    Energy Technology Data Exchange (ETDEWEB)

    Messer, Bronson [ORNL; Sewell, Christopher [Los Alamos National Laboratory (LANL); Heitmann, Katrin [ORNL; Finkel, Dr. Hal J [Argonne National Laboratory (ANL); Fasel, Patricia [Los Alamos National Laboratory (LANL); Zagaris, George [Lawrence Livermore National Laboratory (LLNL); Pope, Adrian [Los Alamos National Laboratory (LANL); Habib, Salman [ORNL; Parete-Koon, Suzanne T [ORNL

    2015-01-01

    Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.

  16. Insightful Workflow For Grid Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dr. Charles Earl

    2008-10-09

    We developed a workflow adaptation and scheduling system for Grid workflows. The system currently interfaces with and uses the Karajan workflow system. We developed machine learning agents that provide the planner/scheduler with the information needed to make decisions about when and how to replan. The Kubrick system restructures workflows at runtime, making it unique among workflow scheduling systems. The existing Kubrick system provides a platform on which to integrate additional quality-of-service constraints and in which to explore the use of an ensemble of scheduling and planning algorithms. This will be the principal thrust of our Phase II work.

  17. A performance study of grid workflow engines

    NARCIS (Netherlands)

    Stratan, C.; Iosup, A.; Epema, D.H.J.

    2008-01-01

    To benefit from grids, scientists require grid workflow engines that automatically manage the execution of inter-related jobs on the grid infrastructure. So far, the workflows community has focused on scheduling algorithms and on interface tools. Thus, while several grid workflow engines have been

  18. Distributed late-binding micro-scheduling and data caching for data-intensive workflows; Microplanificación de asignación tardía y almacenamiento temporal distribuidos para flujos de trabajo intensivos en datos

    Energy Technology Data Exchange (ETDEWEB)

    Delgado Peris, A.

    2015-07-01

    Today's world is flooded with vast amounts of digital information coming from innumerable sources, and it seems clear that this trend will only intensify in the future. Industry, society and, remarkably, science are not indifferent to this fact. On the contrary, they are struggling to get the most out of this data, which means that they need to capture, transfer, store and process it in a timely and efficient manner, using a wide range of computational resources; this task is not always simple. A very representative example of the challenges posed by the management and processing of large quantities of data is that of the Large Hadron Collider experiments, which handle tens of petabytes of physics information every year. Based on the experience of one of these collaborations, we have studied the main issues involved in the management of huge volumes of data and in the completion of sizeable workflows that consume it. In this context, we have developed a general-purpose architecture for the scheduling and execution of workflows with heavy data requirements: the Task Queue. This new system builds on the late-binding overlay model, which has helped experiments successfully overcome the problems associated with the heterogeneity and complexity of large computational grids. Our proposal introduces several enhancements to existing systems. The execution agents of the Task Queue architecture share a Distributed Hash Table (DHT) and perform job matching and assignment cooperatively. In this way, the scalability problems of centralized matching algorithms are avoided and workflow execution times are improved. This scalability makes fine-grained micro-scheduling possible and enables new functionalities, such as the implementation of a distributed data cache on the execution nodes and the integration of data location information in the scheduling decisions...(Author)
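The cooperative DHT-based matching described above can be illustrated, under our own assumptions, with a tiny consistent-hash ring: each execution agent owns an arc of the hash space, so any agent can deterministically find which peer is responsible for a task without a central matcher. None of this code comes from the Task Queue system itself.

```python
# Minimal consistent-hash ring sketch (hypothetical agent/task names).
import hashlib
from bisect import bisect_right

def h(key):
    """Hash a string key onto a 32-bit ring position."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % 2 ** 32

class HashRing:
    def __init__(self, agents):
        # Each agent is placed on the ring at the hash of its name.
        self.ring = sorted((h(a), a) for a in agents)

    def responsible_agent(self, task_id):
        """Agent owning the first ring position at or after the task's hash,
        wrapping around at the end of the ring."""
        keys = [k for k, _ in self.ring]
        i = bisect_right(keys, h(task_id)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["agent-1", "agent-2", "agent-3"])
owner = ring.responsible_agent("task-42")
```

Real DHTs add replication and routing tables; the point here is only that lookup is deterministic and decentralized.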

  19. Constructing Workflows from Script Applications

    Directory of Open Access Journals (Sweden)

    Mikołaj Baranowski

    2012-01-01

    Full Text Available For programming and executing complex applications on grid infrastructures, scientific workflows have been proposed as a convenient high-level alternative to solutions based on general-purpose programming languages, APIs and scripts. GridSpace is a collaborative programming and execution environment that is based on a scripting approach and extends the Ruby language with a high-level API for invoking operations on remote resources. In this paper we describe a tool that converts GridSpace application source code into a workflow representation which, in turn, may be used for scheduling, provenance, or visualization. We describe how we addressed the issues of analyzing Ruby source code, resolving variable and method dependencies, and building the workflow representation. The resulting solutions were evaluated by testing them on complex grid application workflows such as CyberShake, Epigenomics and Montage. The evaluation is enriched by representing typical workflow control-flow patterns.

  20. Integration of services into workflow applications

    CERN Document Server

    Czarnul, Pawel

    2015-01-01

    Describing state-of-the-art solutions in distributed system architectures, Integration of Services into Workflow Applications presents a concise approach to the integration of loosely coupled services into workflow applications. It discusses key challenges related to the integration of distributed systems and proposes solutions, both in terms of theoretical aspects such as models and workflow scheduling algorithms, and technical solutions such as software tools and APIs. The book provides an in-depth look at workflow scheduling and proposes a way to integrate several different types of services

  1. Decentralized scheduling algorithm to improve the rate of production without increasing the stocks of intermediates; Zaiko wo fuyasu kotonaku seisan sokudo wo kojo saseru jiritsu bunsangata sukejuringu arugorizumu

    Energy Technology Data Exchange (ETDEWEB)

    Takeda, Kazuhiro; Shiigi, Daisuke; Tsuge, Yoshifumi; Matsuyama, Hisayoshi [Kyushu University, Fukuoka (Japan). Department of Chemical Engineering

    1999-03-10

    In a factory with a multi-stage production system, push-type scheduling methods can hardly improve the production rate without increasing the stocks of intermediates. For products whose specifications are known before they are ordered, a pull-type scheduling method named the Kanban system has been developed and has succeeded in improving the production rate without increasing the stocks of intermediates. The Kanban system, however, is not applicable to custom-made products whose specifications are not known before they are ordered. In this paper, a 'Virtual Kanban (VK) system' is presented as a pull-type scheduling method that is applicable to custom-made products, and its usefulness is demonstrated by simulation of an application-specific integrated circuit manufacturing facility. (author)
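The pull principle behind Kanban-style scheduling can be sketched in a few lines: a stage may produce into its output buffer only while a free kanban (buffer slot below the cap) exists, so throughput is pulled by downstream demand rather than pushed upstream. This toy is purely illustrative and is not the paper's Virtual Kanban algorithm.

```python
# Toy pull-scheduling tick over per-stage buffers with kanban caps.

def pull_step(buffers, caps):
    """One tick: stage i may produce one item into buffer i only if that
    buffer is below its cap (a free kanban exists). Returns stages fired."""
    produced = []
    for i, (level, cap) in enumerate(zip(buffers, caps)):
        if level < cap:
            buffers[i] = level + 1
            produced.append(i)
    return produced

buffers = [2, 3, 0]   # current intermediate stocks per stage
caps = [3, 3, 2]      # kanban caps bounding each stock
fired = pull_step(buffers, caps)  # stage 1 is blocked: its buffer is full
```

Because every buffer is capped, intermediate stock can never grow beyond the kanban count, which is exactly the property the abstract contrasts with push-type scheduling.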

  2. Efficient Scheduling of Scientific Workflows with Energy Reduction Using Novel Discrete Particle Swarm Optimization and Dynamic Voltage Scaling for Computational Grids

    Directory of Open Access Journals (Sweden)

    M. Christobel

    2015-01-01

    Full Text Available One of the most significant parameters in real-world computing environments is energy. Minimizing energy brings benefits such as reduced power consumption, lower cooling rates of the computing processors, a greener environment, and so forth. In fact, computation time and energy are directly proportional to each other, and minimizing computation time may yield cost-effective energy consumption. Proficient scheduling of Bag-of-Tasks applications in the grid environment results in minimum computation time. In this paper, a novel discrete particle swarm optimization (DPSO) algorithm based on the particle's best position (pbDPSO) and global best position (gbDPSO) is adopted to find the global optimal solution for higher dimensions. This novel DPSO yields better schedules with minimum computation time compared to the Earliest Deadline First (EDF) and First Come First Serve (FCFS) algorithms, which comparably reduces energy. Other scheduling parameters, such as job completion ratio and lateness, are also calculated and compared with EDF and FCFS. An energy improvement of up to 28% was obtained when Makespan Conservative Energy Reduction (MCER) and Dynamic Voltage Scaling (DVS) were used in the proposed DPSO algorithm.

  3. Efficient Scheduling of Scientific Workflows with Energy Reduction Using Novel Discrete Particle Swarm Optimization and Dynamic Voltage Scaling for Computational Grids

    Science.gov (United States)

    Christobel, M.; Tamil Selvi, S.; Benedict, Shajulin

    2015-01-01

    One of the most significant parameters in real-world computing environments is energy. Minimizing energy brings benefits such as reduced power consumption, lower cooling rates of the computing processors, a greener environment, and so forth. In fact, computation time and energy are directly proportional to each other, and minimizing computation time may yield cost-effective energy consumption. Proficient scheduling of Bag-of-Tasks applications in the grid environment results in minimum computation time. In this paper, a novel discrete particle swarm optimization (DPSO) algorithm based on the particle's best position (pbDPSO) and global best position (gbDPSO) is adopted to find the global optimal solution for higher dimensions. This novel DPSO yields better schedules with minimum computation time compared to the Earliest Deadline First (EDF) and First Come First Serve (FCFS) algorithms, which comparably reduces energy. Other scheduling parameters, such as job completion ratio and lateness, are also calculated and compared with EDF and FCFS. An energy improvement of up to 28% was obtained when Makespan Conservative Energy Reduction (MCER) and Dynamic Voltage Scaling (DVS) were used in the proposed DPSO algorithm. PMID:26075296
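Why DVS saves energy can be shown with a back-of-the-envelope model: dynamic CMOS power scales roughly with f * V^2, so stretching a task into its schedule slack at proportionally lower voltage and frequency cuts energy quadratically. The numbers and the linear V-f relation below are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope DVS energy model (illustrative assumptions only).

def dynamic_energy(time, freq, volt, c=1.0):
    """Energy = P * t with dynamic power P ~ C * f * V^2."""
    return c * freq * volt ** 2 * time

def dvs_energy_saving(exec_time, deadline, f_max=1.0, v_max=1.0):
    """Fractional energy saved by running at the slowest speed that still
    meets the deadline, assuming voltage scales linearly with frequency."""
    scale = exec_time / deadline  # required fraction of full speed
    e_full = dynamic_energy(exec_time, f_max, v_max)
    e_dvs = dynamic_energy(exec_time / scale, f_max * scale, v_max * scale)
    return 1.0 - e_dvs / e_full

# A task needing 6 h at full speed with a 10 h deadline saves ~64% energy.
saving = dvs_energy_saving(exec_time=6.0, deadline=10.0)
```

Under this model the ratio e_dvs/e_full equals scale^2, which is why exploiting slack (as MCER does for makespan-conserving slots) pays off so strongly.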

  4. RABIX: AN OPEN-SOURCE WORKFLOW EXECUTOR SUPPORTING RECOMPUTABILITY AND INTEROPERABILITY OF WORKFLOW DESCRIPTIONS.

    Science.gov (United States)

    Kaushik, Gaurav; Ivkovic, Sinisa; Simonovic, Janko; Tijanic, Nebojsa; Davis-Dusenbery, Brandi; Kural, Deniz

    2017-01-01

    As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn from large volumes of data reliably. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language, an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optimizations to computation and job scheduling, and easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions.

  5. Decentralized Software Architecture

    National Research Council Canada - National Science Library

    Khare, Rohit

    2002-01-01

    .... While the term "decentralization" is familiar from political and economic contexts, it has been applied extensively, if indiscriminately, to describe recent trends in software architecture towards...

  6. Decentralization in Air Transportation

    NARCIS (Netherlands)

    Udluft, H.

    2017-01-01

    In this work, we demonstrate that decentralized control can result in stable, efficient, and robust operations in the Air Transportation System. We implement decentralized control for aircraft taxiing operations and use Agent-Based Modeling and Simulation to analyze the resulting system behavior.

  7. Integrating prediction, provenance, and optimization into high energy workflows

    Energy Technology Data Exchange (ETDEWEB)

    Schram, M.; Bansal, V.; Friese, R. D.; Tallent, N. R.; Yin, J.; Barker, K. J.; Stephan, E.; Halappanavar, M.; Kerbyson, D. J.

    2017-10-01

    We propose a novel approach for efficient execution of workflows on distributed resources. The key components of this framework include: performance modeling to quantitatively predict workflow component behavior; optimization-based scheduling such as choosing an optimal subset of resources to meet demand and assignment of tasks to resources; distributed I/O optimizations such as prefetching; and provenance methods for collecting performance data. In preliminary results, these techniques improve throughput on a small Belle II workflow by 20%.

  8. Organizational decentralization in radiology.

    Science.gov (United States)

    Aas, I H Monrad

    2006-01-01

    At present, most hospitals have a department of radiology where images are captured and interpreted. Decentralization is the opposite of centralization and means 'away from the centre'. With a Picture Archiving and Communication System (PACS) and broadband communications, transmitting radiology images between sites will be far easier than before. Qualitative interviews of 26 resource persons were performed in Norway. There was a response rate of 90%. Decentralization of radiology interpretations seems less relevant than centralization, but several forms of decentralization have a role to play. The respondents mentioned several advantages, including exploitation of capacity and competence. They also mentioned several disadvantages, including splitting professional communities and reduced contact between radiologists and clinicians. With the new technology decentralization and centralization of image interpretation are important possibilities in organizational change. This will be important for the future of teleradiology.

  9. Decentralization: Another Perspective

    Science.gov (United States)

    Chapman, Robin

    1973-01-01

    This paper attempts to pursue the centralization-decentralization dilemma. A setting for this discussion is provided by noting some of the uses of terminology, followed by a consideration of inherent difficulties in conceptualizing. (Author)

  10. Decentralized portfolio management

    OpenAIRE

    Coutinho, Paulo; Tabak, Benjamin Miranda

    2003-01-01

    We use a mean-variance model to analyze the problem of decentralized portfolio management. We find the solution for the optimal portfolio allocation for a head trader operating in n different markets, which is called the optimal centralized portfolio. However, as there are many traders specialized in different markets, the solution to the problem of optimal decentralized allocation should be different from the centralized case. In this paper we derive conditions for the solutions to be equiva...

  11. Workflow in Almaraz NPP

    International Nuclear Information System (INIS)

    Gonzalez Crego, E.; Martin Lopez-Suevos, C.

    2000-01-01

    Almaraz NPP decided to incorporate Workflow into its information system in response to the need to provide exhaustive follow-up and monitoring of each phase of the different procedures it manages. Oracle's Workflow was chosen for this purpose and it was integrated with previously developed applications. The objectives to be met in the incorporation of Workflow were as follows: Strict monitoring of procedures and processes. Detection of bottlenecks in the flow of information. Notification of those affected by pending tasks. Flexible allocation of tasks to user groups. Improved monitoring of management procedures. Improved communication. Similarly, special care was taken to: Integrate workflow processes with existing control panels. Synchronize workflow with installation procedures. Ensure that the system reflects use of paper forms. At present the Corrective Maintenance Request module is being operated using Workflow and the Work Orders and Notice of Order modules are about to follow suit. (Author)

  12. Decentralized Services Orchestration Using Intelligent Mobile Agents with Deadline Restrictions

    OpenAIRE

    Magalhães , Alex; Lung , Lau Cheuk; Rech , Luciana

    2010-01-01

    International audience; The necessity for better performance drives service orchestration towards decentralization. In a recent approach, the integrator - which traditionally centralizes all corporate services and business logic - remains a repository of interface services but no longer knows all the business logic and business workflows. Several techniques follow this approach, including hybrid solutions, peer-to-peer solutions and trigger-based mechanisms. A ...

  13. Agreement Workflow Tool (AWT)

    Data.gov (United States)

    Social Security Administration — The Agreement Workflow Tool (AWT) is a role-based Intranet application used for processing SSA's Reimbursable Agreements according to SSA's standards. AWT provides...

  14. Comparison of Resource Platform Selection Approaches for Scientific Workflows

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Ramakrishnan, Lavanya

    2010-03-05

    Cloud computing is increasingly considered as an additional computational resource platform for scientific workflows. The cloud offers opportunity to scale-out applications from desktops and local cluster resources. At the same time, it can eliminate the challenges of restricted software environments and queue delays in shared high performance computing environments. Choosing from these diverse resource platforms for a workflow execution poses a challenge for many scientists. Scientists are often faced with deciding resource platform selection trade-offs with limited information on the actual workflows. While many workflow planning methods have explored task scheduling onto different resources, these methods often require fine-scale characterization of the workflow that is onerous for a scientist. In this position paper, we describe our early exploratory work into using blackbox characteristics to do a cost-benefit analysis of using cloud platforms. We use only very limited high-level information on the workflow length, width, and data sizes. The length and width are indicative of the workflow duration and parallelism. The data size characterizes the IO requirements. We compare the effectiveness of this approach to other resource selection models using two exemplar scientific workflows scheduled on desktops, local clusters, HPC centers, and clouds. Early results suggest that the blackbox model often makes the same resource selections as a more fine-grained whitebox model. We believe the simplicity of the blackbox model can help inform a scientist on the applicability of cloud computing resources even before porting an existing workflow.
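A blackbox comparison in the spirit of the abstract can be sketched by scoring each platform from only the three workflow traits it names (length, width, data size). The rate constants and cost formulas below are invented for illustration and are not the paper's model.

```python
# Hypothetical blackbox cost-benefit sketch: runtime and dollar cost of a
# workflow from (length, width, data size) plus coarse platform rates.

def estimate(platform, length, width, data_gb):
    """Return (runtime_hours, dollar_cost) for one workflow run."""
    slots = min(width, platform["max_slots"])  # usable parallelism
    runtime = (length * platform["task_hours"] * (width / slots)
               + data_gb / platform["io_gb_per_hour"])
    cost = runtime * slots * platform["usd_per_slot_hour"]
    return runtime, cost

cloud = {"max_slots": 64, "task_hours": 0.5, "io_gb_per_hour": 100.0,
         "usd_per_slot_hour": 0.10}
cluster = {"max_slots": 16, "task_hours": 0.4, "io_gb_per_hour": 400.0,
           "usd_per_slot_hour": 0.0}  # assume an already-owned cluster

cloud_rt, cloud_cost = estimate(cloud, length=10, width=32, data_gb=200)
cluster_rt, cluster_cost = estimate(cluster, length=10, width=32, data_gb=200)
```

Even this crude model surfaces the trade-off the paper studies: the cloud finishes faster here (7.0 h vs 8.5 h) but only the cloud run incurs monetary cost.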

  15. A Prudent Approach to Fair Use Workflow

    Directory of Open Access Journals (Sweden)

    Karey Patterson

    2018-02-01

    Full Text Available This poster outlines a new, highly efficient workflow for the management of copyright materials that is prudent and accommodates generally and legally accepted Fair Use limits. The workflow gives library or copyright staff an easy means to keep on top of their copyright obligations, manage licenses, and review and adjust schedules, while remaining a highly efficient way to cope with large numbers of requests to use materials. The poster details speed and efficiency gains for professors and library staff while reducing legal exposure.

  16. Decentralization and Governance in Indonesia

    NARCIS (Netherlands)

    Holzhacker, Ronald; Wittek, Rafael; Woltjer, Johan

    2016-01-01

    I. Theoretical Reflections on Decentralization and Governance for Sustainable Society 1. Decentralization and Governance for Sustainable Society in Indonesia Ronald Holzhacker, Rafael Wittek and Johan Woltjer 2. Good Governance Contested: Exploring Human Rights and Sustainability as Normative Goals

  17. Data intensive ATLAS workflows in the Cloud

    CERN Document Server

    Rzehorz, Gerhard Ferdinand; The ATLAS collaboration

    2016-01-01

    This contribution reports on the feasibility of executing data-intensive workflows on Cloud infrastructures. In order to assess this, the metric ETC = Events/Time/Cost is formed, which quantifies the different workflow and infrastructure configurations that are tested against each other. In these tests, ATLAS reconstruction jobs are run, examining the effects of overcommitting (more parallel processes running than CPU cores available), scheduling (staggered execution) and scaling (number of cores). The desirability of commissioning storage in the cloud is evaluated, in conjunction with a simple analytical model of the system, and correlated with questions about the network bandwidth, caches and what kind of storage to utilise. In the end, a cost/benefit evaluation of different infrastructure configurations and workflows is undertaken, with the goal of finding the maximum of the ETC value.

  18. Data intensive ATLAS workflows in the Cloud

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00396985; The ATLAS collaboration; Keeble, Oliver; Quadt, Arnulf; Kawamura, Gen

    2017-01-01

    This contribution reports on the feasibility of executing data-intensive workflows on Cloud infrastructures. In order to assess this, the metric ETC = Events/Time/Cost is formed, which quantifies the different workflow and infrastructure configurations that are tested against each other. In these tests, ATLAS reconstruction jobs are run, examining the effects of overcommitting (more parallel processes running than CPU cores available), scheduling (staggered execution) and scaling (number of cores). The desirability of commissioning storage in the Cloud is evaluated, in conjunction with a simple analytical model of the system, and correlated with questions about the network bandwidth, caches and what kind of storage to utilise. In the end, a cost/benefit evaluation of different infrastructure configurations and workflows is undertaken, with the goal of finding the maximum of the ETC value.
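The ETC = Events/Time/Cost metric from the abstract is just events processed per unit time per unit cost, so comparing configurations is a one-liner; the numbers below are made up for illustration.

```python
# ETC metric sketch: higher is better. All figures are hypothetical.

def etc(events, hours, cost_usd):
    """Events processed per hour per dollar."""
    return events / hours / cost_usd

config_a = etc(events=1_000_000, hours=10.0, cost_usd=50.0)  # fewer cores
config_b = etc(events=1_000_000, hours=8.0, cost_usd=80.0)   # more cores

# Pick the configuration maximizing ETC, as the study sets out to do.
best = max(("A", config_a), ("B", config_b), key=lambda kv: kv[1])[0]
```

Here configuration A wins: its 25% longer runtime is more than offset by its 37.5% lower cost, illustrating why the metric couples throughput and spend.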

  19. Decentralized control: An overview

    Czech Academy of Sciences Publication Activity Database

    Bakule, Lubomír

    2008-01-01

    Roč. 32, č. 1 (2008), s. 87-98 ISSN 1367-5788 R&D Projects: GA AV ČR(CZ) IAA200750802; GA MŠk(CZ) LA 282 Institutional research plan: CEZ:AV0Z10750506 Keywords : decentralized control * large-scale systems * decomposition Subject RIV: BC - Control Systems Theory Impact factor: 1.109, year: 2008

  20. Decentralized control and communication

    Czech Academy of Sciences Publication Activity Database

    Bakule, Lubomír; Papík, Martin

    2012-01-01

    Roč. 36, č. 1 (2012), s. 1-10 ISSN 1367-5788 R&D Projects: GA MŠk(CZ) LG12014 Institutional research plan: CEZ:AV0Z10750506 Keywords : decentralization * communication * large-scale complex systems Subject RIV: BC - Control Systems Theory Impact factor: 1.289, year: 2012

  1. The Rhetoric of Decentralization

    Science.gov (United States)

    Ravitch, Diane

    1974-01-01

    Questions the rationale for and possible consequences of political decentralization of New York City. Suggests that the disadvantages--reduced level of professionalism, increased expense in multiple government operation, "stabilization" of residential segregation, necessity for budget negotiations because of public disclosure of tax…

  2. Decentralized Blended Acquisition

    NARCIS (Netherlands)

    Berkhout, A.J.

    2013-01-01

    The concept of blending and deblending is reviewed, making use of traditional and dispersed source arrays. The network concept of distributed blended acquisition is introduced. A million-trace robot system is proposed, illustrating that decentralization may bring about a revolution in the way we

  3. MACROECONOMIC IMPACT OF DECENTRALIZATION

    Directory of Open Access Journals (Sweden)

    Emilia Cornelia STOICA

    2014-05-01

Full Text Available The concept of decentralization has a variety of expressions, but its generally accepted meaning refers to the transfer of authority and responsibility for public functions from central government to sub-national public entities or even to the private sector. The decentralization process is complex, affecting many aspects of social and economic life and of public management, and its design and implementation cover several stages, depending on the cyclical and structural developments of the country. From an economic perspective, decentralization is seen as a means of primary importance for improving the effectiveness and efficiency of public services and for macroeconomic stability, due to the redistribution of public finances in a logic much closer to the objectives of government policy. But the decentralization process also carries some risks, because it involves implementing appropriate mechanisms for programming income and expenditure at the subnational level, which, if not aligned with macroeconomic policy imperatives, can lead to major imbalances, both financially and in terms of economic and social life. Equally, ensuring budget balance at the local level is an imperative, and fulfilling it requires a legal framework and specific procedures to size transfers of public funds, targeted or untargeted. Also, public and local authorities have to adopt appropriate laws and regulations so that sub-national public entities can access loans - such as bank loans or debentures on the domestic or external market - under strict monitoring of national financial stability. In all aspects of decentralization - political, administrative, financial - public authorities should develop and implement the most effective mechanisms to coordinate macroeconomic objectives with both sectoral and local interests and establish clear responsibilities - exclusive or shared - for all parties involved in the

  4. Querying Workflow Logs

    Directory of Open Access Journals (Sweden)

    Yan Tang

    2018-01-01

Full Text Available A business process or workflow is an assembly of tasks that accomplishes a business goal. Business process management is the study of the design, configuration/implementation, enactment and monitoring, analysis, and re-design of workflows. The traditional methodology for the re-design and improvement of workflows relies on the well-known sequence of extract, transform, and load (ETL), data/process warehousing, and online analytical processing (OLAP) tools. In this paper, we study the ad hoc querying of process enactments for (data-centric) business processes, bypassing the traditional methodology for more flexibility in querying. We develop an algebraic query language based on “incident patterns” with four operators inspired by the Business Process Model and Notation (BPMN) representation, allowing the user to formulate ad hoc queries directly over workflow logs. A formal semantics of this query language, a preliminary query evaluation algorithm, and a group of elementary properties of the operators are provided.
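The paper's operator algebra is not reproduced here; as a loose, hypothetical illustration of what querying enactments directly over a workflow log can look like, the sketch below selects the cases whose traces contain a given task pattern as an ordered subsequence. The log contents and task names are invented.

```python
# Hypothetical illustration of querying a workflow log directly (not the
# paper's algebra): find process instances whose event traces contain the
# task pattern in order, though not necessarily adjacently.

from collections import defaultdict

def matches(trace, pattern):
    """True if `pattern` occurs as a subsequence of `trace`."""
    it = iter(trace)
    return all(task in it for task in pattern)  # `in` consumes the iterator

log = [  # (case_id, task) events in arrival order
    (1, "receive"), (2, "receive"), (1, "approve"), (1, "ship"),
    (2, "reject"),
]
traces = defaultdict(list)
for case_id, task in log:
    traces[case_id].append(task)

hits = [c for c, t in sorted(traces.items()) if matches(t, ["receive", "ship"])]
```

The iterator trick keeps the subsequence test order-sensitive: once a task is matched, only later events are considered.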

  5. LQCD workflow execution framework: Models, provenance and fault-tolerance

    International Nuclear Information System (INIS)

    Piccoli, Luciano; Simone, James N; Kowalkowlski, James B; Dubey, Abhishek

    2010-01-01

Large computing clusters used for scientific processing suffer from systemic failures when operated over long continuous periods for executing workflows. Diagnosing job problems and faults leading to eventual failures in this complex environment is difficult, especially when the success of an entire workflow might be affected by a single job failure. In this paper, we introduce a model-based, hierarchical, reliable execution framework that encompasses workflow specification, data provenance, execution tracking and online monitoring of each workflow task, also referred to as a participant. The sequence of participants is described in an abstract parameterized view, which is translated into a concrete data-dependency-based sequence of participants with defined arguments. As participants belonging to a workflow are mapped onto machines and executed, periodic and on-demand monitoring of vital health parameters on allocated nodes is enabled according to pre-specified rules. These rules specify conditions that must be true pre-execution, during execution and post-execution. Monitoring information for each participant is propagated upwards through the reflex and healing architecture, which consists of a hierarchical network of decentralized fault management entities called reflex engines. They are instantiated as state machines or timed automata that change state and initiate reflexive mitigation action(s) upon occurrence of certain faults. We describe how this cluster reliability framework is combined with the workflow execution framework using formal rules and actions specified within a structure of first-order predicate logic, enabling a dynamic management design that reduces manual administrative workload and increases cluster productivity.
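As a hedged sketch of the reflex-engine idea (a state machine that changes state and initiates a mitigation action when a fault occurs), the toy below watches a single hypothetical health parameter against one pre-specified rule. The rule, the states and the action string are assumptions for illustration, not the framework's actual design.

```python
# Toy "reflex engine": a two-state machine monitoring one health parameter
# (node temperature) against a pre-specified rule. States, threshold and
# mitigation action are illustrative assumptions.

class ReflexEngine:
    def __init__(self, max_temp_c=85.0):
        self.state = "HEALTHY"
        self.max_temp_c = max_temp_c
        self.actions = []  # mitigation actions initiated so far

    def observe(self, node, temp_c):
        if self.state == "HEALTHY" and temp_c > self.max_temp_c:
            self.state = "MITIGATING"
            self.actions.append(f"reschedule participants away from {node}")
        elif self.state == "MITIGATING" and temp_c <= self.max_temp_c:
            self.state = "HEALTHY"
        return self.state

engine = ReflexEngine()
engine.observe("node07", 72.0)          # within limits: stays HEALTHY
state = engine.observe("node07", 91.5)  # rule violated: fault -> mitigation
```

A real reflex engine would sit in a hierarchy and forward unresolved faults upward; this sketch only shows the local state transition.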

  6. Responsive web design workflow

    OpenAIRE

    LAAK, TIMO

    2013-01-01

    Responsive Web Design Workflow is a literature review about Responsive Web Design, a web standards based modern web design paradigm. The goals of this research were to define what responsive web design is, determine its importance in building modern websites and describe a workflow for responsive web design projects. Responsive web design is a paradigm to create adaptive websites, which respond to the properties of the media that is used to render them. The three key elements of responsi...

  7. Decentralized Portfolio Management

    Directory of Open Access Journals (Sweden)

    Benjamin Miranda Tabak

    2003-12-01

Full Text Available We use a mean-variance model to analyze the problem of decentralized portfolio management. We find the solution for the optimal portfolio allocation for a head trader operating in n different markets, which is called the optimal centralized portfolio. However, as there are many traders specialized in different markets, the solution to the problem of optimal decentralized allocation should be different from the centralized case. In this paper we derive conditions for the solutions to be equivalent. We use multivariate normal returns and a negative exponential utility function to solve the problem analytically. We obtain equivalence of the solutions by assuming that different traders face different interest rates for borrowing and lending. This interest rate depends on the ratio of the degrees of risk aversion of the trader and the head trader, on the excess return, and on the correlation between asset returns.
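Under the stated assumptions (negative exponential utility, multivariate normal returns), the textbook centralized solution is w* = (1/a) * Sigma^{-1} (mu - r), with risk aversion a, expected returns mu, risk-free rate r and covariance Sigma. The sketch below evaluates it for two illustrative markets; all numbers are invented, and the paper's decentralized equivalence conditions are not reproduced.

```python
# Centralized mean-variance solution under CARA utility and normal returns:
# w* = (1/a) * Sigma^{-1} (mu - r). All parameter values are illustrative.

def solve_2x2(S, b):
    """Solve S w = b for a 2x2 matrix S by Cramer's rule."""
    (s11, s12), (s21, s22) = S
    det = s11 * s22 - s12 * s21
    return [( s22 * b[0] - s12 * b[1]) / det,
            (-s21 * b[0] + s11 * b[1]) / det]

a = 2.0                      # absolute risk aversion of the head trader
mu = [0.08, 0.05]            # expected returns in the two markets
r = 0.02                     # risk-free borrowing/lending rate
Sigma = [[0.04, 0.01],
         [0.01, 0.02]]       # covariance of returns

excess = [m - r for m in mu]                   # mu - r
w = [x / a for x in solve_2x2(Sigma, excess)]  # (1/a) * Sigma^{-1} (mu - r)
```

The decentralized setting of the paper then asks when per-market traders, each solving a smaller problem at their own borrowing/lending rate, reproduce these same weights.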

  8. Decentralization in Ethiopia

    OpenAIRE

    Gemechu, Mulugeta Debebbe

    2012-01-01

    Ethiopia officially launched the District Level Decentralization Program (DLDP) by the year 2002. The program flagged core objectives such as institutionalizing viable development centers at local levels, deepening devolution of power, enhancing the democratization process through broad-based participatory strategy, promoting good governance and improving service delivery. Since the inception of this program two strategic planning terms (one strategic term is five years) have already elapsed ...

  9. Centralized vs decentralized contests

    OpenAIRE

    Beviá, Carmen; Corchón, Luis C.

    2015-01-01

    We compare two contests, decentralized in which there are several independent contests with non overlapping contestants and centralized in which all contestants fight for a unique prize which is the sum of all prizes in the small contests. We study the relationship between payoffs and efforts between these two contests. The first author acknowledges financial support from ECO2008-04756 (Grupo Consolidado-C), SGR2014-515 and PROMETEO/2013/037. The second author acknowledges financial suppor...

  10. Policy Implementation Decentralization Government in Indonesia

    Directory of Open Access Journals (Sweden)

    Kardin M. Simanjuntak

    2015-06-01

Full Text Available Decentralization in Indonesia is a reform that remains incomplete, and its implementation to date has not been successful. The essence of decentralization is internalising costs and benefits for the people and bringing government closer to the people; that is its most important essence. However, the implementation of decentralization in Indonesia is still far from expectations. It is shown that decentralization benefits only elites and local authorities, that decentralization is a neo-liberal octopus, that decentralized public services are lacking in character, that decentralization proceeds without institutional efficiency, that decentralization fosters corruption in the regions, and that fiscal decentralization is only quasi-decentralization.

  11. Decentral Smart Grid Control

    Science.gov (United States)

    Schäfer, Benjamin; Matthiae, Moritz; Timme, Marc; Witthaut, Dirk

    2015-01-01

Stable operation of complex flow and transportation networks requires balanced supply and demand. For the operation of electric power grids—due to their increasing fraction of renewable energy sources—a pressing challenge is to fit the fluctuations in decentralized supply to the distributed and temporally varying demands. To achieve this goal, common smart grid concepts suggest collecting consumer demand data, evaluating them centrally given current supply and sending price information back to customers so they can decide about usage. Besides restrictions regarding cyber security, privacy protection and large required investments, it remains unclear how such central smart grid options guarantee overall stability. Here we propose a Decentral Smart Grid Control, where the price is directly linked to the local grid frequency at each customer. The grid frequency provides all necessary information about the current power balance such that it is sufficient to match supply and demand without the need for a centralized IT infrastructure. We analyze the performance and the dynamical stability of the power grid with such a control system. Our results suggest that the proposed Decentral Smart Grid Control is feasible independent of effective measurement delays, if frequencies are averaged over sufficiently large time intervals.
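A toy rendering of the pricing idea above: the local electricity price is tied to the deviation of the grid frequency from nominal, averaged over a time window. The linear form of the rule and all constants are assumptions for illustration, not the paper's control law.

```python
# Toy decentral pricing rule: price follows the window-averaged deviation of
# the locally measured grid frequency from nominal. Constants are invented.

F_NOMINAL = 50.0  # Hz (European grid)

def local_price(freq_samples, base_price=0.30, sensitivity=0.5):
    """Price rises when the averaged frequency sags below nominal (scarce
    supply) and falls when it rises above nominal (excess supply)."""
    avg = sum(freq_samples) / len(freq_samples)
    return base_price + sensitivity * (F_NOMINAL - avg)

# Undersupply: frequency below 50 Hz, price goes up.
p_scarce = local_price([49.90, 49.95, 49.92])
# Oversupply: frequency above 50 Hz, price goes down.
p_excess = local_price([50.06, 50.10, 50.08])
```

Averaging over the sample window is the point the abstract stresses: with a long enough window, each customer can price on frequency alone, with no central IT infrastructure.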

  12. Decentral Smart Grid Control

    International Nuclear Information System (INIS)

    Schäfer, Benjamin; Matthiae, Moritz; Timme, Marc; Witthaut, Dirk

    2015-01-01

Stable operation of complex flow and transportation networks requires balanced supply and demand. For the operation of electric power grids—due to their increasing fraction of renewable energy sources—a pressing challenge is to fit the fluctuations in decentralized supply to the distributed and temporally varying demands. To achieve this goal, common smart grid concepts suggest collecting consumer demand data, evaluating them centrally given current supply and sending price information back to customers so they can decide about usage. Besides restrictions regarding cyber security, privacy protection and large required investments, it remains unclear how such central smart grid options guarantee overall stability. Here we propose a Decentral Smart Grid Control, where the price is directly linked to the local grid frequency at each customer. The grid frequency provides all necessary information about the current power balance such that it is sufficient to match supply and demand without the need for a centralized IT infrastructure. We analyze the performance and the dynamical stability of the power grid with such a control system. Our results suggest that the proposed Decentral Smart Grid Control is feasible independent of effective measurement delays, if frequencies are averaged over sufficiently large time intervals. (paper)

  13. Decentralized control of complex systems

    CERN Document Server

    Siljak, Dragoslav D

    2011-01-01

    Complex systems require fast control action in response to local input, and perturbations dictate the use of decentralized information and control structures. This much-cited reference book explores the approaches to synthesizing control laws under decentralized information structure constraints.Starting with a graph-theoretic framework for structural modeling of complex systems, the text presents results related to robust stabilization via decentralized state feedback. Subsequent chapters explore optimization, output feedback, the manipulative power of graphs, overlapping decompositions and t

  14. Decentralized central heating

    Energy Technology Data Exchange (ETDEWEB)

    Savic, S.; Hudjera, A.

    1994-08-04

Decentralized central heating is essentially based on new technical solutions for an independent heating unit, which allow up to 20% savings in collectible energy and up to 15% savings in built-in material. These savings are made possible by the fact that the elements described under point A are eliminated from classical heating. The elements thus made superfluous are replaced by the new technical solutions described under point B - technical problem - and point E - patent claim. The technical solutions described in detail under point B and point E together form a technical unit and are essential parts of the invention protected by the patent. (author)

  15. Decentring the Creative Self

    DEFF Research Database (Denmark)

    Glaveanu, Vlad Petre; Lubart, Todd

    2014-01-01

    to themes depicting the interaction between these different others and the creator. Findings reveal both similarities and differences across the five domains in terms of the specific contribution of others to the creative process. Social interactions play a key formative, regulatory, motivational...... and informational role in relation to creative work. From ‘internalized’ to ‘distant’, other people are an integral part of the equation of creativity calling for a de-centring of the creative self and its re-centring in a social space of actions and interactions....

  16. Workflow management: an overview

    NARCIS (Netherlands)

    Ouyang, C.; Adams, M.; Wynn, M.T.; Hofstede, ter A.H.M.; Brocke, vom J.; Rosemann, M.

    2010-01-01

Workflow management has its origin in the office automation systems of the seventies, but it is not until fairly recently that conceptual and technological breakthroughs have led to its widespread adoption. In fact, nowadays, process-awareness has become an accepted and integral part of various types

  17. On Decentralization and Life Satisfaction

    DEFF Research Database (Denmark)

    Bjørnskov, Christian; Dreher, Axel; Fischer, Justina A.V.

    2008-01-01

    We empirically analyze the impact of fiscal and political decentralization on subjective well-being in a cross-section of 60,000 individuals from 66 countries. More spending or revenue decentralization raises well-being while greater local autonomy is beneficial only via government consumption sp...

  18. A contingency approach to decentralization

    NARCIS (Netherlands)

    Fleurke, F.; Hulst, J.R.

    2006-01-01

    After decades of centralization, in 1980 the central government of the Netherlands embarked upon an ambitious project to decentralize the administrative system. It proclaimed a series of general decentralization measures that aimed to improve the performance of the administrative system and to boost

  19. Ferret Workflow Anomaly Detection System

    National Research Council Canada - National Science Library

    Smith, Timothy J; Bryant, Stephany

    2005-01-01

    The Ferret workflow anomaly detection system project 2003-2004 has provided validation and anomaly detection in accredited workflows in secure knowledge management systems through the use of continuous, automated audits...

  20. Decentralizing the Team Station: Simulation before Reality as a Best-Practice Approach.

    Science.gov (United States)

    Charko, Jackie; Geertsen, Alice; O'Brien, Patrick; Rouse, Wendy; Shahid, Ammarah; Hardenne, Denise

    2016-01-01

    The purpose of this article is to share the logistical planning requirements and simulation experience of one Canadian hospital as it prepared its staff for the change from a centralized inpatient unit model to the decentralized design planned for its new community hospital. With the commitment and support of senior leadership, project management resources and clinical leads worked collaboratively to design a decentralized prototype in the form of a pod-style environment in the hospital's current setting. Critical success factors included engaging the right stakeholders, providing an opportunity to test new workflows and technology, creating a strong communication plan and building on lessons learned as subsequent pod prototypes are launched.

  1. Exploring Dental Providers' Workflow in an Electronic Dental Record Environment.

    Science.gov (United States)

    Schwei, Kelsey M; Cooper, Ryan; Mahnke, Andrea N; Ye, Zhan; Acharya, Amit

    2016-01-01

A workflow is defined as a predefined set of work steps and a partial ordering of these steps in any environment to achieve the expected outcome. Few studies have investigated the workflow of providers in a dental office. It is important to understand the interaction of dental providers with the existing technologies at the point of care to assess breakdowns in the workflow, which could contribute to better technology designs. The study objective was to assess electronic dental record (EDR) workflows using time and motion methodology in order to identify breakdowns and opportunities for process improvement. A time and motion methodology was used to study the human-computer interaction and workflow of dental providers with an EDR in four dental centers at a large healthcare organization. A data collection tool was developed to capture the workflow of dental providers and staff while they interacted with an EDR during initial, planned, and emergency patient visits, and at the front desk. Qualitative and quantitative analysis was conducted on the observational data. Breakdowns in workflow were identified while posting charges, viewing radiographs, e-prescribing, and interacting with the patient scheduler. EDR interaction time was significantly different between dentists and dental assistants (6:20 min vs. 10:57 min, p = 0.013) and between dentists and dental hygienists (6:20 min vs. 9:36 min, p = 0.003). On average, a dentist spent far less time than dental assistants and dental hygienists in data recording within the EDR.
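The interaction times above are reported as mm:ss; a small helper makes the comparisons concrete (the p-values are the study's own and are not recomputed here).

```python
# Convert the reported mm:ss interaction times to seconds for comparison.

def to_seconds(mmss: str) -> int:
    m, s = mmss.split(":")
    return int(m) * 60 + int(s)

dentist   = to_seconds("6:20")   # 380 s
assistant = to_seconds("10:57")  # 657 s
hygienist = to_seconds("9:36")   # 576 s
gap = assistant - dentist        # assistants spent 277 s more per interaction
```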

  2. Coalition or decentralization

    DEFF Research Database (Denmark)

    Mahdiraji, Hannan Amoozad; Govindan, Kannan; Zavadskas, Edmundas Kazimieras

    2014-01-01

    retailers. The Nash equilibrium and definition are used bearing in mind inventory and pricing and marketing cost as decision variables for this matter. This paper studies a three-echelon supply chain network and focuses on the value of integrating a pair of partners in the chain. In the decentralized case......, the supplier sets its own price, the manufacturer points out order quantity, wholesale price and backorder quantity, and the retailer charges the final retail price of the product and marketing product. Though there are multiple players at a single echelon level, each manufacturer supplies only a specific...... to enforce marketing effort any more. Supplier-manufacturer integration brings similar benefits. Under each scenario, all parties involved simultaneously set their strategies. Through a numerical experiment, 17 design cases (through designing experiments) have been developed and the total profit...

  3. Workflow User Interfaces Patterns

    Directory of Open Access Journals (Sweden)

    Jean Vanderdonckt

    2012-03-01

Full Text Available A collection of user interface design patterns for workflow information systems is presented that contains forty-three resource patterns classified in seven categories. These categories and their corresponding patterns have been logically identified from the task life cycle based on offering and allocation operations. Each Workflow User Interface Pattern (WUIP) is characterized by properties expressed in the PLML markup language for expressing patterns and augmented by additional attributes and models attached to the pattern: the abstract user interface and the corresponding task model. These models are specified in a User Interface Description Language. All WUIPs are stored in a library and can be retrieved within a workflow editor that links each workflow pattern to its corresponding WUIP, thus giving rise to a user interface for each workflow pattern.

  4. Decentralized Quasi-Newton Methods

    Science.gov (United States)

    Eisen, Mark; Mokhtari, Aryan; Ribeiro, Alejandro

    2017-05-01

    We introduce the decentralized Broyden-Fletcher-Goldfarb-Shanno (D-BFGS) method as a variation of the BFGS quasi-Newton method for solving decentralized optimization problems. The D-BFGS method is of interest in problems that are not well conditioned, making first order decentralized methods ineffective, and in which second order information is not readily available, making second order decentralized methods impossible. D-BFGS is a fully distributed algorithm in which nodes approximate curvature information of themselves and their neighbors through the satisfaction of a secant condition. We additionally provide a formulation of the algorithm in asynchronous settings. Convergence of D-BFGS is established formally in both the synchronous and asynchronous settings and strong performance advantages relative to first order methods are shown numerically.
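In D-BFGS, nodes approximate curvature by enforcing a secant condition locally; as background, the sketch below performs one classical (centralized) BFGS inverse-Hessian update in two variables and checks the secant condition H·y = s explicitly. The quadratic test function and the step taken are illustrative choices, not the paper's decentralized algorithm.

```python
# One classical BFGS inverse-Hessian update:
#   H+ = (I - rho*s*y^T) H (I - rho*y*s^T) + rho*s*s^T,  rho = 1/(y^T s),
# which satisfies the secant condition H+ y = s exactly.

def mat_vec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def outer(u, v):
    return [[u[0]*v[0], u[0]*v[1]], [u[1]*v[0], u[1]*v[1]]]

def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B, sa=1.0, sb=1.0):
    return [[sa*A[i][j] + sb*B[i][j] for j in range(2)] for i in range(2)]

def bfgs_update(H, s, y):
    """BFGS update of the inverse Hessian approximation H."""
    rho = 1.0 / (y[0]*s[0] + y[1]*s[1])
    I = [[1.0, 0.0], [0.0, 1.0]]
    left  = mat_add(I, outer(s, y), 1.0, -rho)   # I - rho*s*y^T
    right = mat_add(I, outer(y, s), 1.0, -rho)   # I - rho*y*s^T
    return mat_add(mat_mul(mat_mul(left, H), right), outer(s, s), 1.0, rho)

# One step on f(x) = x0^2 + 2*x1^2, gradient g(x) = (2*x0, 4*x1).
x_old, x_new = [1.0, 1.0], [0.5, 0.2]
s = [x_new[0] - x_old[0], x_new[1] - x_old[1]]          # step
y = [2*x_new[0] - 2*x_old[0], 4*x_new[1] - 4*x_old[1]]  # gradient change
H = bfgs_update([[1.0, 0.0], [0.0, 1.0]], s, y)
Hy = mat_vec(H, y)   # secant condition: Hy should equal s
```

D-BFGS distributes this idea: each node builds such a curvature approximation for itself and its neighbors from locally exchanged (s, y) information.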

  5. Decentralized Bribery and Market Participation

    OpenAIRE

    Popov, Sergey V.

    2012-01-01

    I propose a bribery model that examines decentralized bureaucratic decision-making. There are multiple stable equilibria. High levels of bribery reduce an economy's productivity because corruption suppresses small business, and reduces the total graft, even though the size of an individual bribe might increase. Decentralization prevents movement towards a Pareto-dominant equilibrium. Anticorruption efforts, even temporary ones, might be useful to improve participation, if they lower the bribe...

  6. Game-Based Virtual Worlds as Decentralized Virtual Activity Systems

    Science.gov (United States)

    Scacchi, Walt

There is widespread interest in the development and use of decentralized systems and virtual world environments as possible new places for engaging in collaborative work activities. Similarly, there is widespread interest in stimulating new technological innovations that enable people to come together through social networking, file/media sharing, and networked multi-player computer game play. A decentralized virtual activity system (DVAS) is a networked computer supported work/play system whose elements and social activities can be both virtual and decentralized (Scacchi et al. 2008b). Massively multi-player online games (MMOGs) such as World of Warcraft and online virtual worlds such as Second Life are each popular examples of a DVAS. Furthermore, these systems are beginning to be used for research, development, and education activities in different science, technology, and engineering domains (Bainbridge 2007, Bohannon et al. 2009; Rieber 2005; Scacchi and Adams 2007; Shaffer 2006), which are also of interest here. This chapter explores two case studies of DVASs developed at the University of California at Irvine that employ game-based virtual worlds to support collaborative work/play activities in different settings. The settings include those that model and simulate practical or imaginative physical worlds in different domains of science, technology, or engineering through alternative virtual worlds where players/workers engage in different kinds of quests or quest-like workflows (Jakobsson 2006).

  7. SPECT/CT workflow and imaging protocols

    Energy Technology Data Exchange (ETDEWEB)

    Beckers, Catherine [University Hospital of Liege, Division of Nuclear Medicine and Oncological Imaging, Department of Medical Physics, Liege (Belgium); Hustinx, Roland [University Hospital of Liege, Division of Nuclear Medicine and Oncological Imaging, Department of Medical Physics, Liege (Belgium); Domaine Universitaire du Sart Tilman, Service de Medecine Nucleaire et Imagerie Oncologique, CHU de Liege, Liege (Belgium)

    2014-05-15

Introducing a hybrid imaging method such as single photon emission computed tomography (SPECT)/CT greatly alters the routine in the nuclear medicine department. It requires designing new workflow processes and revising the original scheduling processes and imaging protocols. In addition, the imaging protocol should be adapted for each individual patient, so that performing CT is fully justified and the CT procedure is fully tailored to address the clinical issue. Such refinements often occur before the procedure is started but may be required at some intermediate stage of the procedure. Furthermore, SPECT/CT leads in many instances to a new partnership with the radiology department. This article presents practical advice and highlights the key clinical elements which need to be considered to help understand the workflow process of SPECT/CT and optimise imaging protocols. The workflow process using SPECT/CT is complex in particular because of its bimodal character, the large spectrum of stakeholders, the multiplicity of their activities at various time points and the need for real-time decision-making. With help from analytical tools developed for quality assessment, the workflow process using SPECT/CT may be separated into related but independent steps, each with its specific human and material resources to use as inputs or outputs. This helps identify factors that could contribute to failure in routine clinical practice. At each step of the process, practical aspects to optimise imaging procedures and protocols are developed. A decision-making algorithm for justifying each CT indication as well as the appropriateness of each CT protocol is the cornerstone of routine clinical practice using SPECT/CT. In conclusion, implementing hybrid SPECT/CT imaging requires new ways of working. It is highly rewarding from a clinical perspective, but it also proves to be a daily challenge in terms of management. (orig.)

  8. SPECT/CT workflow and imaging protocols

    International Nuclear Information System (INIS)

    Beckers, Catherine; Hustinx, Roland

    2014-01-01

Introducing a hybrid imaging method such as single photon emission computed tomography (SPECT)/CT greatly alters the routine in the nuclear medicine department. It requires designing new workflow processes and revising the original scheduling processes and imaging protocols. In addition, the imaging protocol should be adapted for each individual patient, so that performing CT is fully justified and the CT procedure is fully tailored to address the clinical issue. Such refinements often occur before the procedure is started but may be required at some intermediate stage of the procedure. Furthermore, SPECT/CT leads in many instances to a new partnership with the radiology department. This article presents practical advice and highlights the key clinical elements which need to be considered to help understand the workflow process of SPECT/CT and optimise imaging protocols. The workflow process using SPECT/CT is complex in particular because of its bimodal character, the large spectrum of stakeholders, the multiplicity of their activities at various time points and the need for real-time decision-making. With help from analytical tools developed for quality assessment, the workflow process using SPECT/CT may be separated into related but independent steps, each with its specific human and material resources to use as inputs or outputs. This helps identify factors that could contribute to failure in routine clinical practice. At each step of the process, practical aspects to optimise imaging procedures and protocols are developed. A decision-making algorithm for justifying each CT indication as well as the appropriateness of each CT protocol is the cornerstone of routine clinical practice using SPECT/CT. In conclusion, implementing hybrid SPECT/CT imaging requires new ways of working. It is highly rewarding from a clinical perspective, but it also proves to be a daily challenge in terms of management. (orig.)

  9. Data Workflow - A Workflow Model for Continuous Data Processing

    NARCIS (Netherlands)

    Wombacher, Andreas

    2010-01-01

    Online data or streaming data are getting more and more important for enterprise information systems, e.g. by integrating sensor data and workflows. The continuous flow of data provided e.g. by sensors requires new workflow models addressing the data perspective of these applications, since

  10. Parametric Room Acoustic Workflows

    DEFF Research Database (Denmark)

    Parigi, Dario; Svidt, Kjeld; Molin, Erik

    2017-01-01

The paper investigates and assesses different room acoustics software and the opportunities they offer to engage in parametric acoustics workflows and to influence architectural designs. The first step consists of testing and benchmarking different tools on the basis of accuracy, speed...... and interoperability with Grasshopper 3d. The focus is placed on the benchmarking of three different acoustic analysis tools based on raytracing. To compare the accuracy and speed of the acoustic evaluation across different tools, a homogeneous set of acoustic parameters is chosen. The room acoustics parameters...... included in the set are reverberation time (EDT, RT30), clarity (C50), loudness (G), and definition (D50). Scenarios are discussed for determining at different design stages the most suitable acoustic tool. Those scenarios are characterized by the use of less accurate but fast evaluation tools to be used...

  11. Implementing Oracle Workflow

    CERN Document Server

    Mathieson, D W

    1999-01-01

    CERN (see [CERN]) is the world's largest physics research centre. Currently there are around 5,000 people working at the CERN site located on the border of France and Switzerland near Geneva, along with another 4,000 working remotely at institutes situated all around the globe. CERN is currently working on the construction of its newest scientific instrument, the Large Hadron Collider (LHC); the construction alone of this 27-kilometre particle accelerator will not be completed until 2005. Like many businesses in the current economic climate, CERN is expected to continue growing, yet staff numbers are planned to fall in the coming years. In essence, do more with less. In an environment such as this, it is critical that the administration is as efficient as possible. One of the ways that administrative procedures are streamlined is by the use of an organisation-wide workflow system.

  12. Digital workflows in contemporary orthodontics

    Directory of Open Access Journals (Sweden)

    Lars R Christensen

    2017-01-01

    Digital workflows are now increasingly possible in orthodontic practice. Workflows designed to improve the customization of orthodontic appliances are now available through laboratories and orthodontic manufacturing facilities in many parts of the world. These now have the potential to improve certain aspects of patient care.

  13. Robust Decentralized Formation Flight Control

    Directory of Open Access Journals (Sweden)

    Zhao Weihua

    2011-01-01

    Motivated by the idea of multiplexed model predictive control (MMPC), this paper introduces a new framework for unmanned aerial vehicle (UAV) formation flight and coordination. Formulated using the MMPC approach, the whole centralized formation flight system is considered as a linear periodic system with the control inputs of each UAV subsystem as its periodic inputs. Divided into decentralized subsystems, the whole formation flight system is guaranteed stable if proper terminal cost and terminal constraints are added to each decentralized MPC formulation of the UAV subsystem. The decentralized robust MPC formulation for each UAV subsystem with bounded input disturbances and model uncertainties is also presented. Furthermore, an obstacle avoidance control scheme for any shape and size of obstacles, including those not known a priori, is integrated under the unified MPC framework. The results from simulations demonstrate that the proposed framework can successfully achieve robust collision-free formation flights.
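    The decentralized structure described above can be illustrated with a toy formation-keeping step, in which each UAV computes its own control input from locally available information only. This is a simplified proportional stand-in for the paper's MPC formulation; the gains, offsets, and function names are illustrative assumptions, not the authors' design.

    ```python
    def formation_step(pos, offsets, leader_target, gain=0.5):
        """One decentralized control step: each UAV steers toward its own
        slot (shared target + formation offset) using only its own state.
        Proportional control as a simplified stand-in for decentralized MPC."""
        new_pos = []
        for p, off in zip(pos, offsets):
            desired = (leader_target[0] + off[0], leader_target[1] + off[1])
            # each vehicle computes its input independently of the others
            u = (gain * (desired[0] - p[0]), gain * (desired[1] - p[1]))
            new_pos.append((p[0] + u[0], p[1] + u[1]))
        return new_pos

    # converge a 3-UAV wedge onto the target point (10, 0)
    pos = [(0.0, 0.0), (1.0, 5.0), (-2.0, -1.0)]
    offsets = [(0.0, 0.0), (-2.0, 2.0), (-2.0, -2.0)]
    for _ in range(30):
        pos = formation_step(pos, offsets, (10.0, 0.0))
    ```

    After 30 steps the tracking error has contracted by a factor of 0.5 per step, so each vehicle sits essentially on its slot; a real MPC would additionally enforce input bounds and collision constraints at every step.
    
    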

  14. Trends in research on forestry decentralization policies

    DEFF Research Database (Denmark)

    Lund, Jens Friis; Rutt, Rebecca Leigh; Ribot, Jesse

    2018-01-01

    institutions; studies focusing on power and the role of elites in forestry decentralization, and; studies that historicize and contextualize forestry decentralization as reflective of broader societal phenomena. We argue that these strands reflect disciplinary differences in values, epistemologies, and methods...

  15. Query Optimizations over Decentralized RDF Graphs

    KAUST Repository

    Abdelaziz, Ibrahim; Mansour, Essam; Ouzzani, Mourad; Aboulnaga, Ashraf; Kalnis, Panos

    2017-01-01

    Applications in life sciences, decentralized social networks, Internet of Things, and statistical linked dataspaces integrate data from multiple decentralized RDF graphs via SPARQL queries. Several approaches have been proposed to optimize query

  16. Proportional green time scheduling for traffic lights

    NARCIS (Netherlands)

    P. Kovacs; Le, T. (Tung); R. Núñez Queija (Rudesindo); Vu, H. (Hai); N. Walton

    2016-01-01

    We consider the decentralized scheduling of a large number of urban traffic lights. We investigate factors determining system performance, in particular, the length of the traffic light cycle and the proportion of green time allocated to each junction. We study the effect of the length
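    The proportional green-time idea can be sketched numerically: each approach at a junction receives a share of the effective cycle (cycle length minus lost time) proportional to its arrival rate. The function name and the simple lost-time model are illustrative assumptions, not the paper's formulation.

    ```python
    def proportional_green_times(arrival_rates, cycle_length, lost_time):
        """Split one cycle's effective green time among the approaches
        in proportion to their arrival rates (illustrative sketch)."""
        effective = cycle_length - lost_time   # usable green per cycle
        total = sum(arrival_rates)
        return [effective * r / total for r in arrival_rates]

    # 90 s cycle, 10 s lost to amber/all-red, three approaches (veh/h)
    shares = proportional_green_times([600, 300, 100], cycle_length=90, lost_time=10)
    print(shares)  # [48.0, 24.0, 8.0]
    ```

    The allocation is fully local: a junction needs only its own arrival counts, which is what makes this style of rule suitable for decentralized operation.
    
    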

  17. Decentralized Control of Autonomous Vehicles

    Science.gov (United States)

    2003-01-01

    Decentralized Control of Autonomous Vehicles, by John S. Baras, Xiaobo Tan, and Pedram Hovareshti. CSHCN TR 2003-8 (ISR TR 2003-14).

  18. Ubiquitous consultation tool for decentral knowledge workers

    OpenAIRE

    Nazari Shirehjini, A.A.; Rühl, C.; Noll, S.

    2003-01-01

    The particular aim of this initial study is to examine the current work situation of consulting companies and to elaborate a concept for supporting decentralized working consultants. The concept addresses significant challenges of decentralized work processes by applying the Peer-to-Peer methodology to decentralized expert and knowledge management, cooperation, and enterprise resource planning.

  19. Decentralized control: Status and outlook

    Czech Academy of Sciences Publication Activity Database

    Bakule, Lubomír

    2014-01-01

    Roč. 38, č. 1 (2014), s. 71-80 ISSN 1367-5788 R&D Projects: GA ČR GA13-02149S Institutional support: RVO:67985556 Keywords : decentralized control * networked control systems * event-triggered approach Subject RIV: BC - Control Systems Theory Impact factor: 2.518, year: 2014

  20. Peer Matcher : Decentralized Partnership Formation

    NARCIS (Netherlands)

    Bozdog, Nicolae Vladimir; Voulgaris, Spyros; Bal, Henri; van Halteren, Aart

    2015-01-01

    This paper presents Peer Matcher, a fully decentralized algorithm solving the k-clique matching problem. The aim of k-clique matching is to cluster a set of nodes having pairwise weights into k-size groups of maximal total weight. Since solving the problem requires exponential time, Peer Matcher
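    As a point of reference for the objective Peer Matcher optimizes, a centralized greedy heuristic for k-clique matching fits in a few lines. The actual algorithm is decentralized and heuristic in a different way; this sketch only illustrates the problem, and the function name and data layout are assumptions.

    ```python
    import itertools

    def greedy_k_clique_matching(nodes, weight, k):
        """Greedily partition nodes into groups of size k, repeatedly taking
        the remaining k-subset with maximal total pairwise weight.
        Centralized illustration of the k-clique matching objective."""
        remaining = set(nodes)
        groups = []
        while len(remaining) >= k:
            best = max(itertools.combinations(sorted(remaining), k),
                       key=lambda g: sum(weight[frozenset(p)]
                                         for p in itertools.combinations(g, 2)))
            groups.append(best)
            remaining -= set(best)
        return groups

    # four nodes, strong affinity within {1,2} and within {3,4}
    w = {frozenset(p): v for p, v in
         {(1, 2): 10, (3, 4): 9, (1, 3): 1, (1, 4): 1, (2, 3): 1, (2, 4): 1}.items()}
    print(greedy_k_clique_matching([1, 2, 3, 4], w, k=2))  # [(1, 2), (3, 4)]
    ```

    Enumerating all k-subsets is exponential in k, which is exactly why the paper resorts to a decentralized heuristic rather than exact search.
    
    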

  1. Music Libraries: Centralization versus Decentralization.

    Science.gov (United States)

    Kuyper-Rushing, Lois

    2002-01-01

    Considers the decision that branch libraries, music libraries in particular, have struggled with concerning a centralized location in the main library versus a decentralized collection. Reports on a study of the Association of Research Libraries that investigated the location of music libraries, motivation for the location, degrees offered,…

  2. Decentralized controller gain scheduling using PSO for power ...

    African Journals Online (AJOL)

    It gives approximately two-thirds of the total power output of a typical combined cycle plant (S. Barsali et al., 2008; M. Nagpal et al., 2001). When the load is suddenly increased, the speed drops quickly, but the regulator reacts and increases the fuel flow to a maximum of 100%, thereby improving the efficiency of the system.

  3. Decentralized controller gain scheduling using PSO for power ...

    African Journals Online (AJOL)

    For this reason, in this study the P and I control parameters are tuned based on Particle Swarm Optimization (PSO) algorithm for a better Load-Frequency Control in a Two-Area Two-Unit Thermal Reheat Power System (TATURIPS) with step load perturbation. To exemplify the optimum parameter search PSO is used as it is ...
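    The PSO-based tuning loop can be sketched generically: particles explore the (Kp, Ki) plane and the swarm converges on the gains minimizing a chosen cost, e.g. an integral-of-squared-frequency-error criterion. The cost used below is a stand-in quadratic with a known optimum, not the TATURIPS plant model, and all parameter values are illustrative.

    ```python
    import random

    def pso_tune(cost, bounds, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
        """Minimal particle swarm optimizer over box bounds (sketch)."""
        rnd = random.Random(seed)
        dim = len(bounds)
        pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
        pbest, pcost = [p[:] for p in pos], [cost(p) for p in pos]
        gi = min(range(n), key=pcost.__getitem__)
        gbest, gcost = pbest[gi][:], pcost[gi]
        for _ in range(iters):
            for i in range(n):
                for d in range(dim):
                    r1, r2 = rnd.random(), rnd.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    lo, hi = bounds[d]
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
                c = cost(pos[i])
                if c < pcost[i]:
                    pbest[i], pcost[i] = pos[i][:], c
                    if c < gcost:
                        gbest, gcost = pos[i][:], c
        return gbest

    # stand-in cost pretending (Kp, Ki) = (2.0, 0.5) is the optimal tuning
    kp, ki = pso_tune(lambda g: (g[0] - 2.0) ** 2 + (g[1] - 0.5) ** 2,
                      bounds=[(0.0, 5.0), (0.0, 2.0)])
    ```

    In the load-frequency-control setting, `cost` would run a closed-loop simulation of the two-area system for each candidate (Kp, Ki) pair and return the resulting error integral.
    
    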

  4. Exploring Dental Providers’ Workflow in an Electronic Dental Record Environment

    Science.gov (United States)

    Schwei, Kelsey M; Cooper, Ryan; Mahnke, Andrea N.; Ye, Zhan

    2016-01-01

    Background: A workflow is defined as a predefined set of work steps and partial ordering of these steps in any environment to achieve the expected outcome. Few studies have investigated the workflow of providers in a dental office. It is important to understand the interaction of dental providers with the existing technologies at the point of care to assess breakdowns in the workflow which could contribute to better technology designs. Objective: The study objective was to assess electronic dental record (EDR) workflows using time and motion methodology in order to identify breakdowns and opportunities for process improvement. Methods: A time and motion methodology was used to study the human-computer interaction and workflow of dental providers with an EDR in four dental centers at a large healthcare organization. A data collection tool was developed to capture the workflow of dental providers and staff while they interacted with an EDR during initial, planned, and emergency patient visits, and at the front desk. Qualitative and quantitative analysis was conducted on the observational data. Results: Breakdowns in workflow were identified while posting charges, viewing radiographs, e-prescribing, and interacting with the patient scheduler. EDR interaction time was significantly different between dentists and dental assistants (6:20 min vs. 10:57 min, p = 0.013) and between dentists and dental hygienists (6:20 min vs. 9:36 min, p = 0.003). Conclusions: On average, a dentist spent far less time than dental assistants and dental hygienists in data recording within the EDR. PMID:27437058

  5. Application of Workflow Technology for Big Data Analysis Service

    Directory of Open Access Journals (Sweden)

    Bin Zhang

    2018-04-01

    This study presents a lightweight representational state transfer-based cloud workflow system to construct a big data intelligent software-as-a-service (SaaS) platform. The system supports the dynamic construction and operation of an intelligent data analysis application, and realizes rapid development and flexible deployment of the business analysis process that can improve the interaction and response time of the process. The proposed system integrates offline-batch and online-streaming analysis models that allow users to conduct batch and streaming computing simultaneously. Users can rent cloud capabilities and customize a set of big data analysis applications in the form of workflow processes. This study elucidates the architecture and application modeling, customization, dynamic construction, and scheduling of a cloud workflow system. A chain workflow foundation mechanism is proposed to combine several analysis components into a chain component that can promote efficiency. Four practical application cases are provided to verify the analysis capability of the system. Experimental results show that the proposed system can support multiple users in accessing the system concurrently and effectively uses data analysis algorithms. The proposed SaaS workflow system has been used in network operators and has achieved good results.

  6. Integration of the radiotherapy irradiation planning in the digital workflow

    International Nuclear Information System (INIS)

    Roehner, F.; Schmucker, M.; Henne, K.; Bruggmoser, G.; Grosu, A.L.; Frommhold, H.; Heinemann, F.E.; Momm, F.

    2013-01-01

    Background and purpose: At the Clinic of Radiotherapy at the University Hospital Freiburg, all relevant workflow is paperless. After implementing the Operating Schedule System (OSS) as a framework, all processes are being implemented into the departmental system MOSAIQ. Designing a digital workflow for radiotherapy irradiation planning is a large challenge, it requires interdisciplinary expertise and therefore the interfaces between the professions also have to be interdisciplinary. For every single step of radiotherapy irradiation planning, distinct responsibilities have to be defined and documented. All aspects of digital storage, backup and long-term availability of data were considered and have already been realized during the OSS project. Method: After an analysis of the complete workflow and the statutory requirements, a detailed project plan was designed. In an interdisciplinary workgroup, problems were discussed and a detailed flowchart was developed. The new functionalities were implemented in a testing environment by the Clinical and Administrative IT Department (CAI). After extensive tests they were integrated into the new modular department system. Results and conclusion: The Clinic of Radiotherapy succeeded in realizing a completely digital workflow for radiotherapy irradiation planning. During the testing phase, our digital workflow was examined and afterwards was approved by the responsible authority. (orig.)

  7. Design of an autonomous decentralized MAC protocol for wireless sensor networks

    NARCIS (Netherlands)

    van Hoesel, L.F.W.; Dal Pont, L.; Havinga, Paul J.M.

    In this document the design of a MAC protocol for wireless sensor networks is discussed. The autonomous decentralized TDMA-based MAC protocol minimizes power consumption by efficiently implementing unicast/omnicast, scheduled rendezvous times and wakeup calls. The MAC protocol is ongoing research
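    A flavor of decentralized TDMA slot selection: each node picks a transmit slot not already claimed within its neighborhood, using only locally gathered information. This is an illustrative sketch of the general idea, not the protocol's actual rule; the function name and the two-hop-neighbor input are assumptions.

    ```python
    def choose_slot(frame_length, neighbor_slots):
        """Pick the lowest-numbered free slot in the TDMA frame, given the
        slots already claimed by nearby (e.g. one- and two-hop) neighbours.
        Returns None if the frame is saturated; a real protocol would then
        grow the frame or resolve the conflict some other way."""
        taken = set(neighbor_slots)
        for slot in range(frame_length):
            if slot not in taken:
                return slot
        return None

    print(choose_slot(8, [0, 1, 3]))  # 2
    ```

    Because each node decides from local observations only, no central scheduler is needed, which matches the autonomous, decentralized character of the protocol described above.
    
    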

  8. ATLAS Grid Workflow Performance Optimization

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration

    2018-01-01

    The CERN ATLAS experiment grid workflow system manages routinely 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total more than 300 PB of data is distributed over more than 150 sites in the WLCG. At this scale small improvements in the software and computing performance and workflows can lead to significant resource usage gains. ATLAS is reviewing together with CERN IT experts several typical simulation and data processing workloads for potential performance improvements in terms of memory and CPU usage, disk and network I/O. All ATLAS production and analysis grid jobs are instrumented to collect many performance metrics for detailed statistical studies using modern data analytics tools like ElasticSearch and Kibana. This presentation will review and explain the performance gains of several ATLAS simulation and data processing workflows and present analytics studies of the ATLAS grid workflows.

  9. Privacy-aware workflow management

    NARCIS (Netherlands)

    Alhaqbani, B.; Adams, M.; Fidge, C.J.; Hofstede, ter A.H.M.; Glykas, M.

    2013-01-01

    Information security policies play an important role in achieving information security. Confidentiality, Integrity, and Availability are classic information security goals attained by enforcing appropriate security policies. Workflow Management Systems (WfMSs) also benefit from inclusion of these

  10. Summer Student Report - AV Workflow

    CERN Document Server

    Abramson, Jessie

    2014-01-01

    The AV Workflow is a web application which allows CERN users to publish, update and delete videos from CDS. During my summer internship I implemented the backend of the new version of the AV Workflow in Python using the Django framework.

  11. Location-based Scheduling

    DEFF Research Database (Denmark)

    Andersson, Niclas; Christensen, Knud

    on the market. However, CPM is primarily an activity based method that takes the activity as the unit of focus and there is criticism raised, specifically in the case of construction projects, on the method for deficient management of construction work and continuous flow of resources. To seek solutions...... to the identified limitations of the CPM method, an alternative planning and scheduling methodology that includes locations is tested. Location-based Scheduling (LBS) implies a shift in focus, from primarily the activities to the flow of work through the various locations of the project, i.e. the building. LBS uses...... the graphical presentation technique of Line-of-balance, which is adapted for planning and management of work-flows that facilitates resources to perform their work without interruptions caused by other resources working with other activities in the same location. As such, LBS and Lean Construction share...
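    The continuous-flow idea behind Line-of-balance can be made concrete: a crew's start is pushed back just enough that it can move through every location without waiting on its predecessor. The function below is a minimal sketch under assumed simplifications (one predecessor trade, locations visited in sequence); it is not drawn from the paper itself.

    ```python
    def continuous_flow_start(pred_finish, durations):
        """Earliest start letting a crew work through all locations without
        interruption, never entering a location before the preceding trade
        has finished there (Line-of-balance sketch, single predecessor)."""
        start, elapsed = 0.0, 0.0
        for finish_k, dur_k in zip(pred_finish, durations):
            # delay the whole flow so location k is free when we arrive
            start = max(start, finish_k - elapsed)
            elapsed += dur_k
        return start

    # predecessor clears locations at t=2,4,6; our crew needs 1 unit each
    print(continuous_flow_start([2, 4, 6], [1, 1, 1]))  # 4.0
    ```

    Starting at t=4, the crew occupies the three locations over [4,5], [5,6] and [6,7], always just behind the predecessor: continuous work for the resource, which is the flowline property CPM does not manage.
    
    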

  12. The Two Edge Knife of Decentralization

    Directory of Open Access Journals (Sweden)

    Ahmad Khoirul Umam

    2011-07-01

    A centralistic government model has become a trend in a number of developing countries, in which the idiosyncratic aspect becomes the pivotal key in policy making. The situation fosters authoritarianism, cronyism, and corruption. To break the impasse, a decentralized system is proposed to bring people closer to public policy making. Decentralization is also believed to be the solution for creating good governance. But a number of facts in developing countries demonstrate that decentralization has indeed ignited backfires such as decentralized corruption, parochialism, horizontal conflict, local political instability and others. This article elaborates the theoretical framework on decentralization's output as a double-edged knife. In simple words, the concept of decentralization does not have a permanent relationship with the creation of good governance and development. Without substantive democracy, decentralization indeed has the potential to be a destructive political instrument threatening the state's future.

  13. Decentralized neural control application to robotics

    CERN Document Server

    Garcia-Hernandez, Ramon; Sanchez, Edgar N; Alanis, Alma y; Ruz-Hernandez, Jose A

    2017-01-01

    This book provides a decentralized approach for the identification and control of robotics systems. It also presents recent research in decentralized neural control and includes applications to robotics. Decentralized control is free from difficulties due to complexity in design, debugging, data gathering and storage requirements, making it preferable for interconnected systems. Furthermore, as opposed to the centralized approach, it can be implemented with parallel processors. This approach deals with four decentralized control schemes, which are able to identify the robot dynamics. The training of each neural network is performed on-line using an extended Kalman filter (EKF). The first indirect decentralized control scheme applies the discrete-time block control approach, to formulate a nonlinear sliding manifold. The second direct decentralized neural control scheme is based on the backstepping technique, approximated by a high order neural network. The third control scheme applies a decentralized neural i...

  14. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

    Simulation experiments performed while solving multidisciplinary engineering and scientific problems require joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions while the overall workflow does not change. Automation of simulations like these requires implementing a workflow where tool execution and data exchange is usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher-level subtasks. This paper considers execution control and data exchange rules that should be imposed by the integration environment in the case of an error encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement to the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate results data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next-level subtasks. The cases where workflow behavior may be different, depending on the user's purposes, when an error takes place, and possible error handling options that can be specified by the user are also noted in the work.

  15. Workflow-Based Software Development Environment

    Science.gov (United States)

    Izygon, Michel E.

    2013-01-01

    The Software Developer's Assistant (SDA) helps software teams more efficiently and accurately conduct or execute software processes associated with NASA mission-critical software. SDA is a process enactment platform that guides software teams through project-specific standards, processes, and procedures. Software projects are decomposed into all of their required process steps or tasks, and each task is assigned to project personnel. SDA orchestrates the performance of work required to complete all process tasks in the correct sequence. The software then notifies team members when they may begin work on their assigned tasks and provides the tools, instructions, reference materials, and supportive artifacts that allow users to compliantly perform the work. A combination of technology components captures and enacts any software process use to support the software lifecycle. It creates an adaptive workflow environment that can be modified as needed. SDA achieves software process automation through a Business Process Management (BPM) approach to managing the software lifecycle for mission-critical projects. It contains five main parts: TieFlow (workflow engine), Business Rules (rules to alter process flow), Common Repository (storage for project artifacts, versions, history, schedules, etc.), SOA (interface to allow internal, GFE, or COTS tools integration), and the Web Portal Interface (collaborative web environment).

  16. Coordinating decentralized optimization of truck and shovel mining operations

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, R.; Fraser Forbes, J. [Alberta Univ., Edmonton, AB (Canada). Dept. of Chemical and Materials Engineering; San Yip, W. [Suncor Energy, Fort McMurray, AB (Canada)

    2006-07-01

    Canada's oil sands contain the largest known reserve of oil in the world. Oil sands mining uses three functional processes: ore hauling, overburden removal and mechanical maintenance. The industry relies mainly on truck-and-shovel technology in its open-pit mining operations which contributes greatly to the overall mining operation cost. Coordination between operating units is crucial for achieving an enterprise-wide optimal operation level. Some of the challenges facing the industry include multiple or conflicting objectives such as minimizing the use of raw materials and energy while maximizing production. The large sets of constraints that define the feasible domain pose a challenge, as does the uncertainty in system parameters. One solution lies in assigning truck resources to various activities. This fully decentralized approach would treat the optimization of ore production, waste removal and equipment maintenance independently. It was emphasized that mine-wide optimal operation can only be achieved by coordinating ore hauling and overburden removal processes. For that reason, this presentation proposed a coordination approach for a decentralized optimization system. The approach is based on the Dantzig-Wolfe decomposition and auction-based methods that have been previously used to decompose large-scale optimization problems. The treatment of discrete variables and coordinator design was described and the method was illustrated with a simple truck and shovel mining simulation study. The approach can be applied to a wide range of applications such as coordinating decentralized optimal control systems and scheduling. 16 refs., 3 tabs., 2 figs.
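    The auction-based side of such coordination can be illustrated with Bertsekas-style bidding for a toy truck-to-activity assignment: a coordinator only maintains prices, while each truck bids from its own valuations. The values, the ε increment, and the function name are illustrative assumptions, not the abstract's actual formulation.

    ```python
    def auction_assignment(value, eps=0.01):
        """Assign n trucks to n activities by iterative bidding: each
        unassigned truck bids for its best activity at current prices,
        raising that activity's price by (best - second best + eps).
        Returns owner[j] = index of the truck holding activity j."""
        n = len(value)
        prices = [0.0] * n
        owner = [None] * n
        unassigned = list(range(n))
        while unassigned:
            i = unassigned.pop()
            payoff = [value[i][j] - prices[j] for j in range(n)]
            best = max(range(n), key=payoff.__getitem__)
            second = max(payoff[j] for j in range(n) if j != best) if n > 1 else payoff[best]
            prices[best] += payoff[best] - second + eps
            if owner[best] is not None:
                unassigned.append(owner[best])  # outbid truck re-enters
            owner[best] = i
        return owner

    # truck 0 values ore hauling, truck 1 values overburden removal
    print(auction_assignment([[10, 1], [1, 10]]))  # [0, 1]
    ```

    Decision-making stays decentralized in the sense the abstract describes: each subproblem (truck) optimizes against prices, and the coordinator adjusts prices until the assignments are consistent.
    
    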

  17. Centralized or decentralized electricity production

    International Nuclear Information System (INIS)

    Boer, H.A. de.

    1975-01-01

    Because of low overall efficiency in electric power generation, it is argued that energy provision based on gas, combined with locally decentralized electricity production, saves slightly more fossil fuel for the Netherlands than nuclear technologies do and makes the country independent of uranium resources. The reason the Netherlands pursues this approach is that a large part of the energy is ultimately used for heating at normal or moderate temperatures.

  18. Problems in decentralized data acquisition

    International Nuclear Information System (INIS)

    Eder, R.

    1985-04-01

    This paper describes INIS (International Nuclear Information System) which is operated by the International Atomic Energy Agency (IAEA) in collaboration with 73 Member States and 14 international organizations. INIS is a computerized system for collecting, processing and disseminating nuclear information. The collection and scanning of literature, input preparation and the dissemination of output are completely decentralized, the checking and merging of the information data are centralized. This paper shows the structure, management, processing and problem areas of this system. (Author)

  19. PRACTICAL IMPLICATIONS OF LOCATION-BASED SCHEDULING

    DEFF Research Database (Denmark)

    Andersson, Niclas; Christensen, Knud

    2007-01-01

    The traditional method for planning, scheduling and controlling activities and resources in construction projects is the CPM-scheduling, which has been the predominant scheduling method since its introduction in the late 1950s. Over the years, CPM has proven to be a very powerful technique...... that will be used in this study. LBS is a scheduling method that rests upon the theories of line-of-balance and which uses the graphic representation of a flowline chart. As such, LBS is adapted for planning and management of workflows and, thus, may provide a solution to the identified shortcomings of CPM. Even...

  20. Workflow management in large distributed systems

    International Nuclear Information System (INIS)

    Legrand, I; Newman, H; Voicu, R; Dobre, C; Grigoras, C

    2011-01-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  1. Workflow management in large distributed systems

    Science.gov (United States)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  2. Lift scheduling organization : Lift Concept for Lemminkainen

    OpenAIRE

    Mingalimov, Iurii

    2015-01-01

    The purpose of the work was to make a simple schedule for the main contractors and clients to check and control the workflow connected with lifts. It gathers work on electricity, construction, engineering networks, equipment installation and commissioning. The schedule was carried out while working on the Aino building site in Saint Petersburg for Lemminkäinen. The duration of the work was 5 months. The lift concept in Lemminkäinen is very well controlled in comparison with other buil...

  3. Wage Dispersion and Decentralization of Wage Bargaining

    DEFF Research Database (Denmark)

    Dahl, Christian M.; Le Maire, Christian Daniel; Munch, Jakob Roland

    This paper studies how decentralization of wage bargaining from sector to firm level influences wage levels and wage dispersion. We use a detailed panel data set covering a period of decentralization in the Danish labor market. The decentralization process provides exogenous variation in the individual worker's wage-setting system that facilitates identification of the effects of decentralization. Consistent with predictions we find that wages are more dispersed under firm-level bargaining compared to more centralized wage-setting systems. However, the differences across wage-setting systems......

  4. Concurrency & Asynchrony in Declarative Workflows

    DEFF Research Database (Denmark)

    Debois, Søren; Hildebrandt, Thomas; Slaats, Tijs

    2015-01-01

    In this paper, we propose a notion of concurrency for declarative process models, formulated in the context of Dynamic Condition Response (DCR) graphs, and exploiting the so-called "true concurrency" semantics of Labelled Asynchronous Transition Systems. We demonstrate how this semantic underpinning of concurrency in DCR Graphs admits asynchronous execution of declarative workflows both conceptually and by reporting on a prototype implementation of a distributed declarative workflow engine. Both the theoretical development and the implementation are supported by an extended example; moreover, the theoretical....

  5. Similarity measures for scientific workflows

    OpenAIRE

    Starlinger, Johannes

    2016-01-01

    Over the course of the last ten years, scientific workflows have gained attention as a tool for creating reproducible, data-processing in-silico experiments, into which both local scripts and applications and web services can be integrated. Such workflows can be published and reused via specialized online libraries, so-called repositories. As these repositories grow in size, similarity measures for scientific workfl...

  6. Scientific Workflow Management in Proteomics

    Science.gov (United States)

    de Bruin, Jeroen S.; Deelder, André M.; Palmblad, Magnus

    2012-01-01

    Data processing in proteomics can be a challenging endeavor, requiring extensive knowledge of many different software packages, all with different algorithms, data format requirements, and user interfaces. In this article we describe the integration of a number of existing programs and tools in Taverna Workbench, a scientific workflow manager currently being developed in the bioinformatics community. We demonstrate how a workflow manager provides a single, visually clear and intuitive interface to complex data analysis tasks in proteomics, from raw mass spectrometry data to protein identifications and beyond. PMID:22411703

  7. Analysing scientific workflows: Why workflows not only connect web services

    NARCIS (Netherlands)

    Wassink, I.; van der Vet, P.E.; Wolstencroft, K.; Neerincx, P.B.T.; Roos, M.; Rauwerda, H.; Breit, T.M.; Zhang, L.J.

    2009-01-01

    Life science workflow systems are developed to help life scientists to conveniently connect various programs and web services. In practice however, much time is spent on data conversion, because web services provided by different organisations use different data formats. We have analysed all the

  8. Analysing scientific workflows: why workflows not only connect web services

    NARCIS (Netherlands)

    Wassink, I.; van der Vet, P.E.; Wolstencroft, K.; Neerincx, P.B.T.; Roos, M.; Rauwerda, H.; Breit, T.M.; Zhang, LJ.

    2009-01-01

    Life science workflow systems are developed to help life scientists to conveniently connect various programs and web services. In practice however, much time is spent on data conversion, because web services provided by different organisations use different data formats. We have analysed all the

  9. Office 2010 Workflow Developing Collaborative Solutions

    CERN Document Server

    Mann, David; Enterprises, Creative

    2010-01-01

    Workflow is the glue that binds information worker processes, users, and artifacts. Without workflow, information workers are just islands of data and potential. Office 2010 Workflow details how to implement workflow in SharePoint 2010 and the client Microsoft Office 2010 suite to help information workers share data, enforce processes and business rules, and work more efficiently together or solo. This book covers everything you need to know-from what workflow is all about to creating new activities; from the SharePoint Designer to Visual Studio 2010; from out-of-the-box workflows to state mac

  10. Refinery scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Magalhaes, Marcus V.; Fraga, Eder T. [PETROBRAS, Rio de Janeiro, RJ (Brazil); Shah, Nilay [Imperial College, London (United Kingdom)

    2004-07-01

    This work addresses the refinery scheduling problem using mathematical programming techniques. The solution adopted was to decompose the entire refinery model into a crude oil scheduling problem and a product scheduling problem. The envelope for the crude oil scheduling problem is composed of a terminal, a pipeline and the crude area of a refinery, including the crude distillation units. The solution method adopted includes a decomposition technique based on the topology of the system. The envelope for the product scheduling comprises all tanks, process units and products found in a refinery. Once crude oil scheduling decisions are available, the product scheduling problem is solved using a rolling horizon algorithm. All models were tested with real data from PETROBRAS' REFAP refinery, located in Canoas, Southern Brazil. (author)
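The rolling-horizon idea mentioned in this abstract can be sketched in a few lines: solve a small subproblem over a moving time window, commit only the first period's decision, then roll the window forward. This is a generic illustration with hypothetical data, not the PETROBRAS model; `solve_window` here is a greedy stand-in for the real optimization subproblem.

```python
# Illustrative rolling-horizon sketch (hypothetical data, greedy subproblem).

def solve_window(demand_window, capacity):
    """Stand-in for the window subproblem: produce up to capacity each period."""
    return [min(d, capacity) for d in demand_window]

def rolling_horizon(demand, capacity, window=3):
    committed = []
    for t in range(len(demand)):
        plan = solve_window(demand[t:t + window], capacity)
        committed.append(plan[0])  # commit only the first period's decision
    return committed

schedule = rolling_horizon([5, 9, 4, 7, 2], capacity=6)
# committed production never exceeds capacity in any period
```

In a real refinery model the window subproblem would be a mixed-integer program; the loop structure around it stays the same.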

  11. Decentralized or Centralized Systems for Colleges and Universities?

    Science.gov (United States)

    Heydinger, Richard B.; Norris, Donald M.

    1979-01-01

    Arguments for and against decentralization of data management, analysis, and planning systems are presented. It is suggested that technological advances have encouraged decentralization. Caution in this direction is urged and the development of an articulated decentralization program is proposed. (SF)

  12. Consensus based scheduling of storage capacities in a virtual microgrid

    DEFF Research Database (Denmark)

    Brehm, Robert; Top, Søren; Mátéfi-Tempfli, Stefan

    2017-01-01

    We present a distributed, decentralized method for coordinated scheduling of charge/discharge intervals of storage capacities in a utility grid integrated microgrid. The decentralized algorithm is based on a consensus scheme and solves an optimisation problem with the objective of minimising, by use of storage capacities, the power flow over a transformer substation from/to the utility grid integrated microgrid. It is shown that when using this coordinated scheduling algorithm, load profile flattening (peak-shaving) for the utility grid is achieved. Additionally, mutual charge...
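A minimal sketch of the consensus primitive that such decentralized scheduling builds on (an illustrative averaging iteration, not the authors' algorithm; the topology and power values are hypothetical): each storage node repeatedly averages its estimate with its neighbours' estimates and, without a central coordinator, all nodes converge to the network-wide average.

```python
# Illustrative consensus-averaging sketch (hypothetical ring topology).

def consensus_step(values, neighbours, eps=0.3):
    """One synchronous update: move each value toward its neighbours."""
    return [v + eps * sum(values[j] - v for j in neighbours[i])
            for i, v in enumerate(values)]

def run_consensus(values, neighbours, iters=200):
    for _ in range(iters):
        values = consensus_step(values, neighbours)
    return values

# ring of four storage units with local charge-power estimates (kW, made up)
estimates = [10.0, 2.0, 6.0, 4.0]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
agreed = run_consensus(estimates, ring)
# all nodes converge to the average of the initial estimates, 5.5 kW
```

Because the update is symmetric, the sum of values is preserved, so the shared limit is exactly the average of the initial estimates.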

  13. CMS Distributed Computing Workflow Experience

    CERN Document Server

    Haas, Jeffrey David

    2010-01-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level, which contain re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simul...

  14. New Interactions with Workflow Systems

    NARCIS (Netherlands)

    Wassink, I.; van der Vet, P.E.; van der Veer, Gerrit C.; Roos, M.; van Dijk, Elisabeth M.A.G.; Norros, L.; Koskinen, H.; Salo, L.; Savioja, P.

    2009-01-01

    This paper describes the evaluation of our early design ideas for an ad-hoc workflow system. Using the teach-back technique, we have performed a hermeneutic analysis of the mockup implementation named NIWS to get corrective and creative feedback at the functional, dialogue and representation level

  15. CMS distributed computing workflow experience

    Science.gov (United States)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D.; Prosper, Harrison B.; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao, Junhui; Pin, Arnaud; Schul, Nicolas; De Lentdecker, Gilles; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey; Barge, Derek; Lahiff, Andrew

    2011-12-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level, which contain re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources and discuss the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, into the central production operation.

  16. CMS distributed computing workflow experience

    International Nuclear Information System (INIS)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D; Prosper, Harrison B; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao Junhui; Pin, Arnaud; Schul, Nicolas; Lentdecker, Gilles De; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey

    2011-01-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level, which contain re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources and discuss the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, into the central production operation.

  17. Design Tools and Workflows for Braided Structures

    DEFF Research Database (Denmark)

    Vestartas, Petras; Heinrich, Mary Katherine; Zwierzycki, Mateusz

    2017-01-01

    and merits of our method, demonstrated through four example design and analysis workflows. The workflows frame specific aspects of enquiry for the ongoing research project flora robotica. These include modelling target geometries, automatically producing instructions for fabrication, conducting structural...

  18. Decentralization in Botswana: the reluctant process | Dipholo ...

    African Journals Online (AJOL)

    Botswana\\'s decentralization process has always been justified in terms of democracy and development. Consequently, the government has always argued that it is fully committed to decentralization in order to promote popular participation as well as facilitating sustainable rural development. Yet the government does not ...

  19. Decentralized control using compositional analysis techniques

    NARCIS (Netherlands)

    Kerber, F.; van der Schaft, A. J.

    2011-01-01

    Decentralized control strategies aim at achieving a global control target by means of distributed local controllers acting on individual subsystems of the overall plant. In this sense, decentralized control is a dual problem to compositional analysis where a global verification task is decomposed

  20. Policy Recommendations on Decentralization, Local Power and ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2010-12-22

    Policy Recommendations on Decentralization, Local Power and Women's Rights. December 22, 2010. Image. The present document comprises a set of policy recommendations that define a global agenda on gender and decentralization. It emerged from the analysis and experiences shared during the Conference and the ...

  1. Decentralized Decision Making Toward Educational Goals.

    Science.gov (United States)

    Monahan, William W.; Johnson, Homer M.

    This monograph provides guidelines to help those school districts considering a more decentralized form of management. The authors discuss the levels at which different types of decisions should be made, describe the changing nature of the educational environment, identify different centralization-decentralization models, and suggest a flexible…

  2. What supervisors want to know about decentralization.

    Science.gov (United States)

    Boissoneau, R; Belton, P

    1991-06-01

    Many organizations in various industries have tended to move away from strict centralization, yet some centralization is still vital to top management. With 19 of the 22 executives interviewed favoring or implementing some form of decentralization, it is probable that traditionally centralized organizations will follow the trend and begin to decentralize their organizational structures. The incentives and advantages of decentralization are too attractive to ignore. Decentralization provides responsibility, clear objectives, accountability for results, and more efficient and effective decision making. However, one must remember that decentralization can be overextended and that centralization is still viable in certain functions. Finding the correct balance between control and autonomy is a key to decentralization. Too much control and too much autonomy are the primary reasons for decentralization failures. In today's changing, competitive environment, structures must be continuously redefined, with the goal of finding an optimal balance between centralization and decentralization. Organizations are cautioned not to seek out and install a single philosopher-king to impose unified direction, but to unify leadership goals, participation, style, and control to develop improved methods of making all responsible leaders of one mind about the organization's needs and goals.

  3. Decentralization or centralization: striking a balance.

    Science.gov (United States)

    Dirschel, K M

    1994-09-01

    An Executive Vice President for Nursing can provide the necessary link to meet diverse clinical demands when encountering centralization--decentralization decisions. Centralized communication links hospital departments giving nurses a unified voice. Decentralization acknowledges the need for diversity and achieves the right balance of uniformity through a responsive communications network.

  4. Wage Dispersion and Decentralization of Wage Bargaining

    DEFF Research Database (Denmark)

    Dahl, Christian Møller; le Maire, Christian Daniel; Munch, Jakob R.

    2013-01-01

    This article studies how decentralization of wage bargaining from sector to firm level influences wage levels and wage dispersion. We use detailed panel data covering a period of decentralization in the Danish labor market. The decentralization process provides variation in the individual worker's wage-setting system that facilitates identification of the effects of decentralization. We find a wage premium associated with firm-level bargaining relative to sector-level bargaining and that the return to skills is higher under the more decentralized wage-setting systems. Using quantile regression, we also find that wages are more dispersed under firm-level bargaining compared to more centralized wage-setting systems.

  5. Wind Farm Decentralized Dynamic Modeling With Parameters

    DEFF Research Database (Denmark)

    Soltani, Mohsen; Shakeri, Sayyed Mojtaba; Grunnet, Jacob Deleuran

    2010-01-01

    Development of dynamic wind flow models for wind farms is part of the research in the European FP7 research project AEOLUS. The objective of this report is to provide decentralized dynamic wind flow models with parameters. The report presents a structure for decentralized flow models with inputs from local models. The results of this report are especially useful, though not limited, to the design of a decentralized wind farm controller, since in centralized controller design one can also use the model and update it in a central computing node.

  6. Decentralized Procurement in Light of Strategic Inventories

    DEFF Research Database (Denmark)

    Frimor, Hans; Arya, Anil; Mittendorf, Brian

    2015-01-01

    The centralization versus decentralization choice is perhaps the quintessential organizational structure decision. In the operations realm, this choice is particularly critical when it comes to the procurement function. Why firms may opt to decentralize procurement has often been studied and confirmed to be a multifaceted choice. This paper complements existing studies by detailing the trade-offs in the centralization versus decentralization decision in light of strategic inventories: a firm's decision to cede procurement choices to its individual divisions can help moderate inventory levels and provide a natural salve...

  7. Decentralized Energy from Waste Systems

    Directory of Open Access Journals (Sweden)

    Blanca Antizar-Ladislao

    2010-01-01

    In the last five years or so, biofuels have been given notable consideration worldwide as an alternative to fossil fuels, due to their potential to reduce greenhouse gas emissions by partial replacement of oil as a transport fuel. The production of biofuels using a sustainable approach should consider local production of biofuels, obtained from local feedstocks and adapted to the socio-economic and environmental characteristics of the particular region where they are developed. Thus, decentralized energy from waste systems will exploit local biomass to optimize their production and consumption. Waste streams such as agricultural and wood residues, municipal solid waste, vegetable oils, and algae residues can all be integrated in energy from waste systems. An integral optimization of decentralized energy from waste systems should not be based on the optimization of each single process, but on the overall optimization of the whole process. This means obtaining optimal energy and environmental benefits, as well as collateral beneficial co-products such as soil fertilizers, which will result in higher food crop production and in carbon dioxide fixation that will abate climate change.

  8. Decentralized energy from waste systems

    International Nuclear Information System (INIS)

    Antizar-Ladislao, B.; Turrion-Gomez, J. L.

    2010-01-01

    In the last five years or so, biofuels have been given notable consideration worldwide as an alternative to fossil fuels, due to their potential to reduce greenhouse gas emissions by partial replacement of oil as a transport fuel. The production of biofuels using a sustainable approach should consider local production of biofuels, obtained from local feedstocks and adapted to the socio-economic and environmental characteristics of the particular region where they are developed. Thus, decentralized energy from waste systems will exploit local biomass to optimize their production and consumption. Waste streams such as agricultural and wood residues, municipal solid waste, vegetable oils, and algae residues can all be integrated in energy from waste systems. An integral optimization of decentralized energy from waste systems should not be based on the optimization of each single process, but on the overall optimization of the whole process. This means obtaining optimal energy and environmental benefits, as well as collateral beneficial co-products such as soil fertilizers, which will result in higher food crop production and in carbon dioxide fixation that will abate climate change. (author)

  9. The equivalency between logic Petri workflow nets and workflow nets.

    Science.gov (United States)

    Wang, Jing; Yu, ShuXia; Du, YuYue

    2015-01-01

    Logic Petri nets (LPNs) can describe and analyze batch processing functions and passing-value indeterminacy in cooperative systems. Logic Petri workflow nets (LPWNs) are proposed based on LPNs in this paper. Process mining is regarded as an important bridge between modeling and analysis of data mining and business processes. Workflow nets (WF-nets) are an extension of Petri nets (PNs) and have successfully been used in process mining. Some shortcomings cannot be avoided in process mining, such as duplicate tasks, invisible tasks, and the noise of logs. The online shop in electronic commerce in this paper is modeled to prove the equivalence between LPWNs and WF-nets, and advantages of LPWNs are presented.
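For readers unfamiliar with workflow nets, the token-firing semantics they build on can be sketched minimally (an illustrative toy covering plain WF-nets only, not the logic extensions of LPWNs; the places, transitions, and the order process are hypothetical):

```python
# Illustrative WF-net sketch: places hold tokens, a transition fires when all
# its input places are marked, consuming one token from each input place and
# producing one in each output place. A WF-net has a single source place "i"
# and a single sink place "o"; a case completes when a token reaches "o".

class WFNet:
    def __init__(self, transitions):
        # transitions: name -> (list of input places, list of output places)
        self.transitions = transitions

    def enabled(self, marking, t):
        ins, _ = self.transitions[t]
        return all(marking.get(p, 0) >= 1 for p in ins)

    def fire(self, marking, t):
        assert self.enabled(marking, t), f"{t} not enabled"
        ins, outs = self.transitions[t]
        m = dict(marking)
        for p in ins:
            m[p] -= 1
        for p in outs:
            m[p] = m.get(p, 0) + 1
        return m

# a sequential two-step order process: receive, then ship
net = WFNet({"receive": (["i"], ["p1"]), "ship": (["p1"], ["o"])})
m = net.fire({"i": 1}, "receive")
m = net.fire(m, "ship")
# final marking: exactly one token in the sink place "o"
```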

  10. The Equivalency between Logic Petri Workflow Nets and Workflow Nets

    Science.gov (United States)

    Wang, Jing; Yu, ShuXia; Du, YuYue

    2015-01-01

    Logic Petri nets (LPNs) can describe and analyze batch processing functions and passing-value indeterminacy in cooperative systems. Logic Petri workflow nets (LPWNs) are proposed based on LPNs in this paper. Process mining is regarded as an important bridge between modeling and analysis of data mining and business processes. Workflow nets (WF-nets) are an extension of Petri nets (PNs) and have successfully been used in process mining. Some shortcomings cannot be avoided in process mining, such as duplicate tasks, invisible tasks, and the noise of logs. The online shop in electronic commerce in this paper is modeled to prove the equivalence between LPWNs and WF-nets, and advantages of LPWNs are presented. PMID:25821845

  11. Snakemake-a scalable bioinformatics workflow engine

    NARCIS (Netherlands)

    J. Köster (Johannes); S. Rahmann (Sven)

    2012-01-01

    Snakemake is a workflow engine that provides a readable Python-based workflow definition language and a powerful execution environment that scales from single-core workstations to compute clusters without modifying the workflow. It is the first system to support the use of automatically
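The core idea behind such workflow engines, rules declaring their inputs and outputs so the engine can infer an execution order for any requested target, can be sketched as follows. This is an illustrative toy resolver, not Snakemake's implementation; the rule and file names are hypothetical.

```python
# Illustrative dependency resolver in the spirit of file-based workflow
# engines: each rule says which output it produces and which inputs it needs;
# the planner walks the dependency graph and emits rules in executable order.

def plan(rules, target, done=None):
    """Return an execution order of rule names that produces `target`.
    rules: output file -> (list of input files, rule name)."""
    if done is None:
        done = set()
    order = []
    inputs, name = rules[target]
    for inp in inputs:
        if inp in rules and inp not in done:
            order += plan(rules, inp, done)
    if target not in done:
        done.add(target)
        order.append(name)
    return order

rules = {
    "calls.vcf": (["sorted.bam"], "call_variants"),
    "sorted.bam": (["reads.bam"], "sort"),
    "reads.bam": ([], "map_reads"),
}
print(plan(rules, "calls.vcf"))
# -> ['map_reads', 'sort', 'call_variants']
```

A real engine additionally checks file timestamps to skip up-to-date outputs and runs independent branches in parallel.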

  12. Behavioral technique for workflow abstraction and matching

    NARCIS (Netherlands)

    Klai, K.; Ould Ahmed M'bareck, N.; Tata, S.; Dustdar, S.; Fiadeiro, J.L.; Sheth, A.

    2006-01-01

    This work is in line with the CoopFlow approach dedicated for workflow advertisement, interconnection, and cooperation in virtual organizations. In order to advertise workflows into a registry, we present in this paper a novel method to abstract behaviors of workflows into symbolic observation

  13. Schedule Analytics

    Science.gov (United States)

    2016-04-30

    Warfare, Naval Sea Systems Command. Acquisition Cycle Time: Defining the Problem (David Tate, Institute for Defense Analyses); Schedule Analytics (Jennifer...). The research comprised the following high-level steps: identify and review primary data sources... However, detailed reviews of the OMB IT Dashboard data revealed that schedule data is highly aggregated; program start date and program end date

  14. A market approach to decentralized control of a manufacturing cell

    International Nuclear Information System (INIS)

    Shao Xinyu; Ma Li; Guan Zailin

    2009-01-01

    Based on a fictitious market model, a decentralized approach is presented for workstation scheduling in a CNC workshop. A multi-agent framework is proposed, in which job agents and resource agents act as buyers and sellers of resources in the virtual market. Through cost and benefit calculations of these agent activities, which reflect the state of the production environment, the various and often conflicting goals and interests influencing the scheduling process in practice can be balanced through the unified instrument offered by the market. The paper first introduces a heuristic procedure that makes scheduling reservations in a periodic manner. A multi-agent framework is then introduced, in which job agents and resource agents seek appropriate job-workstation matches through bidding in the construction of the above periodic 'micro-schedules'. A pricing policy is proposed for the price-directed coordination of agent activities in this market. Simulation results demonstrate the feasibility of the proposed approach and give some insights on the effects of some decision-making parameters. Future work will focus on the design of more sophisticated coordination mechanisms and their deployment.

  15. A market approach to decentralized control of a manufacturing cell

    Energy Technology Data Exchange (ETDEWEB)

    Shao Xinyu [State Key Lab of Digital Manufacturing and Equipments, Huazhong University of Science and Technology, Wuhan 430074, Hubei (China)], E-mail: shaoxy@hust.edu.cn; Ma Li [State Key Lab of Digital Manufacturing and Equipments, Huazhong University of Science and Technology, Wuhan 430074, Hubei (China)], E-mail: china_ml@163.com; Guan Zailin [State Key Lab of Digital Manufacturing and Equipments, Huazhong University of Science and Technology, Wuhan 430074, Hubei (China)], E-mail: zlguan@hust.edu.cn

    2009-03-15

    Based on a fictitious market model, a decentralized approach is presented for workstation scheduling in a CNC workshop. A multi-agent framework is proposed, in which job agents and resource agents act as buyers and sellers of resources in the virtual market. Through cost and benefit calculations of these agent activities, which reflect the state of the production environment, the various and often conflicting goals and interests influencing the scheduling process in practice can be balanced through the unified instrument offered by the market. The paper first introduces a heuristic procedure that makes scheduling reservations in a periodic manner. A multi-agent framework is then introduced, in which job agents and resource agents seek appropriate job-workstation matches through bidding in the construction of the above periodic 'micro-schedules'. A pricing policy is proposed for the price-directed coordination of agent activities in this market. Simulation results demonstrate the feasibility of the proposed approach and give some insights on the effects of some decision-making parameters. Future work will focus on the design of more sophisticated coordination mechanisms and their deployment.
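The bidding idea behind such market-based scheduling can be illustrated with a toy greedy matching of jobs to workstations by bid cost (hypothetical costs and names; this is not the authors' pricing policy):

```python
# Illustrative market-style matching: every job bids a cost for every capable
# workstation; in each round the cheapest remaining job-workstation pair wins
# and both leave the market.

def match_market(bids):
    """bids: (job, workstation) -> cost. Greedy lowest-cost matching."""
    matches = {}
    taken_jobs, taken_ws = set(), set()
    for (job, ws), cost in sorted(bids.items(), key=lambda kv: kv[1]):
        if job not in taken_jobs and ws not in taken_ws:
            matches[job] = ws
            taken_jobs.add(job)
            taken_ws.add(ws)
    return matches

bids = {
    ("J1", "W1"): 4, ("J1", "W2"): 2,
    ("J2", "W1"): 3, ("J2", "W2"): 5,
}
print(match_market(bids))
# -> {'J1': 'W2', 'J2': 'W1'}
```

In a fuller agent framework, the costs would be computed by each agent from the current production state rather than given as a static table.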

  16. A framework for service enterprise workflow simulation with multi-agents cooperation

    Science.gov (United States)

    Tan, Wenan; Xu, Wei; Yang, Fujun; Xu, Lida; Jiang, Chuanqun

    2013-11-01

    Process dynamic modelling for service business is the key technique for Service-Oriented information systems and service business management, and the workflow model of business processes is the core part of service systems. Service business workflow simulation is the prevalent approach to be used for analysis of service business process dynamically. Generic method for service business workflow simulation is based on the discrete event queuing theory, which is lack of flexibility and scalability. In this paper, we propose a service workflow-oriented framework for the process simulation of service businesses using multi-agent cooperation to address the above issues. Social rationality of agent is introduced into the proposed framework. Adopting rationality as one social factor for decision-making strategies, a flexible scheduling for activity instances has been implemented. A system prototype has been developed to validate the proposed simulation framework through a business case study.

  17. Climate Data Analytics Workflow Management

    Science.gov (United States)

    Zhang, J.; Lee, S.; Pan, L.; Mattmann, C. A.; Lee, T. J.

    2016-12-01

    In this project we aim to pave a novel path toward a sustainable building block for Earth science big data analytics and knowledge sharing. Closely studying how Earth scientists conduct data analytics research in their daily work, we have developed a provenance model to record their activities and a technology to automatically generate workflows for scientists from the provenance. On top of this, we have built the prototype of a data-centric provenance repository and established a PDSW (People, Data, Service, Workflow) knowledge network to support workflow recommendation. To ensure the scalability and performance of the expected recommendation system, we have leveraged the Apache OODT system technology. The community-approved, metrics-based performance evaluation web service will allow a user to select a metric from the list of several community-approved metrics and to evaluate model performance using that metric as well as the reference dataset. This service will facilitate the use of reference datasets that are generated in support of model-data intercomparison projects such as Obs4MIPs and Ana4MIPs. The data-centric repository infrastructure will allow us to capture richer provenance to further facilitate knowledge sharing and scientific collaboration in the Earth science community. This project is part of the Apache incubator CMDA project.

  18. Bitcoin as a decentralized currency

    Directory of Open Access Journals (Sweden)

    Dinić Vladimir

    2014-01-01

    Bitcoin is the first decentralized peer-to-peer crypto-currency, founded in 2009. Its main distinguishing feature is that the currency has no issuer; its supply is software-programmed and limited. Among its main features are relatively secure payments, low transaction costs, anonymity, inability of counterfeiting, and irreversibility of transactions, but also an extremely unstable exchange rate. Despite many advantages, the use of this currency is the subject of numerous discussions, as it offers the possibility of performing various abuses and criminal activities. The future of this and other such currencies depends both on their security and privacy and on the legal regulation of such payments.

  19. Data analysis with the DIANA meta-scheduling approach

    International Nuclear Information System (INIS)

    Anjum, A; McClatchey, R; Willers, I

    2008-01-01

    The concepts, design and evaluation of the Data Intensive and Network Aware (DIANA) meta-scheduling approach for solving the challenges of data analysis being faced by CERN experiments are discussed in this paper. Our results suggest that data analysis can be made robust by employing fault-tolerant and decentralized meta-scheduling algorithms supported in our DIANA meta-scheduler. The DIANA meta-scheduler supports data-intensive bulk scheduling, is network aware and follows a policy-centric meta-scheduling approach. In this paper, we demonstrate that a decentralized and dynamic meta-scheduling approach is an effective strategy to cope with increasing numbers of users, jobs and datasets. We present 'quality of service' related statistics for physics analysis through the application of a policy-centric fair-share scheduling model. The DIANA meta-schedulers create a peer-to-peer hierarchy of schedulers to accomplish resource management that changes with evolving loads and is dynamic and adapts to the volatile nature of the resources
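A fair-share policy of the kind mentioned can be sketched as picking, among pending jobs, the one whose user has consumed the least resource relative to their allocated share. This is an illustrative toy, not DIANA's actual policy; the job, user, and share values are hypothetical.

```python
# Illustrative fair-share job selection: each user has an allocated share and
# an accumulated usage; the scheduler favours the user with the lowest
# usage-to-share ratio.

def pick_next(pending, usage, share):
    """pending: list of (job, user); usage/share: user -> float."""
    return min(pending, key=lambda jw: usage[jw[1]] / share[jw[1]])

pending = [("jobA", "alice"), ("jobB", "bob")]
usage = {"alice": 30.0, "bob": 10.0}   # CPU-hours consumed so far
share = {"alice": 0.5, "bob": 0.5}     # equal allocated shares
print(pick_next(pending, usage, share))
# -> ('jobB', 'bob')
```

Real fair-share schedulers additionally decay historical usage over time so that past consumption does not penalize a user forever.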

  20. An Organizational and Qualitative Approach to Improving University Course Scheduling

    Science.gov (United States)

    Hill, Duncan L.

    2010-01-01

    Focusing on the current timetabling process at the University of Toronto Mississauga (UTM), I apply David Wesson's theoretical framework in order to understand (1) how increasing enrollment interacts with a decentralized timetabling process to limit the flexibility of course schedules and (2) the resultant impact on educational quality. I then…

  1. Federalism and Decentralization of Education in Argentina. Unintended Consequences of Decentralization of Expenditures in a Federal Country.

    Science.gov (United States)

    Falleti, Tulia G.

    By analyzing the process of decentralization of education in Argentina, this paper complements the existing literature on decentralization and federalism in two ways: (1) it studies the impact of federal institutions on the origins and evolution of decentralization; and (2) it analyzes a case of decentralization of education that, in a way not…

  2. Decentralization and financial autonomy: a challenge for local public authorities in the Republic of Moldova

    Directory of Open Access Journals (Sweden)

    Tatiana MANOLE

    2017-09-01

    Full Text Available This article examines the decentralization process currently taking place in the Republic of Moldova. The purpose of the research is to acquaint readers with the fundamental concept of decentralization, with the areas of administrative decentralization, and with the forms in which financial decentralization manifests itself: fiscal decentralization and budget decentralization. The priorities of the decentralization process are identified.

  3. Computing for Decentralized Systems (lecture 2)

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    With the rise of Bitcoin, Ethereum, and other cryptocurrencies, the paradigm shift towards decentralized computing is becoming apparent. Computer engineers will need to understand this shift when developing systems in the coming years. Transferring value over the Internet is just one of the first working use cases of decentralized systems, but it is expected they will be used for a number of different services such as general-purpose computing, data storage, or even new forms of governance. Decentralized systems, however, pose a series of challenges that cannot be addressed with traditional approaches in computing. Not having a central authority implies truth must be agreed upon rather than simply trusted, and so consensus protocols, cryptographic data structures like the blockchain, and incentive models like mining rewards become critical for the correct behavior of decentralized systems. This series of lectures will be a fast track to introduce these fundamental concepts through working examples and pra...
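    The hash-linked structure that makes a blockchain tamper-evident can be shown with a toy sketch. This is a deliberate simplification for illustration only; real chains add proof-of-work, signatures, and Merkle trees.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's canonical JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, payload):
    """Link a new block to the current tip via its predecessor's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload})

def verify_chain(chain):
    """Every block must reference the actual hash of its predecessor."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))
```

    Altering any earlier payload changes its hash and breaks every later link, which is why agreeing on the chain tip amounts to agreeing on its whole history.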

  4. Computing for Decentralized Systems (lecture 1)

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    With the rise of Bitcoin, Ethereum, and other cryptocurrencies, the paradigm shift towards decentralized computing is becoming apparent. Computer engineers will need to understand this shift when developing systems in the coming years. Transferring value over the Internet is just one of the first working use cases of decentralized systems, but it is expected they will be used for a number of different services such as general-purpose computing, data storage, or even new forms of governance. Decentralized systems, however, pose a series of challenges that cannot be addressed with traditional approaches in computing. Not having a central authority implies truth must be agreed upon rather than simply trusted, and so consensus protocols, cryptographic data structures like the blockchain, and incentive models like mining rewards become critical for the correct behavior of decentralized systems. This series of lectures will be a fast track to introduce these fundamental concepts through working examples and pra...

  5. The effects of fiscal decentralization in Albania

    Directory of Open Access Journals (Sweden)

    Dr.Sc. Blerta Dragusha

    2012-06-01

    Full Text Available “Basically decentralization is a democratic reform which seeks to transfer the political, administrative, financial and planning authority from central to local government. It seeks to develop civic participation, empowerment of local people in the decision-making process, and to promote accountability and reliability: to achieve efficiency and effectiveness in the collection and management of resources and service delivery.” The interest and curiosity of knowing how our country is doing in this still-unfinished process served as a motivation for me to treat this topic: fiscal decentralization as a process of giving 'power' to local governments, not only in terms of the rights deriving from this process but also the responsibilities that come with it. What are the stages before and after decentralization, and how has the process affected several key indicators? Is decentralization solely a good process, or can any of its effects be seen as a disadvantage?

  6. Decentralization and the local development state

    DEFF Research Database (Denmark)

    Emmenegger, Rony Hugo

    2016-01-01

    This article explores the politics of decentralization and state-peasant encounters in rural Oromiya, Ethiopia. Breaking with a centralized past, the incumbent government of the Ethiopian People's Revolutionary Democratic Front (EPRDF) committed itself to a decentralization policy in the early 1990s and has since then created a number of new sites for state-citizen interactions. In the context of electoral authoritarianism, however, decentralization has been interpreted as a means for the expansion of the party-state at the grass-roots level. Against this backdrop, this article attempts... between the 2005 and 2010 elections. Based on ethnographic field research, the empirical case presented discloses that decentralization and state-led development serve the expansion of state power into rural areas, but that state authority is simultaneously constituted and undermined in the course...

  7. Improved compliance by BPM-driven workflow automation.

    Science.gov (United States)

    Holzmüller-Laue, Silke; Göde, Bernd; Fleischer, Heidi; Thurow, Kerstin

    2014-12-01

    Using methods and technologies of business process management (BPM) for laboratory automation has important benefits (i.e., the agility of high-level automation processes, rapid interdisciplinary prototyping and implementation of laboratory tasks and procedures, and efficient real-time process documentation). A principal goal of model-driven development is improved transparency of processes and the alignment of process diagrams with technical code. First experiences of using the business process model and notation (BPMN) show that easy-to-read graphical process models can achieve and provide standardization of laboratory workflows. Model-based development allows processes to be changed quickly and adapted easily to changing requirements. The process models are able to host work procedures and their scheduling in compliance with predefined guidelines and policies. Finally, the process-controlled documentation of complex workflow results addresses modern laboratory needs for quality assurance. BPMN 2.0, as an automation language able to control every kind of activity or subprocess, is directed at complete workflows in end-to-end relationships. BPMN is applicable as a system-independent and cross-disciplinary graphical language to document all methods in laboratories (i.e., screening procedures or analytical processes). That means that, with the BPM standard, a method for sharing process knowledge between laboratories is also available. © 2014 Society for Laboratory Automation and Screening.

  8. Workflows for Full Waveform Inversions

    Science.gov (United States)

    Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas

    2017-04-01

    Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.

  9. The Two Edge Knife of Decentralization

    OpenAIRE

    Umam, Ahmad Khoirul

    2011-01-01

    A centralistic government model has become a trend in a number of developing countries, in which the idiosyncratic aspect becomes the pivotal key in policy making. This situation fosters authoritarianism, cronyism, and corruption. To break the impasse, a decentralized system is proposed to bring people closer to public policy making. Decentralization is also believed to be the solution for creating good governance. But a number of facts in developing countries demonstrate that dec...

  10. Making decentralization work for women in Uganda

    OpenAIRE

    Lakwo, A.

    2009-01-01

    This book is about engendering local governance. It explores the euphoria with which Uganda's decentralization policy took centre stage as a sufficient driver to engender local development responsiveness and accountability. Using a case study of AFARD in Nebbi district, it shows first that decentralized governance is gendered and technocratic as grassroots women's effective participation is lacking. Second, it shows that the insertion of women in local governance is merely a symbolic politica...

  11. Decentralized energy supply and electricity market structures

    OpenAIRE

    Weber, Christoph; Vogel, Philip

    2005-01-01

    Small decentralized power generation units (DG) are politically promoted because of their potential to reduce GHG-emissions and the existing dependency on fossil fuels. A long term goal of this promotion should be the creation of a level playing field for DG and conventional power generation. Due to the impact of DG on the electricity grid infrastructure, future regulation should consider the costs and benefits of the integration of decentralized energy generation units. Without an adequate c...

  12. Decentralized flight trajectory planning of multiple aircraft

    OpenAIRE

    Yokoyama, Nobuhiro; 横山 信宏

    2008-01-01

    Conventional decentralized algorithms for optimal trajectory planning tend to require prohibitive computational time as the number of aircraft increases. To overcome this drawback, this paper proposes a novel decentralized trajectory planning algorithm adopting a constraints decoupling approach for parallel optimization. The constraints decoupling approach is formulated as the path constraints of the real-time trajectory optimization problem based on nonlinear programming. Due to the parallel...

  13. Decentralized electricity production. v. 1 and 2

    International Nuclear Information System (INIS)

    1991-01-01

    The first part of the symposium is concerned with market analysis, case studies and prospectives for the decentralized production of electricity in France: cogeneration, heat networks, municipal waste incineration, etc. Financing systems and microeconomical analysis are presented. The second part is devoted to macroeconomical outlooks (France and Europe mainly) on decentralized electricity production (cogeneration, small-scale hydroelectric power plants), to other countries experience (PV systems connected to the grid, cogeneration, etc.) and to price contracts and regulations

  14. Centralized, Decentralized, and Hybrid Purchasing Organizations

    DEFF Research Database (Denmark)

    Bals, Lydia; Turkulainen, Virpi

    This paper addresses one of the focal issues in purchasing and supply management – global sourcing – from an organizational design perspective. In particular, we elaborate the traditional classification of global sourcing organization designs into centralized, decentralized, and hybrid models. We... organization we can identify organization designs beyond the classical centralization-decentralization continuum. We also provide explanations for the observed organization design at GCC. The study contributes to research on purchasing and supply management as well as research on organization design.

  15. Automation of Flexible Migration Workflows

    Directory of Open Access Journals (Sweden)

    Dirk von Suchodoletz

    2011-03-01

    Full Text Available Many digital preservation scenarios are based on the migration strategy, which itself is heavily tool-dependent. For popular, well-defined and often open file formats – e.g., digital images such as PNG, GIF, JPEG – a wide range of tools exist. Migration workflows become more difficult with proprietary formats, such as those used by the many text processing applications that have appeared over the last two decades. If a certain file format cannot be rendered with current software, emulation of the original environment remains a valid option. For instance, within the original Lotus AmiPro or Word Perfect it is not a problem to save an object of this type in ASCII text or Rich Text Format. In specific environments, it is even possible to send the file to a virtual printer, thereby producing a PDF as migration output. Such manual migration tasks typically involve human interaction, which may be feasible for a small number of objects, but not for larger batches of files. We propose a novel approach using a software-operated VNC abstraction layer in order to replace human interaction with machine interaction. Emulators or virtualization tools equipped with a VNC interface are very well suited to this approach. But screen, keyboard and mouse interaction is just part of the setup. Furthermore, digital objects need to be transferred into the original environment in order to be extracted after processing. Nevertheless, the complexity of the new generation of migration services is rising quickly; a preservation workflow is now comprised not only of the migration tool itself, but of a complete software and virtual hardware stack, with recorded workflows linked to every supported migration scenario. Thus the requirements of OAIS management must include proper software archiving, emulator selection, and system image and recording handling. The concept of view-paths could help either to automatically determine the proper pre-configured virtual environment or to set up system

  16. Partially Decentralized Control Architectures for Satellite Formations

    Science.gov (United States)

    Carpenter, J. Russell; Bauer, Frank H.

    2002-01-01

    In a partially decentralized control architecture, more than one but less than all nodes have supervisory capability. This paper describes an approach to choosing the number of supervisors in such an architecture, based on a reliability vs. cost trade. It also considers the implications of these results for the design of navigation systems for satellite formations that could be controlled with a partially decentralized architecture. Using an assumed cost model, analytic and simulation-based results indicate that it may be cheaper to achieve a given overall system reliability with a partially decentralized architecture containing only a few supervisors than with either fully decentralized or purely centralized architectures. Nominally, the subset of supervisors may act as centralized estimation and control nodes for corresponding subsets of the remaining subordinate nodes, and act as decentralized estimation and control peers with respect to each other. However, in the context of partially decentralized satellite formation control, the absolute positions and velocities of each spacecraft are unique, so that correlations which make estimates using only local information suboptimal only occur through common biases and process noise. Covariance and Monte Carlo analyses of a simplified system show that this lack of correlation may allow simplification of the local estimators while preserving the global optimality of the maneuvers commanded by the supervisors.
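    The reliability side of such a trade can be sketched under a strong independence assumption (ours, not the paper's model): suppose the formation remains controllable as long as at least one of k identical supervisors survives, each with individual reliability r.

```python
def system_reliability(r, k):
    """P(at least one of k supervisors survives), with supervisors
    failing independently and each having reliability r."""
    return 1.0 - (1.0 - r) ** k

def min_supervisors(r, target):
    """Smallest supervisor count whose combined reliability meets the target."""
    k = 1
    while system_reliability(r, k) < target:
        k += 1
    return k
```

    With r = 0.9, for example, three supervisors already push overall reliability past 0.995, which illustrates why a handful of supervisors can be cheaper than full decentralization for the same reliability goal.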

  17. A Scheduling Algorithm for the Distributed Student Registration System in Transaction-Intensive Environment

    Science.gov (United States)

    Li, Wenhao

    2011-01-01

    Distributed workflow technology has been widely used in modern education and e-business systems. Distributed web applications have shown cross-domain and cooperative characteristics to meet the need of current distributed workflow applications. In this paper, the author proposes a dynamic and adaptive scheduling algorithm PCSA (Pre-Calculated…

  18. Pro WF: Windows Workflow in .NET 4.0

    CERN Document Server

    Bukovics, Bruce

    2010-01-01

    Windows Workflow Foundation (WF) is a revolutionary part of the .NET 4 Framework that allows you to orchestrate human and system interactions as a series of workflows that can be easily mapped, analyzed, adjusted, and implemented. As business problems become more complex, the need for workflow-based solutions has never been more evident. WF provides a simple and consistent way to model and implement complex problems. As a developer, you focus on developing the business logic for individual workflow tasks. The runtime handles the execution of those tasks after they have been composed into a wor

  19. Multidetector-row CT: economics and workflow

    International Nuclear Information System (INIS)

    Pottala, K.M.; Kalra, M.K.; Saini, S.; Ouellette, K.; Sahani, D.; Thrall, J.H.

    2005-01-01

    With the rapid evolution of multidetector-row CT (MDCT) technology and applications, several factors such as technology upgrades and turf battles over sharing cost and profitability affect MDCT workflow and economics. MDCT workflow optimization can enhance productivity, reduce unit costs and increase profitability, in spite of decreasing reimbursement rates. Strategies for workflow management include standardization, automation, and constant assessment of the various steps involved in MDCT operations. In this review article, we describe issues related to MDCT economics and workflow. (orig.)

  20. Decentralized Consistent Updates in SDN

    KAUST Repository

    Nguyen, Thanh Dang

    2017-04-10

    We present ez-Segway, a decentralized mechanism to consistently and quickly update the network state while preventing forwarding anomalies (loops and blackholes) and avoiding link congestion. In our design, the centralized SDN controller only pre-computes information needed by the switches during the update execution. This information is distributed to the switches, which use partial knowledge and direct message passing to efficiently realize the update. This separation of concerns has the key benefit of improving update performance as the communication and computation bottlenecks at the controller are removed. Our evaluations via network emulations and large-scale simulations demonstrate the efficiency of ez-Segway, which compared to a centralized approach, improves network update times by up to 45% and 57% at the median and the 99th percentile, respectively. A deployment of a system prototype in a real OpenFlow switch and an implementation in P4 demonstrate the feasibility and low overhead of implementing simple network update functionality within switches.
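    The ordering constraint that prevents transient blackholes can be sketched for a single destination. This is our simplification for illustration, not ez-Segway's actual protocol, which also handles loop avoidance, congestion, and in-switch message passing.

```python
def downstream_first_order(new_next_hop, dst):
    """Order switch updates so that each switch installs its new rule only
    after its new downstream neighbor has been updated: once a switch starts
    using its new next hop, the rest of the new path is already in place."""
    order = []
    remaining = set(new_next_hop)
    while remaining:
        ready = [s for s in remaining
                 if new_next_hop[s] == dst or new_next_hop[s] not in remaining]
        if not ready:  # would only happen if the new configuration loops
            raise ValueError("new configuration contains a forwarding loop")
        order.extend(sorted(ready))
        remaining -= set(ready)
    return order
```

    For a new path s1 -> s2 -> s3 -> d, the schedule is s3, then s2, then s1, so no packet is ever forwarded to a switch that still lacks its new rule.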

  1. Integration of decentralized electricity production

    International Nuclear Information System (INIS)

    Tomekova, A.

    2004-01-01

    The SustelNet project also deals with the possibilities for future development of DG sources. Within the project, a quite general concept of a so-called 'equal field' for centralized and decentralized production was chosen to better integrate DG. Its aim was to achieve the demanded market level in the future term (by 2020). Looking at the problem in a wider context means that both forms of production should be admitted to the market under the same conditions. The result of this project is a regulatory map, which serves as a definite regulatory strategy for more effective employment of DG sources. On the basis of the national regulatory strategies, a proposed regulatory map for the EU will be launched, including some recommendations for the European Commission. A few expert papers (scenarios of proceeding, benchmarking, economic tools and criteria) are also outputs of this project. Five EU member states and four accession countries have been involved in this project. The final results will be presented from April 2004 at international and national conferences and seminars, and through other forms of publicity.

  2. Decentralized and Modular Electrical Architecture

    Science.gov (United States)

    Elisabelar, Christian; Lebaratoux, Laurence

    2014-08-01

    This paper presents the studies made on the definition and design of a decentralized and modular electrical architecture that can be used for power distribution, active thermal control (ATC), and standard input-output electrical interfaces. Traditionally implemented inside a central unit such as an OBC or RTU, these interfaces can be dispatched throughout the satellite by using MicroRTUs. CNES proposes a similar approach to MicroRTU. The system is based on a bus called BRIO (Bus Réparti des IO), which is composed of a power bus and an RS485 digital bus. The BRIO architecture is made up of several miniature terminals called BTCUs (BRIO Terminal Control Units) distributed in the spacecraft. The challenge was to design and develop the BTCU with very little volume, low consumption and low cost. The standard BTCU models are developed and qualified in a configuration dedicated to ATC, while the first flight model will fly on MICROSCOPE for PYRO actuations and analogue acquisitions. The BTCU is designed to be easily adaptable to all types of electrical interface needs. An extension of this concept is envisaged for power conditioning and distribution, and a modular PCDU based on the BRIO concept is proposed.

  3. Towards an Intelligent Workflow Designer based on the Reuse of Workflow Patterns

    NARCIS (Netherlands)

    Iochpe, Cirano; Chiao, Carolina; Hess, Guillermo; Nascimento, Gleison; Thom, Lucinéia; Reichert, Manfred

    2007-01-01

    In order to build process-aware information systems, we need sophisticated methods and concepts for designing and modeling processes. Recently, research on workflow patterns has emerged in order to increase the reuse of recurring workflow structures. However, current workflow modeling tools do not

  4. ADRES : autonomous decentralized regenerative energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Brauner, G.; Einfalt, A.; Leitinger, C.; Tiefgraber, D. [Vienna Univ. of Technology (Austria)

    2007-07-01

    The autonomous decentralized regenerative energy systems (ADRES) research project demonstrates that decentralized, network-independent microgrids are the target power systems of the future. This paper presented a typical structure of a microgrid, demonstrating that all available types of generation can be integrated, from wind and small hydro to photovoltaic, fuel cells, biomass- or biogas-operated Stirling motors, and micro turbines. In grid-connected operation, the balancing energy and reactive power for voltage control will come from the public grid. If there is no interconnection to a superior grid, the system forms an autonomous microgrid. In order to reduce peak power demand and base energy, autonomous microgrid technology requires highly efficient appliances; otherwise, large collector designs and high storage and balancing generation capacities would be necessary, which would increase costs. End-use energy efficiency was discussed with reference to demand side management (DSM) strategies that match energy demand with actual supply in order to minimize the storage size needed. This paper also discussed network controls that comprise active and reactive power. Decentralized robust algorithms were investigated with reference to black-start ability and congestion management features. It was concluded that the trend to develop small decentralized grids in parallel to existing large systems will improve security of supply and reduce greenhouse gas emissions. Decentralized grids will also increase energy efficiency because regenerative energy will be used where it is collected, in the form of electricity and heat, thus avoiding transport and the extension of transmission lines. Decentralized energy technology is now becoming more economic through efficient mass production of components. Although decentralized energy technology requires energy automation, computer intelligence is becoming increasingly cost efficient. 2 refs., 4 figs.

  5. Abstract flexibility description for virtual power plant scheduling

    OpenAIRE

    Fröhling, Judith

    2017-01-01

    In the ongoing paradigm shift of the energy market from big power plants to more and more small and decentralized power plants, virtual power plants (VPPs) play an important role. VPPs bundle the capacities of small and decentralized energy resources (DER). Planning of VPP operation, also called scheduling, relies on the flexibilities of controllable DER in the VPP, e.g., combined heat and power plants (CHPs), heat pumps and batteries. The aim of this thesis is the development of an abstr...

  6. A customizable, scalable scheduling and reporting system.

    Science.gov (United States)

    Wood, Jody L; Whitman, Beverly J; Mackley, Lisa A; Armstrong, Robert; Shotto, Robert T

    2014-06-01

    Scheduling is essential for running a facility smoothly and for summarizing activities in use reports. The Penn State Hershey Clinical Simulation Center has developed a scheduling interface that uses off-the-shelf components, with customizations that adapt to each institution's data collection and reporting needs. The system is designed using programs within the Microsoft Office 2010 suite. Outlook provides the scheduling component, while the reporting is performed using Access or Excel. An account with a calendar is created for the main schedule, with separate resource accounts created for each room within the center. The Outlook appointment form's 2 default tabs are used, in addition to a customized third tab. The data are then copied from the calendar into either a database table or a spreadsheet, where the reports are generated.Incorporating this system into an institution-wide structure allows integration of personnel lists and potentially enables all users to check the schedule from their desktop. Outlook also has a Web-based application for viewing the basic schedule from outside the institution, although customized data cannot be accessed. The scheduling and reporting functions have been used for a year at the Penn State Hershey Clinical Simulation Center. The schedule has increased workflow efficiency, improved the quality of recorded information, and provided more accurate reporting. The Penn State Hershey Clinical Simulation Center's scheduling and reporting system can be adapted easily to most simulation centers and can expand and change to meet future growth with little or no expense to the center.
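    The copy-then-aggregate reporting step can be sketched as follows. This is a generic illustration, not the Center's actual Access/Excel code; the event field names are our own assumptions.

```python
from collections import defaultdict
from datetime import datetime

def usage_report(events):
    """Total booked hours per room from rows exported out of the calendar."""
    hours = defaultdict(float)
    for event in events:
        start = datetime.fromisoformat(event["start"])
        end = datetime.fromisoformat(event["end"])
        hours[event["room"]] += (end - start).total_seconds() / 3600.0
    return dict(hours)
```

    Feeding the exported appointment rows for each resource account through such an aggregation yields the per-room use reports described above.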

  7. Workflow patterns the definitive guide

    CERN Document Server

    Russell, Nick; ter Hofstede, Arthur H M

    2016-01-01

    The study of business processes has emerged as a highly effective approach to coordinating an organization's complex service- and knowledge-based activities. The growing field of business process management (BPM) focuses on methods and tools for designing, enacting, and analyzing business processes. This volume offers a definitive guide to the use of patterns, which synthesize the wide range of approaches to modeling business processes. It provides a unique and comprehensive introduction to the well-known workflow patterns collection -- recurrent, generic constructs describing common business process modeling and execution scenarios, presented in the form of problem-solution dialectics. The underlying principles of the patterns approach ensure that they are independent of any specific enabling technology, representational formalism, or modeling approach, and thus broadly applicable across the business process modeling and business process technology domains. The authors, drawing on extensive research done by...

  8. Complexity Metrics for Workflow Nets

    DEFF Research Database (Denmark)

    Lassen, Kristian Bisgaard; van der Aalst, Wil M.P.

    2009-01-01

    analysts have difficulties grasping the dynamics implied by a process model. Recent empirical studies show that people make numerous errors when modeling complex business processes, e.g., about 20 percent of the EPCs in the SAP reference model have design flaws resulting in potential deadlocks, livelocks, etc. It seems obvious that the complexity of the model contributes to design errors and a lack of understanding. It is not easy to measure complexity, however. This paper presents three complexity metrics that have been implemented in the process analysis tool ProM. The metrics are defined for a subclass of Petri nets named Workflow nets, but the results can easily be applied to other languages. To demonstrate the applicability of these metrics, we have applied our approach and tool to 262 relatively complex Protos models made in the context of various student projects. This allows us to validate...
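    One simple metric in this spirit is the coefficient of network complexity, the ratio of arcs to nodes in the net. This sketch is our own illustration of the idea and is not necessarily one of the three metrics implemented in ProM.

```python
def coefficient_of_network_complexity(net):
    """Arcs per node in a workflow net; denser nets tend to be harder to read.
    The net is a dict with 'places', 'transitions', and 'arcs' collections."""
    nodes = len(net["places"]) + len(net["transitions"])
    return len(net["arcs"]) / nodes
```

    A purely sequential net scores low, while nets with many splits and joins accumulate arcs faster than nodes and score higher.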

  9. Contracts for Cross-Organizational Workflow Management

    NARCIS (Netherlands)

    Koetsier, M.J.; Grefen, P.W.P.J.; Vonk, J.

    1999-01-01

    Nowadays, many organizations form dynamic partnerships to deal effectively with market requirements. As companies use automated workflow systems to control their processes, a way of linking workflow processes in different organizations is useful in turning the co-operating companies into a seamless

  10. Verifying generalized soundness for workflow nets

    NARCIS (Netherlands)

    Hee, van K.M.; Oanea, O.I.; Sidorova, N.; Voorhoeve, M.; Virbitskaite, I.; Voronkov, A.

    2007-01-01

    We improve the decision procedure from [10] for the problem of generalized soundness of workflow nets. A workflow net is generalized sound iff every marking reachable from an initial marking with k tokens on the initial place terminates properly, i.e. it can reach a marking with k tokens on the

  11. Implementing bioinformatic workflows within the bioextract server

    Science.gov (United States)

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  12. Workflow Fault Tree Generation Through Model Checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2014-01-01

    We present a framework for the automated generation of fault trees from models of realworld process workflows, expressed in a formalised subset of the popular Business Process Modelling and Notation (BPMN) language. To capture uncertainty and unreliability in workflows, we extend this formalism...

  13. Workflow Patterns for Business Process Modeling

    NARCIS (Netherlands)

    Thom, Lucineia Heloisa; Lochpe, Cirano; Reichert, M.U.

    Because of their reuse advantages, workflow patterns (e.g., control flow patterns, data patterns, resource patterns) are increasingly attracting the interest of both researchers and vendors. Frequently, business process or workflow models can be assembled out of a set of recurrent process fragments (or

  14. Sustainability evaluation of decentralized electricity generation

    International Nuclear Information System (INIS)

    Karger, Cornelia R.; Hennings, Wilfried

    2009-01-01

    Decentralized power generation is gaining significance in liberalized electricity markets. An increasing decentralization of power supply is expected to make a particular contribution to climate protection. This article investigates the advantages and disadvantages of decentralized electricity generation according to the overall concept of sustainable development. On the basis of a hierarchically structured set of sustainability criteria, four future scenarios for Germany are assessed, all of which describe different concepts of electricity supply in the context of the corresponding social and economic developments. The scenarios are developed in an explorative way according to the scenario method and the sustainability criteria are established by a discursive method with societal actors. The evaluation is carried out by scientific experts. By applying an expanded analytic hierarchy process (AHP), a multicriteria evaluation is conducted that identifies dissent among the experts. The results demonstrate that decentralized electricity generation can contribute to climate protection. The extent to which it simultaneously guarantees security of supply is still a matter of controversy. However, experts agree that technical and economic boundary conditions are of major importance in this field. In the final section, the article discusses the method employed here as well as implications for future decentralized energy supply. (author)

  15. A Formal Framework for Workflow Analysis

    Science.gov (United States)

    Cravo, Glória

    2010-09-01

    In this paper we provide a new formal framework to model and analyse workflows. A workflow is the formal definition of a business process that consists of the execution of tasks in order to achieve a certain objective. In our work we describe a workflow as a graph whose vertices represent tasks and whose arcs are associated with workflow transitions. Each task has an associated input/output logic operator. This logic operator can be the logical AND (•), the OR (⊗), or the XOR (exclusive or, ⊕). Moreover, we introduce algebraic concepts in order to completely describe the structure of workflows. We also introduce the concept of logical termination. Finally, we provide a necessary and sufficient condition for this property to hold.
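
    The graph representation described in this abstract can be encoded directly. The sketch below is a toy example with invented tasks and operators; it checks only the simple structural necessary condition that every task lies on a path from the initial to the final task, not the paper's algebraic logical-termination criterion.

    ```python
    # Each task carries an (input operator, output operator) pair, as in the
    # abstract; an AND output is a parallel split, XOR an exclusive choice.
    tasks = {
        "start": ("XOR", "AND"),
        "a":     ("XOR", "XOR"),
        "b":     ("XOR", "XOR"),
        "end":   ("AND", "XOR"),
    }
    # Arcs are workflow transitions between tasks.
    arcs = [("start", "a"), ("start", "b"), ("a", "end"), ("b", "end")]

    def successors(t):
        return [v for u, v in arcs if u == t]

    def reaches(src, dst):
        """Depth-first search: is there a directed path from src to dst?"""
        seen, stack = set(), [src]
        while stack:
            t = stack.pop()
            if t == dst:
                return True
            if t not in seen:
                seen.add(t)
                stack.extend(successors(t))
        return False

    def on_start_end_path(task):
        # Necessary (not sufficient) condition for well-formedness:
        # the task is reachable from "start" and can reach "end".
        return reaches("start", task) and reaches(task, "end")

    print(all(on_start_end_path(t) for t in tasks))  # prints True
    ```

    A full logical-termination check would additionally have to respect the AND/OR/XOR semantics at splits and joins, which is what the paper's algebraic framework formalizes.
    
    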

  16. Using Mobile Agents to Implement Workflow System

    Institute of Scientific and Technical Information of China (English)

    LI Jie; LIU Xian-xing; GUO Zheng-wei

    2004-01-01

    Current workflow management systems usually adopt existing technologies, such as TCP/IP-based Web technologies and CORBA, for the underlying communications. Very often these have been considered only from a theoretical point of view, mainly because concrete possibilities for flexible execution are lacking. MAT (Mobile Agent Technology) represents a very attractive approach to the distributed control of computer networks and a valid alternative for implementing workflow systems. This paper mainly focuses on improving the performance of workflow systems by using MAT. First, the performance of workflow systems based on both CORBA and mobile agents is summarized and analyzed; second, a performance comparison is presented by introducing a mathematical model of each kind of data interaction process; last, a mobile agent-based workflow system named MAWMS is presented and described in detail.

  17. Dynamic Reusable Workflows for Ocean Science

    Directory of Open Access Journals (Sweden)

    Richard P. Signell

    2016-10-01

    Full Text Available Digital catalogs of ocean data have been available for decades, but advances in standardized services and software for catalog searches and data access now make it possible to create catalog-driven workflows that automate—end-to-end—data search, analysis, and visualization of data from multiple distributed sources. Further, these workflows may be shared, reused, and adapted with ease. Here we describe a workflow developed within the US Integrated Ocean Observing System (IOOS) which automates the skill assessment of water temperature forecasts from multiple ocean forecast models, allowing improved forecast products to be delivered for an open water swim event. A series of Jupyter Notebooks are used to capture and document the end-to-end workflow using a collection of Python tools that facilitate working with standardized catalog and data services. The workflow first searches a catalog of metadata using the Open Geospatial Consortium (OGC) Catalog Service for the Web (CSW), then accesses data service endpoints found in the metadata records using the OGC Sensor Observation Service (SOS) for in situ sensor data and OPeNDAP services for remotely-sensed and model data. Skill metrics are computed and time series comparisons of forecast model and observed data are displayed interactively, leveraging the capabilities of modern web browsers. The resulting workflow not only solves a challenging specific problem, but highlights the benefits of dynamic, reusable workflows in general. These workflows adapt as new data enter the data system, facilitate reproducible science, provide templates from which new scientific workflows can be developed, and encourage data providers to use standardized services. As applied to the ocean swim event, the workflow exposed problems with two of the ocean forecast products which led to improved regional forecasts once errors were corrected. While the example is specific, the approach is general, and we hope to see increased
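
    The abstract mentions computing skill metrics for forecast models. One common choice (illustrative only; the notebooks' exact metric definitions are not given in the abstract, and all numbers below are made up) is the mean-squared-error skill score of a forecast against observations, relative to a reference such as climatology:

    ```python
    import numpy as np

    def mse(a, b):
        """Mean squared error between two series."""
        return np.mean((np.asarray(a) - np.asarray(b)) ** 2)

    def skill_score(forecast, obs, reference):
        """1 - MSE(forecast)/MSE(reference); 1 is perfect, <= 0 is no skill."""
        return 1.0 - mse(forecast, obs) / mse(reference, obs)

    obs      = [14.2, 14.8, 15.1, 15.6, 16.0]   # observed water temperature, deg C
    forecast = [14.0, 14.9, 15.3, 15.5, 16.2]   # model forecast
    clim     = [15.0] * 5                        # climatological reference

    print(round(skill_score(forecast, obs, clim), 3))  # prints 0.932
    ```

    A score near 1 means the forecast adds substantial value over the reference; a score at or below 0 means the reference would have done as well or better.
    
    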

  18. Dynamic reusable workflows for ocean science

    Science.gov (United States)

    Signell, Richard; Fernandez, Filipe; Wilcox, Kyle

    2016-01-01

    Digital catalogs of ocean data have been available for decades, but advances in standardized services and software for catalog search and data access now make it possible to create catalog-driven workflows that automate — end-to-end — data search, analysis and visualization of data from multiple distributed sources. Further, these workflows may be shared, reused and adapted with ease. Here we describe a workflow developed within the US Integrated Ocean Observing System (IOOS) which automates the skill-assessment of water temperature forecasts from multiple ocean forecast models, allowing improved forecast products to be delivered for an open water swim event. A series of Jupyter Notebooks are used to capture and document the end-to-end workflow using a collection of Python tools that facilitate working with standardized catalog and data services. The workflow first searches a catalog of metadata using the Open Geospatial Consortium (OGC) Catalog Service for the Web (CSW), then accesses data service endpoints found in the metadata records using the OGC Sensor Observation Service (SOS) for in situ sensor data and OPeNDAP services for remotely-sensed and model data. Skill metrics are computed and time series comparisons of forecast model and observed data are displayed interactively, leveraging the capabilities of modern web browsers. The resulting workflow not only solves a challenging specific problem, but highlights the benefits of dynamic, reusable workflows in general. These workflows adapt as new data enters the data system, facilitate reproducible science, provide templates from which new scientific workflows can be developed, and encourage data providers to use standardized services. As applied to the ocean swim event, the workflow exposed problems with two of the ocean forecast products which led to improved regional forecasts once errors were corrected. While the example is specific, the approach is general, and we hope to see increased use of dynamic

  19. Distribution of decentralized renewable energy resources

    International Nuclear Information System (INIS)

    Bal, J.L.; Benque, J.P.

    1996-01-01

    The existence of a great number of inhabitants without electricity, living in areas of low population density, with modest energy requirements and low income, provides a major potential market for decentralized renewable energy sources. In 1993, Ademe and EDF signed two agreements concerning the development of Renewable Energy Sources. The first aims at promoting their decentralized use in France in pertinent cases. The second agreement concerns other countries and has two ambitions: to facilitate short-term developments and, in the longer term, to produce a standardised proposal for decentralized energy production making considerable use of Renewable Energy Sources. These ideas are explained, and the principles behind the implementation of both Ademe-EDF agreements as well as their future prospects are described. (R.P.)

  20. Towards Automatic Decentralized Control Structure Selection

    DEFF Research Database (Denmark)

    A subtask in integration of design and control of chemical processes is the selection of a control structure. Automating the selection of the control structure enables sequential integration of process and control design. As soon as the process is specified or computed, a structure for decentralized control is determined automatically, and the resulting decentralized control structure is automatically tuned using standard techniques. Dynamic simulation of the resulting process system gives immediate feedback to the process design engineer regarding practical operability of the process. The control structure selection problem is formulated as a special MILP employing cost coefficients which are computed using Parseval's theorem combined with RGA and IMC concepts. This approach enables selection and tuning of large-scale plant-wide decentralized controllers through efficient combination...

  1. Towards Automatic Decentralized Control Structure Selection

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2000-01-01

    A subtask in integration of design and control of chemical processes is the selection of a control structure. Automating the selection of the control structure enables sequential integration of process and control design. As soon as the process is specified or computed, a structure for decentralized control is determined automatically, and the resulting decentralized control structure is automatically tuned using standard techniques. Dynamic simulation of the resulting process system gives immediate feedback to the process design engineer regarding practical operability of the process. The control structure selection problem is formulated as a special MILP employing cost coefficients which are computed using Parseval's theorem combined with RGA and IMC concepts. This approach enables selection and tuning of large-scale plant-wide decentralized controllers through efficient combination...
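
    The RGA mentioned in the abstract can be computed directly from a steady-state gain matrix via Λ(G) = G ∘ (G⁻¹)ᵀ (elementwise product). The sketch below shows only this ingredient; the gain matrix is just an example, and the Parseval-based cost coefficients and the MILP itself are omitted.

    ```python
    import numpy as np

    def rga(G):
        """Relative Gain Array: Lambda(G) = G * inverse(G).T, elementwise."""
        return G * np.linalg.inv(G).T

    # Example 2x2 steady-state gain matrix (Wood-Berry-style distillation gains).
    G = np.array([[12.8, -18.9],
                  [ 6.6, -19.4]])

    L = rga(G)
    print(np.round(L, 2))
    # A defining property: rows and columns of an RGA always sum to 1.
    print(np.allclose(L.sum(axis=0), 1.0), np.allclose(L.sum(axis=1), 1.0))  # prints True True
    ```

    In decentralized control structure selection, pairings whose relative gains are close to 1 are preferred, since they indicate weak interaction between the chosen loops.
    
    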

  2. Decentralization, Local Rights and the Construction of Women's ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Decentralization, Local Rights and the Construction of Women's Citizenship : a Comparative Study in Kenya, Tanzania and Uganda - Phase II. Kenya, Tanzania and Uganda have adopted new land laws, policies and institutional arrangements to accommodate decentralization of land administration and management.

  3. ADVANCED APPROACH TO PRODUCTION WORKFLOW COMPOSITION ON ENGINEERING KNOWLEDGE PORTALS

    OpenAIRE

    Novogrudska, Rina; Kot, Tatyana; Globa, Larisa; Schill, Alexander

    2016-01-01

    Background. Engineering knowledge portals host a great number of partial workflows. Such workflows are composed into a general workflow in order to perform real, complex production tasks. The characteristics of partial workflows and the structure of the general workflow have not been studied sufficiently, which makes dynamic composition of the general production workflow impossible. Objective. Creating an approach to dynamic composition of the general production workflow based on the partial wor...

  4. Centralized vs. decentralized child mental health services.

    Science.gov (United States)

    Adams, M S

    1977-09-01

    One of the basic tenets of the Community Mental Health Center movement is that services should be provided in the consumers' community. Various centers across the country have attempted to do this in either a centralized or decentralized fashion. Historically, most health services have been provided centrally, a good example being the traditional general hospital with its centralized medical services. Over the years, some of these services have become decentralized to take the form of local health centers, health maintenance organizations, community clinics, etc, and now various large mental health centers are also being broken down into smaller community units. An example of each type of mental health facility is delineated here.

  5. Decentralizing decision making in modularization strategies

    DEFF Research Database (Denmark)

    Israelsen, Poul; Jørgensen, Brian

    2011-01-01

    which distorts the economic effects of modularization at the level of the individual product. This has the implication that decisions on modularization can only be made by top management if decision authority and relevant information are to be aligned. To overcome this problem, we suggest a solution that aligns the descriptions of the economic consequences of modularization at the project and portfolio level, which makes it possible to decentralize decision making while making sure that local goals are congruent with the global ones in order to avoid suboptimal behaviour. Keywords: Modularization; Accounting; Cost allocation; Decision rule; Decentralization

  6. Near optimal decentralized H_inf control

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    It is shown that for a class of decentralized control problems there does not exist a sequence of controllers of bounded order which obtains near optimal control. Neither does there exist an infinite-dimensional optimal controller. Using the insight of the line of proof of these results, a heuristic...

  7. Decentralization and Economic Growth per capita in Europe

    NARCIS (Netherlands)

    Crucq, Pieter; Hemminga, Hendrik-Jan

    2007-01-01

    In this paper the relationship between decentralization and economic growth is investigated. The focus is on decentralization from the national government to the highest substate level in a country, which we define as regional decentralization. Section 2 discusses the different dimensions of

  8. Development of a pharmacy resident rotation to expand decentralized clinical pharmacy services.

    Science.gov (United States)

    Hill, John D; Williams, Jonathan P; Barnes, Julie F; Greenlee, Katie M; Leonard, Mandy C

    2017-07-15

    The development of a pharmacy resident rotation to expand decentralized clinical pharmacy services is described. In an effort to align with the initiatives proposed within the ASHP Practice Advancement Initiative, the department of pharmacy at Cleveland Clinic, a 1,400-bed academic, tertiary acute care medical center in Cleveland, Ohio, established a goal to provide decentralized clinical pharmacy services for 100% of patient care units within the hospital. Patient care units that previously had no decentralized pharmacy services were evaluated to identify opportunities for expansion. Metrics analyzed included number of medication orders verified per hour, number of pharmacy dosing consultations, and number of patient discharge counseling sessions. A pilot study was conducted to assess the feasibility of this service and potential resident learning opportunities. A learning experience description was drafted, and feedback was solicited regarding the development of educational components utilized throughout the rotation. Pharmacists who were providing services to similar patient populations were identified to serve as preceptors. Staff pharmacists were deployed to previously uncovered patient care units, with pharmacy residents providing decentralized services on previously covered areas. A rotating preceptor schedule was developed based on geographic proximity and clinical expertise. An initial postimplementation assessment of this resident-driven service revealed that pharmacy residents provided a comparable level of pharmacy services to that of staff pharmacists. Feedback collected from nurses, physicians, and pharmacy staff also supported residents' ability to operate sufficiently in this role to optimize patient care. A learning experience developed for pharmacy residents in a large medical center enabled the expansion of decentralized clinical services without requiring additional pharmacist full-time equivalents. Copyright © 2017 by the American Society of

  9. Scheduling the scheduling task : a time management perspective on scheduling

    NARCIS (Netherlands)

    Larco Martinelli, J.A.; Wiers, V.C.S.; Fransoo, J.C.

    2013-01-01

    Time is the most critical resource at the disposal of schedulers. Hence, adequate time management by schedulers may have a positive impact on their productivity and responsiveness in uncertain scheduling environments. This paper presents a field study of how schedulers make use of

  10. Workflow Management in CLARIN-DK

    DEFF Research Database (Denmark)

    Jongejan, Bart

    2013-01-01

    The CLARIN-DK infrastructure is not only a repository of resources, but also a place where users can analyse, annotate, reformat and potentially even translate resources, using tools that are integrated in the infrastructure as web services. In many cases a single tool does not produce the desired...... with the features that describe her goal, because the workflow manager not only executes chains of tools in a workflow, but also takes care of autonomously devising workflows that serve the user’s intention, given the tools that currently are integrated in the infrastructure as web services. To do this...

  11. The Diabetic Retinopathy Screening Workflow

    Science.gov (United States)

    Bolster, Nigel M.; Giardini, Mario E.; Bastawrous, Andrew

    2015-01-01

    Complications of diabetes mellitus, namely diabetic retinopathy and diabetic maculopathy, are the leading cause of blindness in working aged people. Sufferers can avoid blindness if identified early via retinal imaging. Systematic screening of the diabetic population has been shown to greatly reduce the prevalence and incidence of blindness within the population. Many national screening programs have digital fundus photography as their basis. In the past 5 years several techniques and adapters have been developed that allow digital fundus photography to be performed using smartphones. We review recent progress in smartphone-based fundus imaging and discuss its potential for integration into national systematic diabetic retinopathy screening programs. Some systems have produced promising initial results with respect to their agreement with reference standards. However further multisite trialling of such systems’ use within implementable screening workflows is required if an evidence base strong enough to affect policy change is to be established. If this were to occur, national diabetic retinopathy screening would, for the first time, become possible in low- and middle-income settings where cost and availability of trained eye care personnel are currently key barriers to implementation. As diabetes prevalence and incidence are increasing sharply in these settings, the impact on global blindness could be profound. PMID:26596630

  12. Workflow Optimization in Vertebrobasilar Occlusion

    International Nuclear Information System (INIS)

    Kamper, Lars; Meyn, Hannes; Rybacki, Konrad; Nordmeyer, Simone; Kempkes, Udo; Piroth, Werner; Isenmann, Stefan; Haage, Patrick

    2012-01-01

    Objective: In vertebrobasilar occlusion, rapid recanalization is the only substantial means to improve the prognosis. We introduced a standard operating procedure (SOP) for interventional therapy to analyze the effects on interdisciplinary time management. Methods: Intrahospital time periods between hospital admission and neuroradiological intervention were retrospectively analyzed, together with the patients’ outcome, before (n = 18) and after (n = 20) implementation of the SOP. Results: After implementation of the SOP, we observed statistically significant improvement of postinterventional patient neurological status (p = 0.017). In addition, we found a decrease of 5:33 h for the mean time period from hospital admission until neuroradiological intervention. The recanalization rate increased from 72.2% to 80% after implementation of the SOP. Conclusion: Our results underscore the relevance of SOP implementation and analysis of time management for clinical workflow optimization. Both may trigger awareness for the need of efficient interdisciplinary time management. This could be an explanation for the decreased time periods and improved postinterventional patient status after SOP implementation.

  13. Security aspects in teleradiology workflow

    Science.gov (United States)

    Soegner, Peter I.; Helweg, Gernot; Holzer, Heimo; zur Nedden, Dieter

    2000-05-01

    The medicolegal necessity of privacy, security and confidentiality motivated our attempt to develop a secure teleradiology workflow between the telepartners, i.e. the radiologist and the referring physician. To avoid lack of data protection and data security we introduced biometric fingerprint scanners in combination with smart cards to identify the teleradiology partners and communicated over an encrypted TCP/IP satellite link between Innsbruck and Reutte. We used an asymmetric cryptography method to guarantee authentication, integrity of the data packages and confidentiality of the medical data. It was necessary to use a biometric feature to avoid cases of mistaken identity of persons who wanted access to the system. Only an invariable electronic identification allowed legal liability for the final report, and only a secure data connection allowed the exchange of sensitive medical data between different partners of health care networks. In our study we selected the user-friendly combination of a smart card and a biometric fingerprint technique, called Skymed™ Double Guard Secure Keyboard (Agfa-Gevaert), to confirm identities and log into the imaging workstations and the electronic patient record. We examined the interoperability of the software used with the existing platforms. Only the WIN-XX operating systems could be protected at the time of our study.

  14. Integrative Workflows for Metagenomic Analysis

    Directory of Open Access Journals (Sweden)

    Efthymios eLadoukakis

    2014-11-01

    Full Text Available The rapid evolution of sequencing technologies, described by the term Next Generation Sequencing (NGS), has revolutionized metagenomic analysis. They constitute a combination of high-throughput analytical protocols, coupled to delicate measuring techniques, in order to potentially discover, properly assemble and map allelic sequences to the correct genomes, achieving particularly high yields for only a fraction of the cost of traditional processes (i.e., Sanger). From a bioinformatic perspective, this boils down to many gigabytes of data being generated from each single sequencing experiment, rendering management and even storage critical bottlenecks with respect to the overall analytical endeavor. The enormous complexity is further aggravated by the versatility of the available processing steps, represented by the numerous bioinformatic tools that are essential, for each analytical task, in order to fully unveil the genetic content of a metagenomic dataset. These disparate tasks range from simple, nonetheless non-trivial, quality control of raw data to exceptionally complex protein annotation procedures, requiring a high level of expertise for their proper application or for the neat implementation of the whole workflow. Furthermore, a bioinformatic analysis of such scale requires grand computational resources, imposing cloud computing infrastructures as the sole realistic solution. In this review article we discuss the different integrative bioinformatic solutions available, which address the aforementioned issues, by performing a critical assessment of the available automated pipelines for data management, quality control and annotation of metagenomic data, embracing various major sequencing technologies and applications.

  15. Metropolitan Schools: Administrative Decentralization vs. Community Control.

    Science.gov (United States)

    Ornstein, Allan C.

    This book is divided into four chapters. The first examines the concepts and issues related to understanding social systems and how the schools can be viewed as a social system. The differences between centralization and decentralization, as well as systems-analysis and management-control approaches are also explored. In the next chapter, we are…

  16. Quotas and Decentralization (Indonesia) | CRDI - Centre de ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Over the past decade, the Government of Indonesia has taken steps to enhance the participation of women in public office. One element of the strategy is decentralization, promoted under the slogan, "local autonomy for people empowerment and welfare." Support for gender mainstreaming was proclaimed and in 2003 an ...

  17. Decentralized forest governance in central Vietnam

    NARCIS (Netherlands)

    Tran Nam, T.; Burgers, P.P.M.

    2012-01-01

    A major challenge in decentralized forest governance in Vietnam is developing a mechanism that would support both reforestation and poverty reduction among people in rural communities. To help address this challenge, Forest Land Allocation (FLA) policies recognize local communities and individuals

  18. Making decentralization work for women in Uganda

    NARCIS (Netherlands)

    Lakwo, A.

    2009-01-01

    This book is about engendering local governance. It explores the euphoria with which Uganda's decentralization policy took centre stage as a sufficient driver to engender local development responsiveness and accountability. Using a case study of AFARD in Nebbi district, it shows first that

  19. Decentralized Development Planning and Fragmentation of ...

    African Journals Online (AJOL)

    Using the Greater Accra Metropolitan Area (GAMA) as a case study, this paper argues that the proliferation of autonomous local government areas within the context of urban sprawl and other challenges have inhibited metropolitan-wide development planning. Keywords: Decentralization; local government; urban growth; ...

  20. Promoting Research for Policymaking under Decentralization in ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Major reforms in governance carried out in Peru during the early 2000s saw the decentralization of functions and resources from the central government in Lima to elected regional governments. Today, regional governments account for the majority of public investment, but show limited capacity to cope with the ...

  1. Decentralized Development Planning and Fragmentation of ...

    African Journals Online (AJOL)

    Using the GAMA as a case study, this paper examines the proliferation of .... These spatial definitions give territorial meaning to decentralization as dis- ... Formulated and implemented under a military regime, the Provisional ..... increased to four in 2004 following the creation of new districts in the country, and as part of.

  2. Leadership and the Decentralized Control of Schools

    Science.gov (United States)

    Steinberg, Matthew P.

    2013-01-01

    This review examines the literature related to leadership and the decentralized control of schools. It first considers the distinctive goals of public and private agencies, the specific constraints that shape the autonomy of leaders in different sectors, and the ways in which new models of public management are infusing public agencies with…

  3. Decentralized Networked Control of Building Structures

    Czech Academy of Sciences Publication Activity Database

    Bakule, Lubomír; Rehák, Branislav; Papík, Martin

    2016-01-01

    Vol. 31, No. 11 (2016), pp. 871-886. ISSN 1093-9687. R&D Projects: GA ČR GA13-02149S. Institutional support: RVO:67985556. Keywords: decentralized control * networked control * building structures. Subject RIV: BC - Control Systems Theory. Impact factor: 5.786, year: 2016

  4. Women's Political Representation and Participation in Decentralized ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Huairou Commission User

    facilitate people's participation in national development through ensuring sound local level politics. • RC evolved into local councils which then led to the implementation of decentralization through the local government act (1997). • This policy has provided opportunities for women to participate in local leadership from.

  5. Decentralized indirect methods for learning automata games.

    Science.gov (United States)

    Tilak, Omkar; Martin, Ryan; Mukhopadhyay, Snehasis

    2011-10-01

    We discuss the application of indirect learning methods in zero-sum and identical payoff learning automata games. We propose a novel decentralized version of the well-known pursuit learning algorithm. Such a decentralized algorithm has significant computational advantages over its centralized counterpart. The theoretical study of such a decentralized algorithm requires the analysis to be carried out in a nonstationary environment. We use a novel bootstrapping argument to prove the convergence of the algorithm. To our knowledge, this is the first time that such analysis has been carried out for zero-sum and identical payoff games. Extensive simulation studies are reported, which demonstrate the proposed algorithm's fast and accurate convergence in a variety of game scenarios. We also introduce the framework of partial communication in the context of identical payoff games of learning automata. In such games, the automata may not communicate with each other or may communicate selectively. This comprehensive framework has the capability to model both centralized and decentralized games discussed in this paper.
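
    To illustrate the setting of the abstract above, the sketch below shows two decentralized learning automata playing an identical payoff game with the classical linear reward-inaction (L_R-I) update, a simpler scheme than the decentralized pursuit algorithm the paper proposes. Each automaton updates its own action probabilities from the common binary payoff without observing the other's action; the payoff matrix and all parameters are invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Probability of a unit (binary) payoff for each joint action.
    # The optimal joint action here is (0, 0).
    D = np.array([[0.9, 0.2],
                  [0.3, 0.4]])

    lam = 0.05                  # learning rate
    p1 = np.array([0.5, 0.5])   # automaton 1's action probabilities
    p2 = np.array([0.5, 0.5])   # automaton 2's action probabilities

    for _ in range(5000):
        a1 = rng.choice(2, p=p1)            # each automaton samples its own action
        a2 = rng.choice(2, p=p2)
        reward = rng.random() < D[a1, a2]   # common identical payoff
        if reward:                          # reward-inaction: update only on reward
            p1 = p1 + lam * (np.eye(2)[a1] - p1)
            p2 = p2 + lam * (np.eye(2)[a2] - p2)

    print(np.round(p1, 3), np.round(p2, 3))
    ```

    With high probability both probability vectors concentrate on the optimal joint action, though L_R-I (unlike well-tuned pursuit schemes) can occasionally absorb into a suboptimal pure strategy.
    
    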

  6. Decentralized Reinforcement Learning of robot behaviors

    NARCIS (Netherlands)

    Leottau, David L.; Ruiz-del-Solar, Javier; Babuska, R.

    2018-01-01

    A multi-agent methodology is proposed for Decentralized Reinforcement Learning (DRL) of individual behaviors in problems where multi-dimensional action spaces are involved. When using this methodology, sub-tasks are learned in parallel by individual agents working toward a common goal. In

  7. Critical Systems Thinking on Decentralization: the Corporate ...

    African Journals Online (AJOL)

    This article calls for the devolution of power by large organizations to their subsidiaries or subordinate units – mainly Strategic Business Units (SBUs). It proposes more decentralized models of management and outlines a new theory taking a critical systems thinking approach. Corporations are advised to attack and ...

  8. Strategies for organizing training: centralized or decentralized

    International Nuclear Information System (INIS)

    Kanous, L.E.

    1979-01-01

    Studies were conducted at the Detroit Edison Company for the purpose of determining the effectiveness of training. A systems approach from the corporate perspective was found to be needed and worthwhile. At the conclusion of these studies, a decision was made to move in the direction of a centralized vs. decentralized organizational strategy for training.

  9. Decentralization and Living Conditions in the EU

    NARCIS (Netherlands)

    Vries, M.S. de; Goymen, K.; Sazak, O.

    2014-01-01

    This paper investigates the effects of decentralization on living conditions in core cities in the European Union. It uses data from the Urban Audit to investigate whether the level of local expenditures relative to central government expenditures has an impact on the subjective appreciation of

  10. Satellite Power System (SPS) centralization/decentralization

    Science.gov (United States)

    Naisbitt, J.

    1978-01-01

    The decentralization of government in the United States of America is described and its effect on the solution of energy problems is given. The human response to the introduction of new technologies is considered as well as the behavioral aspects of multiple options.

  11. Towards a Decentralized Magnetic Indoor Positioning System

    Science.gov (United States)

    Kasmi, Zakaria; Norrdine, Abdelmoumen; Blankenbach, Jörg

    2015-01-01

    Decentralized magnetic indoor localization is a sophisticated method for processing sampled magnetic data directly on a mobile station (MS), thereby decreasing or even avoiding the need for communication with the base station. In contrast to central-oriented positioning systems, which transmit raw data to a base station, decentralized indoor localization pushes application-level knowledge into the MS. A decentralized position solution has thus a strong feasibility to increase energy efficiency and to prolong the lifetime of the MS. In this article, we present a complete architecture and an implementation for a decentralized positioning system. Furthermore, we introduce a technique for the synchronization of the observed magnetic field on the MS with the artificially-generated magnetic field from the coils. Based on real-time clocks (RTCs) and a preemptive operating system, this method allows a stand-alone control of the coils and a proper assignment of the measured magnetic fields on the MS. A stand-alone control and synchronization of the coils and the MS have an exceptional potential to implement a positioning system without the need for wired or wireless communication and enable a deployment of applications for rescue scenarios, like localization of miners or firefighters. PMID:26690145

  12. Towards a Decentralized Magnetic Indoor Positioning System

    Directory of Open Access Journals (Sweden)

    Zakaria Kasmi

    2015-12-01

    Full Text Available Decentralized magnetic indoor localization is a sophisticated method for processing sampled magnetic data directly on a mobile station (MS), thereby decreasing or even avoiding the need for communication with the base station. In contrast to central-oriented positioning systems, which transmit raw data to a base station, decentralized indoor localization pushes application-level knowledge into the MS. A decentralized position solution has thus a strong feasibility to increase energy efficiency and to prolong the lifetime of the MS. In this article, we present a complete architecture and an implementation for a decentralized positioning system. Furthermore, we introduce a technique for the synchronization of the observed magnetic field on the MS with the artificially-generated magnetic field from the coils. Based on real-time clocks (RTCs) and a preemptive operating system, this method allows a stand-alone control of the coils and a proper assignment of the measured magnetic fields on the MS. A stand-alone control and synchronization of the coils and the MS have an exceptional potential to implement a positioning system without the need for wired or wireless communication and enable a deployment of applications for rescue scenarios, like localization of miners or firefighters.

  13. Decentralized data fusion with inverse covariance intersection

    NARCIS (Netherlands)

    Noack, B.; Sijs, J.; Reinhardt, M.; Hanebeck, U.D.

    2017-01-01

    In distributed and decentralized state estimation systems, fusion methods are employed to systematically combine multiple estimates of the state into a single, more accurate estimate. An often encountered problem in the fusion process relates to unknown common information that is shared by the

  14. Decentralization Fails Women in Sudan | IDRC - International ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2010-11-05

    Nov 5, 2010 ... In Sudan, decentralization is a process that has occurred over time and is ... In northern Sudan, some women travel three days to reach the nearest hospital. ... Accord stipulate that basic education is free, “in real life, it is not.”.

  15. PeerMatcher: Decentralized Partnership Formation

    NARCIS (Netherlands)

    Bozdog, N.V.; Voulgaris, S.; Bal, H.E.; van Halteren, A.

    2015-01-01

    This paper presents PeerMatcher, a fully decentralized algorithm solving the k-clique matching problem. The aim of k-clique matching is to cluster a set of nodes having pairwise weights into k-size groups of maximal total weight. Since solving the problem requires exponential time, PeerMatcher

  16. Exploring Two Approaches for an End-to-End Scientific Analysis Workflow

    Science.gov (United States)

    Dodelson, Scott; Kent, Steve; Kowalkowski, Jim; Paterno, Marc; Sehrish, Saba

    2015-12-01

    The scientific discovery process can be advanced by the integration of independently-developed programs run on disparate computing facilities into coherent workflows usable by scientists who are not experts in computing. For such advancement, we need a system which scientists can use to formulate analysis workflows, to integrate new components to these workflows, and to execute different components on resources that are best suited to run those components. In addition, we need to monitor the status of the workflow as components get scheduled and executed, and to access the intermediate and final output for visual exploration and analysis. Finally, it is important for scientists to be able to share their workflows with collaborators. We have explored two approaches for such an analysis framework for the Large Synoptic Survey Telescope (LSST) Dark Energy Science Collaboration (DESC); the first one is based on the use and extension of Galaxy, a web-based portal for biomedical research, and the second one is based on a programming language, Python. In this paper, we present a brief description of the two approaches, describe the kinds of extensions to the Galaxy system we have found necessary in order to support the wide variety of scientific analysis in the cosmology community, and discuss how similar efforts might be of benefit to the HEP community.

  17. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    Science.gov (United States)

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    Due to the upcoming deluge of genome data, storing and processing large-scale genome data, providing easy access to biomedical analysis tools, and enabling efficient data sharing and retrieval present significant challenges. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Workflow Based Software Development Environment, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of this proposed research is to investigate and develop a workflow based tool, the Software Developers Assistant, to facilitate the collaboration between...

  19. COSMOS: Python library for massively parallel workflows.

    Science.gov (United States)

    Gafni, Erik; Luquette, Lovelace J; Lancaster, Alex K; Hawkins, Jared B; Jung, Jae-Yoon; Souilmi, Yassine; Wall, Dennis P; Tonellato, Peter J

    2014-10-15

    Efficient workflows to shepherd clinically generated genomic data through the multiple stages of a next-generation sequencing pipeline are of critical importance in translational biomedical science. Here we present COSMOS, a Python library for workflow management that allows formal description of pipelines and partitioning of jobs. In addition, it includes a user interface for tracking the progress of jobs, abstraction of the queuing system and fine-grained control over the workflow. Workflows can be created on traditional computing clusters as well as cloud-based services. Source code is available for academic non-commercial research purposes. Links to code and documentation are provided at http://lpm.hms.harvard.edu and http://wall-lab.stanford.edu. dpwall@stanford.edu or peter_tonellato@hms.harvard.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
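
    The formal description of pipelines and partitioning of jobs mentioned above rests on dependency-ordered (topological) scheduling of a task DAG. The sketch below illustrates that general idea with Kahn's algorithm on a hypothetical sequencing pipeline; it is not the COSMOS API, and all stage names are invented:

    ```python
    from collections import deque

    def topo_schedule(deps):
        """Order pipeline stages so each stage runs after its prerequisites
        (Kahn's algorithm). `deps` maps stage -> list of prerequisite stages."""
        # in-degree = number of unfinished prerequisites per stage
        indeg = {t: len(ds) for t, ds in deps.items()}
        children = {t: [] for t in deps}
        for t, ds in deps.items():
            for d in ds:
                children[d].append(t)
        ready = deque(sorted(t for t, n in indeg.items() if n == 0))
        order = []
        while ready:
            t = ready.popleft()
            order.append(t)
            for c in children[t]:       # a finished stage unlocks its children
                indeg[c] -= 1
                if indeg[c] == 0:
                    ready.append(c)
        if len(order) != len(deps):
            raise ValueError("cycle in pipeline description")
        return order

    # Hypothetical NGS pipeline: names are illustrative only.
    pipeline = {
        "align": [],
        "sort": ["align"],
        "dedup": ["sort"],
        "call_variants": ["dedup"],
        "qc_report": ["align"],
    }
    order = topo_schedule(pipeline)
    ```

    A workflow manager layers job submission, queue abstraction and progress tracking on top of exactly this kind of dependency resolution.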

  20. Implementing Workflow Reconfiguration in WS-BPEL

    DEFF Research Database (Denmark)

    Mazzara, Manuel; Dragoni, Nicola; Zhou, Mu

    2012-01-01

    This paper investigates the problem of dynamic reconfiguration by means of a workflow-based case study used for discussion. We state the requirements on a system implementing the workflow and its reconfiguration, and we describe the system’s design in BPMN. WS-BPEL, a language that would not naturally support dynamic change, is used as a target for implementation. The WS-BPEL recovery framework is here exploited to implement the reconfiguration using principles derived from previous research in process algebra, and two mappings from BPMN to WS-BPEL are presented, one automatic and only mostly...

  1. Logical provenance in data-oriented workflows?

    KAUST Repository

    Ikeda, R.; Das Sarma, Akash; Widom, J.

    2013-01-01

    for general transformations, introducing the notions of correctness, precision, and minimality. We then determine when properties such as correctness and minimality carry over from the individual transformations' provenance to the workflow provenance. We

  2. A Multilevel Secure Workflow Management System

    National Research Council Canada - National Science Library

    Kang, Myong H; Froscher, Judith N; Sheth, Amit P; Kochut, Krys J; Miller, John A

    1999-01-01

    The Department of Defense (DoD) needs multilevel secure (MLS) workflow management systems to enable globally distributed users and applications to cooperate across classification levels to achieve mission critical goals...

  3. Workflow Based Software Development Environment, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of this proposed research is to investigate and develop a workflow based tool, the Software Developers Assistant, to facilitate the collaboration between...

  4. Integrated workflows for spiking neuronal network simulations

    Directory of Open Access Journals (Sweden)

    Ján Antolík

    2013-12-01

    Full Text Available The increasing availability of computational resources is enabling more detailed, realistic modelling in computational neuroscience, resulting in a shift towards more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeller's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modellers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualisation into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organised configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualisation stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort.
Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modelling studies by relieving the user from manual handling of the flow of metadata between the individual

  5. Autonomic Management of Application Workflows on Hybrid Computing Infrastructure

    Directory of Open Access Journals (Sweden)

    Hyunjoo Kim

    2011-01-01

    Full Text Available In this paper, we present a programming and runtime framework that enables the autonomic management of complex application workflows on hybrid computing infrastructures. The framework is designed to address system and application heterogeneity and dynamics to ensure that application objectives and constraints are satisfied. The need for such autonomic system and application management is becoming critical as computing infrastructures become increasingly heterogeneous, integrating different classes of resources from high-end HPC systems to commodity clusters and clouds. For example, the framework presented in this paper can be used to provision the appropriate mix of resources based on application requirements and constraints. The framework also monitors the system/application state and adapts the application and/or resources to respond to changing requirements or environment. To demonstrate the operation of the framework and to evaluate its ability, we employ a workflow used to characterize an oil reservoir executing on a hybrid infrastructure composed of TeraGrid nodes and Amazon EC2 instances of various types. Specifically, we show how different application objectives such as acceleration, conservation and resilience can be effectively achieved while satisfying deadline and budget constraints, using an appropriate mix of dynamically provisioned resources. Our evaluations also demonstrate that public clouds can be used to complement and reinforce the scheduling and usage of traditional high performance computing infrastructure.

  6. Performance Analysis of the Decentralized Eigendecomposition and ESPRIT Algorithm

    Science.gov (United States)

    Suleiman, Wassim; Pesavento, Marius; Zoubir, Abdelhak M.

    2016-05-01

    In this paper, we consider performance analysis of the decentralized power method for the eigendecomposition of the sample covariance matrix based on the averaging consensus protocol. An analytical expression of the second order statistics of the eigenvectors obtained from the decentralized power method which is required for computing the mean square error (MSE) of subspace-based estimators is presented. We show that the decentralized power method is not an asymptotically consistent estimator of the eigenvectors of the true measurement covariance matrix unless the averaging consensus protocol is carried out over an infinitely large number of iterations. Moreover, we introduce the decentralized ESPRIT algorithm which yields fully decentralized direction-of-arrival (DOA) estimates. Based on the performance analysis of the decentralized power method, we derive an analytical expression of the MSE of DOA estimators using the decentralized ESPRIT algorithm. The validity of our asymptotic results is demonstrated by simulations.
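
    The decentralized power method analyzed above can be sketched as follows: each node holds a local sample covariance, and the network-wide matrix-vector product is approximated by averaging-consensus iterations instead of being collected at a fusion center. This is an illustrative simulation (the consensus matrix `W`, data and iteration counts are made up), not the paper's exact algorithm or notation:

    ```python
    import numpy as np

    def decentralized_power_method(local_covs, W, n_power=50, n_consensus=30):
        """Power iteration where the global product (1/K) * sum_k C_k v is
        approximated by consensus averaging with a doubly stochastic W."""
        K = len(local_covs)
        n = local_covs[0].shape[0]
        # Every node starts from the same initial vector for simplicity.
        v = np.ones((K, n)) / np.sqrt(n)
        for _ in range(n_power):
            # Local matrix-vector products C_k v_k at every node.
            y = np.stack([C @ v[k] for k, C in enumerate(local_covs)])
            # Approximate the network-wide average by consensus iterations.
            for _ in range(n_consensus):
                y = W @ y
            # Each node normalizes its own copy of the estimate.
            v = y / np.linalg.norm(y, axis=1, keepdims=True)
        return v  # one dominant-eigenvector estimate per node

    # Synthetic data whose dominant principal direction is axis 0.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 4))
    samples = A @ np.diag([3.0, 1.0, 0.5, 0.2])
    parts = np.array_split(samples, 4)            # 4 nodes, local data
    covs = [p.T @ p / len(p) for p in parts]      # local sample covariances
    W = np.full((4, 4), 0.25)                     # fully connected averaging
    v = decentralized_power_method(covs, W)
    ```

    With the fully connected averaging matrix used here a single consensus step already yields the exact average; on sparser topologies the extra iterations matter, and a finite number of them is precisely what makes the estimator inconsistent, as the paper shows.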

  7. Decentralization of Health System in Islamic Republic of Iran

    Directory of Open Access Journals (Sweden)

    MJ Kabir

    2008-10-01

    Full Text Available Decentralization is the process of dispersing decision-making closer to the peripheral point of an area, service or action. Basically, decentralized governance, if properly planned and implemented, offers important opportunities for enhanced human development. Studies of this issue in different countries show that most decentralization has been implemented in European countries; in comparison, the Middle East countries have utilized lower degrees of the decentralization process. In fact, decentralization in the health system is a policy pursued for a variety of purposes, including: increasing service delivery effectiveness and equity, improving efficiency and quality, ensuring fairness of financial contribution, and planning for choosing the most appropriate interventions for the health priorities in peripheral regions. To implement decentralized governance, there is a spectrum of different choices whose degrees the government should regulate. Providing an appropriate atmosphere for decentralization is essential; otherwise, lack of planning and achievement can result in complications for the system.

  8. Multilevel Workflow System in the ATLAS Experiment

    CERN Document Server

    Borodin, M; The ATLAS collaboration; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2015-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard-processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates actual workflow tasks and their jobs...

  9. Optimizing perioperative decision making: improved information for clinical workflow planning.

    Science.gov (United States)

    Doebbeling, Bradley N; Burton, Matthew M; Wiebke, Eric A; Miller, Spencer; Baxter, Laurence; Miller, Donald; Alvarez, Jorge; Pekny, Joseph

    2012-01-01

    Perioperative care is complex and involves multiple interconnected subsystems. Delayed starts, prolonged cases and overtime are common. Surgical procedures account for 40-70% of hospital revenues and 30-40% of total costs. Most planning and scheduling in healthcare is done without modern planning tools, which have potential for improving access by assisting in operations planning support. We identified key planning scenarios of interest to perioperative leaders, in order to examine the feasibility of applying combinatorial optimization software solving some of those planning issues in the operative setting. Perioperative leaders desire a broad range of tools for planning and assessing alternate solutions. Our modeled solutions generated feasible solutions that varied as expected, based on resource and policy assumptions and found better utilization of scarce resources. Combinatorial optimization modeling can effectively evaluate alternatives to support key decisions for planning clinical workflow and improving care efficiency and satisfaction.

  10. Access Control with Delegated Authorization Policy Evaluation for Data-Driven Microservice Workflows

    Directory of Open Access Journals (Sweden)

    Davy Preuveneers

    2017-09-01

    Full Text Available Microservices offer a compelling competitive advantage for building data flow systems as a choreography of self-contained data endpoints that each implement a specific data processing functionality. Such a ‘single responsibility principle’ design makes them well suited for constructing scalable and flexible data integration and real-time data flow applications. In this paper, we investigate microservice based data processing workflows from a security point of view, i.e., (1) how to constrain data processing workflows with respect to dynamic authorization policies granting or denying access to certain microservice results depending on the flow of the data; (2) how to let multiple microservices contribute to a collective data-driven authorization decision; and (3) how to put adequate measures in place such that the data within each individual microservice is protected against illegitimate access from unauthorized users or other microservices. Due to this multifold objective, enforcing access control on the data endpoints to prevent information leakage or preserve one’s privacy becomes far more challenging, as authorization policies can have dependencies and decision outcomes cross-cutting data in multiple microservices. To address this challenge, we present and evaluate a workflow-oriented authorization framework that enforces authorization policies in a decentralized manner and where the delegated policy evaluation leverages feature toggles that are managed at runtime by software circuit breakers to secure the distributed data processing workflows. The benefit of our solution is that, on the one hand, authorization policies restrict access to the data endpoints of the microservices, and on the other hand, microservices can safely rely on other data endpoints to collectively evaluate cross-cutting access control decisions without having to rely on a shared storage backend holding all the necessary information for the policy evaluation.
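
    A minimal sketch of the delegated, deny-overrides evaluation idea described above: each microservice contributes a local policy decision, and an unreachable evaluator fails closed, standing in for the paper's circuit-breaker behaviour. All service names and policy logic are invented for illustration:

    ```python
    def evaluate_workflow_access(evaluators, request):
        """Deny-overrides combination of delegated policy decisions.

        Each evaluator stands for the policy decision point of one
        microservice and returns "permit" or "deny"; an evaluator that
        raises (e.g. its circuit breaker is open) is treated as "deny",
        i.e. the workflow fails closed.
        """
        for evaluate in evaluators:
            try:
                decision = evaluate(request)
            except Exception:          # unreachable service -> fail closed
                decision = "deny"
            if decision != "permit":
                return "deny"          # any non-permit denies the request
        return "permit"

    # Hypothetical per-service policies.
    def identity_service(request):
        return "permit"

    def purpose_filter(request):       # data-driven: depends on request data
        return "permit" if request.get("purpose") == "billing" else "deny"

    def offline_service(request):
        raise RuntimeError("service unreachable")
    ```

    The deny-overrides and fail-closed choices are one conservative combination strategy; the framework in the paper additionally manages which evaluators are consulted at runtime via feature toggles.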

  11. DEWEY: the DICOM-enabled workflow engine system.

    Science.gov (United States)

    Erickson, Bradley J; Langer, Steve G; Blezek, Daniel J; Ryan, William J; French, Todd L

    2014-06-01

    Workflow is a widely used term to describe the sequence of steps to accomplish a task. The use of workflow technology in medicine, and in medical imaging in particular, is limited. In this article, we describe the application of a workflow engine to improve workflow in a radiology department. We implemented a DICOM-enabled workflow engine system in our department, designed to allow for scalability, reliability, and flexibility. We implemented several workflows, including one that replaced an existing manual workflow, and measured the number of examinations prepared in time without and with the workflow system. The system significantly increased the number of examinations prepared in time for clinical review compared to human effort. It also met the design goals defined at its outset. Workflow engines appear to have value as ways to efficiently assure that complex workflows are completed in a timely fashion.

  12. The PBase Scientific Workflow Provenance Repository

    Directory of Open Access Journals (Sweden)

    Víctor Cuevas-Vicenttín

    2014-10-01

    Full Text Available Scientific workflows and their supporting systems are becoming increasingly popular for compute-intensive and data-intensive scientific experiments. The advantages scientific workflows offer include rapid and easy workflow design, software and data reuse, scalable execution, sharing and collaboration, and other advantages that altogether facilitate “reproducible science”. In this context, provenance – information about the origin, context, derivation, ownership, or history of some artifact – plays a key role, since scientists are interested in examining and auditing the results of scientific experiments. However, in order to perform such analyses on scientific results as part of extended research collaborations, an adequate environment and tools are required. Concretely, the need arises for a repository that will facilitate the sharing of scientific workflows and their associated execution traces in an interoperable manner, also enabling querying and visualization. Furthermore, such functionality should be supported while taking performance and scalability into account. With this purpose in mind, we introduce PBase: a scientific workflow provenance repository implementing the ProvONE proposed standard, which extends the emerging W3C PROV standard for provenance data with workflow specific concepts. PBase is built on the Neo4j graph database, thus offering capabilities such as declarative and efficient querying. Our experiences demonstrate the power gained by supporting various types of queries for provenance data. In addition, PBase is equipped with a user friendly interface tailored for the visualization of scientific workflow provenance data, making the specification of queries and the interpretation of their results easier and more effective.

  13. Multilevel Workflow System in the ATLAS Experiment

    International Nuclear Information System (INIS)

    Borodin, M; De, K; Navarro, J Garcia; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2015-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard-processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates actual workflow tasks and their jobs are executed across more than a hundred distributed computing sites by PanDA - the ATLAS job-level workload management system. On the outer level, the Database Engine for Tasks (DEfT) empowers production managers with templated workflow definitions. On the next level, the Job Execution and Definition Interface (JEDI) is integrated with PanDA to provide dynamic job definition tailored to the sites capabilities. We report on scaling up the production system to accommodate a growing number of requirements from main ATLAS areas: Trigger, Physics and Data Preparation. (paper)

  14. (De)Centralization of the Global Informational Ecosystem

    Directory of Open Access Journals (Sweden)

    Johanna Möller

    2017-09-01

    Full Text Available Centralization and decentralization are key concepts in debates that focus on the (anti)democratic character of digital societies. Centralization is understood as the control over communication and data flows, and decentralization as giving it (back) to users. Communication and media research focuses on centralization put forward by dominant digital media platforms, such as Facebook and Google, and governments. Decentralization is investigated regarding its potential in civil society, i.e., hacktivism, (en)cryption technologies, and grass-root technology movements. As content-based media companies increasingly engage with technology, they move into the focus of critical media studies. Moreover, as formerly nationally oriented companies now compete with global media platforms, they share several interests with civil society decentralization agents. Based on 26 qualitative interviews with leading media managers, we investigate (de)centralization strategies applied by content-oriented media companies. Theoretically, this perspective on media companies as agents of (de)centralization expands (de)centralization research beyond traditional democratic stakeholders by considering economic actors within the “global informational ecosystem” (Birkinbine, Gómez, & Wasko, 2017). We provide a three-dimensional framework to empirically investigate (de)centralization. From critical media studies, we borrow the (de)centralization of data and infrastructures, and from media business research, the (de)centralization of content distribution.

  15. Progress in digital color workflow understanding in the International Color Consortium (ICC) Workflow WG

    Science.gov (United States)

    McCarthy, Ann

    2006-01-01

    The ICC Workflow WG serves as the bridge between ICC color management technologies and use of those technologies in real world color production applications. ICC color management is applicable to and is used in a wide range of color systems, from highly specialized digital cinema color special effects to high volume publications printing to home photography. The ICC Workflow WG works to align ICC technologies so that the color management needs of these diverse use case systems are addressed in an open, platform independent manner. This report provides a high level summary of the ICC Workflow WG objectives and work to date, focusing on the ways in which workflow can impact image quality and color systems performance. The 'ICC Workflow Primitives' and 'ICC Workflow Patterns and Dimensions' workflow models are covered in some detail. Consider the questions, "How much of dissatisfaction with color management today is the result of 'the wrong color transformation at the wrong time' and 'I can't get to the right conversion at the right point in my work process'?" Put another way, consider how image quality through a workflow can be negatively affected when the coordination and control level of the color management system is not sufficient.

  16. A method to mine workflows from provenance for assisting scientific workflow composition

    NARCIS (Netherlands)

    Zeng, R.; He, X.; Aalst, van der W.M.P.

    2011-01-01

    Scientific workflows have recently emerged as a new paradigm for representing and managing complex distributed scientific computations and are used to accelerate the pace of scientific discovery. In many disciplines, individual workflows are large and complicated due to the large quantities of data

  17. Centralization vs. Decentralization in Medical School Libraries

    Science.gov (United States)

    Crawford, Helen

    1966-01-01

    Does the medical school library in the United States operate more commonly under the university library or the medical school administration? University-connected medical school libraries were asked to indicate (a) the source of their budgets, whether from the central library or the medical school, and (b) the responsibility for their acquisitions and cataloging. Returns received from sixty-eight of the seventy eligible institutions showed decentralization to be by far the most common: 71 percent of the libraries are funded by their medical schools; 79 percent are responsible for their own acquisitions and processing. The factor most often associated with centralization of both budget and operation is public ownership. Decentralization is associated with service to one or two rather than three or more professional schools. Location of the medical school in a different city from the university is highly favorable to autonomy. Other factors associated with these trends are discussed. PMID:5945568

  18. Asynchronous decentralized method for interconnected electricity markets

    International Nuclear Information System (INIS)

    Huang, Anni; Joo, Sung-Kwan; Song, Kyung-Bin; Kim, Jin-Ho; Lee, Kisung

    2008-01-01

    This paper presents an asynchronous decentralized method to solve the optimization problem of interconnected electricity markets. The proposed method decomposes the optimization problem of combined electricity markets into individual optimization problems. The impact of neighboring markets' information is included in the objective function of the individual market optimization problem by the standard Lagrangian relaxation method. Most decentralized optimization methods use synchronous models of communication to exchange updated market information among markets during the iterative process. In this paper, however, the solutions of the individual optimization problems are coordinated through an asynchronous communication model until they converge to the global optimal solution of combined markets. Numerical examples are presented to demonstrate the advantages of the proposed asynchronous method over the existing synchronous methods. (author)
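The Lagrangian-relaxation decomposition described above can be sketched as a price-coordination loop: each market solves its own subproblem given the current coupling price, and a coordinator updates the price from the constraint mismatch. The quadratic costs, step size, and demand below are invented for illustration, and the loop shown is synchronous for brevity, whereas the paper's contribution is an asynchronous coordination model.

```python
# Illustrative sketch of dual (Lagrangian) decomposition for two coupled
# markets with quadratic costs c_i(g) = a_i * g**2 and the coupling
# constraint g1 + g2 = demand. All numbers are invented.

def solve_market(a, price):
    """Each market maximizes price*g - a*g**2, giving g = price / (2a)."""
    return price / (2.0 * a)

def coordinate(a1, a2, demand, step=0.1, iters=2000):
    price = 0.0
    for _ in range(iters):
        g1 = solve_market(a1, price)
        g2 = solve_market(a2, price)
        # Subgradient update on the coupling constraint g1 + g2 = demand.
        price += step * (demand - g1 - g2)
    return price, g1, g2

price, g1, g2 = coordinate(a1=1.0, a2=2.0, demand=90.0)
print(round(g1 + g2, 2))  # -> 90.0: total generation meets demand
```

At convergence the common price equals both markets' marginal costs, which is exactly the coordination property the decentralized scheme relies on.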

  19. Centralization vs. decentralization in medical school libraries.

    Science.gov (United States)

    Crawford, H

    1966-07-01

    Does the medical school library in the United States operate more commonly under the university library or the medical school administration? University-connected medical school libraries were asked to indicate (a) the source of their budgets, whether from the central library or the medical school, and (b) the responsibility for their acquisitions and cataloging. Returns received from sixty-eight of the seventy eligible institutions showed decentralization to be by far the most common: 71 percent of the libraries are funded by their medical schools; 79 percent are responsible for their own acquisitions and processing. The factor most often associated with centralization of both budget and operation is public ownership. Decentralization is associated with service to one or two rather than three or more professional schools. Location of the medical school in a different city from the university is highly favorable to autonomy. Other factors associated with these trends are discussed.

  20. On the Feasibility of Decentralized Derivatives Markets

    OpenAIRE

    Eskandari, Shayan; Clark, Jeremy; Sundaresan, Vignesh; Adham, Moe

    2018-01-01

    In this paper, we present Velocity, a decentralized market deployed on Ethereum for trading a custom type of derivative option. To enable the smart contract to work, we also implement a price fetching tool called PriceGeth. We present this as a case study, noting challenges in development of the system that might be of independent interest to those working on smart contract implementations. We also apply recent academic results on the security of the Solidity smart contract language in valida...

  1. Quantifying risk for decentralized offensive cyber operations

    OpenAIRE

    Klipstein, Michael S.

    2017-01-01

    Approved for public release; distribution is unlimited Includes supplementary material. Reissued 7 Sep 2017 with corrections to committee titles. Computer networks and the amount of information stored within government computer networks have become ubiquitous. With the possible decentralization of authorities to conduct offensive cyber operations, leaders and their respective staffs of organizations below the national level cannot adequately assess risks and consequences of these ope...

  2. Grey water reclamation by decentralized MBR prototype

    OpenAIRE

    Santasmasas Rubiralta, Carme; Rovira Boixaderas, Miquel; Clarens Blanco, Frederic; Valderrama Angel, César Alberto

    2013-01-01

    Grey water treatment and reuse for non-drinking water requirements has become of great interest in arid and semi-arid zones where water resources are becoming both quantitatively and qualitatively scarce. In this study a decentralized and automatic MBR prototype has been designed and installed in the REMOSA facilities for treatment of low-load grey water to be recycled in flushing-toilet application. The recycling treatment of grey water comprises four stages: screening, biological oxidation,...

  3. Decentralized robust control design using LMI

    Directory of Open Access Journals (Sweden)

    Dušan Krokavec

    2008-03-01

    Full Text Available The paper deals with the application of decentralized controllers to large-scale systems with subsystem interactions and uncertainties in the system matrices. The desired stability of the whole system is guaranteed while, at the same time, the tolerable bounds on the uncertainties due to structural changes are maximized. The design approach is based on an adaptation of linear matrix inequality (LMI) techniques for stabilizing controller design.
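The paper synthesizes decentralized gains via LMIs, which requires an SDP solver. As a hedged stand-in that avoids that dependency, the sketch below only *verifies* stability of a given block-diagonal (decentralized) feedback by solving the Lyapunov equation A_cl^T P + P A_cl = -Q and checking that P is positive definite; the system matrices and gains are invented for illustration.

```python
# Verify that a decentralized (diagonal) state feedback stabilizes a
# weakly coupled two-subsystem plant via a Lyapunov-equation check.
# Matrices and gains are illustrative, not from the paper.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.5, 0.1],
              [0.2, 0.3]])      # open-loop (unstable) plant with coupling
B = np.eye(2)
K = np.diag([2.0, 1.5])         # decentralized feedback: each subsystem
                                # uses only its own state
A_cl = A - B @ K

# Solve A_cl^T P + P A_cl = -I; P > 0 certifies closed-loop stability.
P = solve_continuous_lyapunov(A_cl.T, -np.eye(2))
stable = bool(np.all(np.linalg.eigvalsh(P) > 0))
print(stable)  # -> True
```

An LMI-based design would instead treat P (and the gains) as decision variables and maximize the tolerable uncertainty bound subject to the same inequality.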

  4. Analyzing Von Neumann machines using decentralized symmetries

    Science.gov (United States)

    Fang, Jie

    2013-10-01

    The artificial intelligence method to e-business is defined not only by the study of fiber-optic cables, but also by the unproven need for vacuum tubes. Given the current status of virtual archetypes, theorists clearly desire the exploration of semaphores, which embodies the compelling principles of cryptoanalysis. We present an algorithm for probabilistic theory (Buck), which we use to disprove that write-back caches can be made decentralized, lossless, and reliable.

  5. The effects of fiscal decentralization in Albania

    OpenAIRE

    Dr.Sc. Blerta Dragusha; Dr.Sc. Elez Osmani

    2012-01-01

    “Basically decentralization is a democratic reform which seeks to transfer the political, administrative, financial and planning authority from central to local government. It seeks to develop civic participation, empowerment of local people in decision making process and to promote accountability and reliability: To achieve efficiency and effectiveness in the collection and management of resources and service delivery” [1]. The interest and curiosity of knowing how our country is doing in th...

  6. Environmental aspects of decentralized electricity production

    International Nuclear Information System (INIS)

    Henry, J.P.

    1991-01-01

    Renewable energy sources are the focus of considerable interest because they do not place future generations at risk; the development of cogeneration has been favorably received on the whole because it uses energy that would otherwise be lost. Difficulties are sometimes encountered in the development of small-scale hydroelectric facilities (the negative image of older facilities, the impression of overproduction in France, etc.). Environmental protection regulations do not distinguish between centralized and decentralized electricity production, but between large and small production facilities.

  7. Witnet: A Decentralized Oracle Network Protocol

    OpenAIRE

    de Pedro, Adán Sánchez; Levi, Daniele; Cuende, Luis Iván

    2017-01-01

    Witnet is a decentralized oracle network (DON) that connects smart contracts to the outer world. Generally speaking, it allows any piece of software to retrieve the contents published at any web address at a certain point in time, with complete and verifiable proof of its integrity and without blindly trusting any third party. Witnet runs on a blockchain with a native protocol token (called Wit), which miners, called witnesses, earn by retrieving, attesting and delivering web contents for clien...

  8. The MPO system for automatic workflow documentation

    Energy Technology Data Exchange (ETDEWEB)

    Abla, G.; Coviello, E.N.; Flanagan, S.M. [General Atomics, P.O. Box 85608, San Diego, CA 92186-5608 (United States); Greenwald, M. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Lee, X. [General Atomics, P.O. Box 85608, San Diego, CA 92186-5608 (United States); Romosan, A. [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Schissel, D.P., E-mail: schissel@fusion.gat.com [General Atomics, P.O. Box 85608, San Diego, CA 92186-5608 (United States); Shoshani, A. [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Stillerman, J.; Wright, J. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Wu, K.J. [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2016-11-15

    Highlights: • Data model, infrastructure, and tools for data tracking, cataloging, and integration. • Automatically document workflow and data provenance in the widest sense. • Fusion Science as test bed but the system’s framework and data model is quite general. - Abstract: Data from large-scale experiments and extreme-scale computing is expensive to produce and may be used for critical applications. However, it is not the mere existence of data that is important, but our ability to make use of it. Experience has shown that when metadata is better organized and more complete, the underlying data becomes more useful. Traditionally, capturing the steps of scientific workflows and metadata was the role of the lab notebook, but the digital era has resulted instead in the fragmentation of data, processing, and annotation. This paper presents the Metadata, Provenance, and Ontology (MPO) System, the software that can automate the documentation of scientific workflows and associated information. Based on recorded metadata, it provides explicit information about the relationships among the elements of workflows in notebook form augmented with directed acyclic graphs. A set of web-based graphical navigation tools and Application Programming Interface (API) have been created for searching and browsing, as well as programmatically accessing the workflows and data. We describe the MPO concepts and its software architecture. We also report the current status of the software as well as the initial deployment experience.

  9. Integrating configuration workflows with project management system

    International Nuclear Information System (INIS)

    Nilsen, Dimitri; Weber, Pavel

    2014-01-01

    The complexity of the heterogeneous computing resources, services and recurring infrastructure changes at the GridKa WLCG Tier-1 computing center require a structured approach to configuration management and optimization of interplay between functional components of the whole system. A set of tools deployed at GridKa, including Puppet, Redmine, Foreman, SVN and Icinga, provides the administrative environment giving the possibility to define and develop configuration workflows, reduce the administrative effort and improve sustainable operation of the whole computing center. In this presentation we discuss the developed configuration scenarios implemented at GridKa, which we use for host installation, service deployment, change management procedures, service retirement etc. The integration of Puppet with a project management tool like Redmine provides us with the opportunity to track problem issues, organize tasks and automate these workflows. The interaction between Puppet and Redmine results in automatic updates of the issues related to the executed workflow performed by different system components. The extensive configuration workflows require collaboration and interaction between different departments like network, security, production etc. at GridKa. Redmine plugins developed at GridKa and integrated in its administrative environment provide an effective way of collaboration within the GridKa team. We present the structural overview of the software components, their connections, communication protocols and show a few working examples of the workflows and their automation.

  10. The MPO system for automatic workflow documentation

    International Nuclear Information System (INIS)

    Abla, G.; Coviello, E.N.; Flanagan, S.M.; Greenwald, M.; Lee, X.; Romosan, A.; Schissel, D.P.; Shoshani, A.; Stillerman, J.; Wright, J.; Wu, K.J.

    2016-01-01

    Highlights: • Data model, infrastructure, and tools for data tracking, cataloging, and integration. • Automatically document workflow and data provenance in the widest sense. • Fusion Science as test bed but the system’s framework and data model is quite general. - Abstract: Data from large-scale experiments and extreme-scale computing is expensive to produce and may be used for critical applications. However, it is not the mere existence of data that is important, but our ability to make use of it. Experience has shown that when metadata is better organized and more complete, the underlying data becomes more useful. Traditionally, capturing the steps of scientific workflows and metadata was the role of the lab notebook, but the digital era has resulted instead in the fragmentation of data, processing, and annotation. This paper presents the Metadata, Provenance, and Ontology (MPO) System, the software that can automate the documentation of scientific workflows and associated information. Based on recorded metadata, it provides explicit information about the relationships among the elements of workflows in notebook form augmented with directed acyclic graphs. A set of web-based graphical navigation tools and Application Programming Interface (API) have been created for searching and browsing, as well as programmatically accessing the workflows and data. We describe the MPO concepts and its software architecture. We also report the current status of the software as well as the initial deployment experience.

  11. Query Optimizations over Decentralized RDF Graphs

    KAUST Repository

    Abdelaziz, Ibrahim

    2017-05-18

    Applications in life sciences, decentralized social networks, Internet of Things, and statistical linked dataspaces integrate data from multiple decentralized RDF graphs via SPARQL queries. Several approaches have been proposed to optimize query processing over a small number of heterogeneous data sources by utilizing schema information. In the case of schema similarity and interlinks among sources, these approaches cause unnecessary data retrieval and communication, leading to poor scalability and response time. This paper addresses these limitations and presents Lusail, a system for scalable and efficient SPARQL query processing over decentralized graphs. Lusail achieves scalability and low query response time through various optimizations at compile and run times. At compile time, we use a novel locality-aware query decomposition technique that maximizes the number of query triple patterns sent together to a source based on the actual location of the instances satisfying these triple patterns. At run time, we use selectivity-awareness and parallel query execution to reduce network latency and to increase parallelism by delaying the execution of subqueries expected to return large results. We evaluate Lusail using real and synthetic benchmarks, with data sizes up to billions of triples on an in-house cluster and a public cloud. We show that Lusail outperforms state-of-the-art systems by orders of magnitude in terms of scalability and response time.

  12. Data intensive ATLAS workflows in the Cloud

    CERN Document Server

    Rzehorz, Gerhard Ferdinand; The ATLAS collaboration

    2018-01-01

    From 2025 onwards, the ATLAS collaboration at the Large Hadron Collider (LHC) at CERN will experience a massive increase in data quantity as well as complexity. Including mitigating factors, the prevalent computing power by that time will only fulfil one tenth of the requirement. This contribution will focus on Cloud computing as an approach to help overcome this challenge by providing flexible hardware that can be configured to the specific needs of a workflow. Experience with Cloud computing exists, but there is large uncertainty about whether, and to what degree, it can reduce the burden by 2025. In order to understand and quantify the benefits of Cloud computing, the "Workflow and Infrastructure Model" was created. It estimates the viability of Cloud computing by combining different inputs from the workflow side with infrastructure specifications. The model delivers metrics that enable the comparison of different Cloud configurations as well as different Cloud offerings with each other. A wide range of r...

  13. Logical provenance in data-oriented workflows?

    KAUST Repository

    Ikeda, R.

    2013-04-01

    We consider the problem of defining, generating, and tracing provenance in data-oriented workflows, in which input data sets are processed by a graph of transformations to produce output results. We first give a new general definition of provenance for general transformations, introducing the notions of correctness, precision, and minimality. We then determine when properties such as correctness and minimality carry over from the individual transformations' provenance to the workflow provenance. We describe a simple logical-provenance specification language consisting of attribute mappings and filters. We provide an algorithm for provenance tracing in workflows where logical provenance for each transformation is specified using our language. We consider logical provenance in the relational setting, observing that for a class of Select-Project-Join (SPJ) transformations, logical provenance specifications encode minimal provenance. We have built a prototype system supporting the features and algorithms presented in the paper, and we report a few preliminary experimental results. © 2013 IEEE.
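The attribute-mapping style of provenance specification described above can be sketched in a few lines: a Select-Project transformation produces output tuples, and tracing an output tuple back means collecting the input tuples that agree on the mapped attribute. The transformation, attribute names, and data are invented for illustration, not taken from the paper's language.

```python
# Minimal sketch of logical provenance via an attribute mapping: the
# provenance of an output tuple is the set of input tuples sharing its
# mapped key attribute. Data and names are illustrative.

def transform(rows):
    # A Select-Project transformation: keep rows with qty > 10,
    # project the id and qty attributes.
    return [{"id": r["id"], "qty": r["qty"]} for r in rows if r["qty"] > 10]

def trace(output_row, input_rows, key="id"):
    # Attribute-mapping provenance: input tuples agreeing on `key`.
    return [r for r in input_rows if r[key] == output_row[key]]

inputs = [{"id": 1, "qty": 5}, {"id": 2, "qty": 20}, {"id": 3, "qty": 15}]
out = transform(inputs)
prov = trace(out[0], inputs)
print(prov)  # -> [{'id': 2, 'qty': 20}]
```

For this Select-Project case the traced set is exactly the minimal provenance: each output tuple is derived from precisely the one input tuple returned.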

  14. Impact of CGNS on CFD Workflow

    Science.gov (United States)

    Poinot, M.; Rumsey, C. L.; Mani, M.

    2004-01-01

    CFD tools are an integral part of industrial and research processes, for which the amount of data is increasing at a high rate. These data are used in a multi-disciplinary fluid dynamics environment, including structural, thermal, chemical or even electrical topics. We show that the data specification is an important challenge that must be tackled to achieve an efficient workflow for use in this environment. We compare the process with other software techniques, such as network or database type, where past experiences showed how difficult it was to bridge the gap between completely general specifications and dedicated specific applications. We show two aspects of the use of CFD General Notation System (CGNS) that impact CFD workflow: as a data specification framework and as a data storage means. Then, we give examples of projects involving CFD workflows where the use of the CGNS standard leads to a useful method either for data specification, exchange, or storage.

  15. Quantitative analysis of probabilistic BPMN workflows

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2012-01-01

    We present a framework for modelling and analysis of realworld business workflows. We present a formalised core subset of the Business Process Modelling and Notation (BPMN) and then proceed to extend this language with probabilistic nondeterministic branching and general-purpose reward annotations...... of events, reward-based properties and best- and worst- case scenarios. We develop a simple example of medical workflow and demonstrate the utility of this analysis in accurate provisioning of drug stocks. Finally, we suggest a path to building upon these techniques to cover the entire BPMN language, allow...... for more complex annotations and ultimately to automatically synthesise workflows by composing predefined sub-processes, in order to achieve a configuration that is optimal for parameters of interest....
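The drug-stock provisioning analysis mentioned above can be illustrated by treating a tiny probabilistic workflow as a Markov reward process and computing the expected total reward (e.g. doses consumed) before the workflow terminates. The workflow, probabilities, and rewards below are invented, and plain value iteration stands in for the probabilistic model checking the paper uses.

```python
# Expected accumulated reward of a small probabilistic workflow, solved by
# value iteration on E[R(s)] = r(s) + sum_t P(s,t) * E[R(t)].
# States, probabilities, and rewards are illustrative.

def expected_reward(transitions, rewards, start, absorbing, sweeps=10000):
    v = {s: 0.0 for s in rewards}
    for _ in range(sweeps):
        for s in v:
            if s in absorbing:
                v[s] = 0.0
            else:
                v[s] = rewards[s] + sum(p * v[t] for t, p in transitions[s])
    return v[start]

transitions = {
    "triage": [("treat", 0.8), ("done", 0.2)],
    "treat":  [("triage", 0.3), ("done", 0.7)],  # 30% loop back to triage
    "done":   [],
}
rewards = {"triage": 0.0, "treat": 1.0, "done": 0.0}  # one dose per treatment

doses = expected_reward(transitions, rewards, "triage", {"done"})
print(round(doses, 3))  # -> 1.053 expected doses per patient
```

Solving the two linear equations by hand gives 0.8 / 0.76 ≈ 1.053, matching the iteration; a model checker such as PRISM computes the same quantity from a reward-annotated model.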

  16. EFFECT OF FISCAL DECENTRALIZATION ON CAPITAL EXPENDITURE, GROWTH, AND WELFARE

    OpenAIRE

    Badrudin, Rudy

    2013-01-01

    This research analyzes the influence of fiscal decentralization on capital expenditure, economic growth, and social welfare of 29 regencies and 6 cities in Central Java Province based on the data of year 2004 to 2008. The method used to analyze the hypotheses is the Partial Least Square. The results show that fiscal decentralization has no significant effect on capital expenditure; fiscal decentralization has significant effect on economic growth and social welfare; capital expenditure has ...

  17. Comparison of centralized and decentralized energy supply systems

    OpenAIRE

    Pfeifer, Thomas; Fahl, Ulrich; Voß, Alfred

    1991-01-01

    Communal energy programs are often embedded in a conception of a decentralized energy supply system where electricity is produced by a number of smaller power plants. For a comprehensive survey the question arises whether these decentralized systems are more advantageous than centralized systems with regard to the criteria of energy consumption, safety of supply, environmental compatibility and economy. In the following, after a definition of the term "decentralized", the present structure of ...

  18. (De)centralization of the global informational ecosystem

    OpenAIRE

    Möller, Johanna; Rimscha, M. Bjørn von

    2017-01-01

    Centralization and decentralization are key concepts in debates that focus on the (anti)democratic character of digital societies. Centralization is understood as the control over communication and data flows, and decentralization as giving it (back) to users. Communication and media research focuses on centralization put forward by dominant digital media platforms, such as Facebook and Google, and governments. Decentralization is investigated regarding its potential in civil society, i.e., h...

  19. Centralized vs. de-centralized multinationals and taxes

    OpenAIRE

    Nielsen, Søren Bo; Raimondos-Møller, Pascalis; Schjelderup, Guttorm

    2005-01-01

    The paper examines how country tax differences affect a multinational enterprise's choice to centralize or de-centralize its decision structure. Within a simple model that emphasizes the multiple conflicting roles of transfer prices in MNEs – here, as a strategic pre-commitment device and a tax manipulation instrument –, we show that (de-)centralized decisions are more profitable when tax differentials are (small) large. Keywords: Centralized vs. de-centralized decisions, taxes, MNEs. ...

  20. Safety and feasibility of STAT RAD: Improvement of a novel rapid tomotherapy-based radiation therapy workflow by failure mode and effects analysis.

    Science.gov (United States)

    Jones, Ryan T; Handsfield, Lydia; Read, Paul W; Wilson, David D; Van Ausdal, Ray; Schlesinger, David J; Siebers, Jeffrey V; Chen, Quan

    2015-01-01

    The clinical challenge of radiation therapy (RT) for painful bone metastases requires clinicians to consider both treatment efficacy and patient prognosis when selecting a radiation therapy regimen. The traditional RT workflow requires several weeks for common palliative RT schedules of 30 Gy in 10 fractions or 20 Gy in 5 fractions. At our institution, we have created a new RT workflow termed "STAT RAD" that allows clinicians to perform computed tomographic (CT) simulation, planning, and highly conformal single fraction treatment delivery within 2 hours. In this study, we evaluate the safety and feasibility of the STAT RAD workflow. A failure mode and effects analysis (FMEA) was performed on the STAT RAD workflow, including development of a process map, identification of potential failure modes, description of the cause and effect, temporal occurrence, and team member involvement in each failure mode, and examination of existing safety controls. A risk probability number (RPN) was calculated for each failure mode. As necessary, workflow adjustments were then made to safeguard failure modes of significant RPN values. After workflow alterations, RPN numbers were again recomputed. A total of 72 potential failure modes were identified in the pre-FMEA STAT RAD workflow, of which 22 met the RPN threshold for clinical significance. Workflow adjustments included the addition of a team member checklist, changing simulation from megavoltage CT to kilovoltage CT, alteration of patient-specific quality assurance testing, and allocating increased time for critical workflow steps. After these modifications, only 1 failure mode maintained RPN significance: patient motion after alignment or during treatment. Performing the FMEA for the STAT RAD workflow before clinical implementation has significantly strengthened the safety and feasibility of STAT RAD. The FMEA proved a valuable evaluation tool, identifying potential problem areas so that we could create a safer workflow.
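The RPN scoring used in an FMEA of this kind is simple to sketch: each failure mode is rated for severity, occurrence, and detectability, the three ratings are multiplied, and modes above a threshold are flagged for workflow changes. The example failure modes, scores, and threshold below are invented for illustration, not taken from the study.

```python
# Hedged sketch of FMEA risk scoring: RPN = severity * occurrence *
# detectability; modes at or above a threshold get workflow safeguards.
# Failure modes and ratings are illustrative only.

def rpn(severity, occurrence, detectability):
    return severity * occurrence * detectability

failure_modes = [
    # (name, severity 1-10, occurrence 1-10, detectability 1-10)
    ("patient motion during treatment", 9, 4, 5),
    ("wrong CT protocol selected",      7, 2, 2),
    ("plan not independently checked",  8, 1, 3),
]

THRESHOLD = 100
flagged = [name for name, s, o, d in failure_modes if rpn(s, o, d) >= THRESHOLD]
print(flagged)  # -> ['patient motion during treatment']
```

The iterative part of the method is then re-scoring after each mitigation: a safeguard that improves detectability or lowers occurrence drops the mode's RPN, ideally below the threshold.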

  1. A virtual radiation therapy workflow training simulation

    International Nuclear Information System (INIS)

    Bridge, P.; Crowe, S.B.; Gibson, G.; Ellemor, N.J.; Hargrave, C.; Carmichael, M.

    2016-01-01

    Aim: Simulation forms an increasingly vital component of clinical skills development in a wide range of professional disciplines. Simulation of clinical techniques and equipment is designed to better prepare students for placement by providing an opportunity to learn technical skills in a “safe” academic environment. In radiotherapy training over the last decade or so this has predominantly comprised treatment planning software and small ancillary equipment such as mould room apparatus. Recent virtual reality developments have dramatically changed this approach. Innovative new simulation applications and file processing and interrogation software have helped to fill in the gaps to provide a streamlined virtual workflow solution. This paper outlines the innovations that have enabled this, along with an evaluation of the impact on students and educators. Method: Virtual reality software and workflow applications have been developed to enable the following steps of radiation therapy to be simulated in an academic environment: CT scanning using a 3D virtual CT scanner simulation; batch CT duplication; treatment planning; 3D plan evaluation using a virtual linear accelerator; quantitative plan assessment; patient setup with lasers; and image guided radiotherapy software. Results: Evaluation of the impact of the virtual reality workflow system highlighted substantial time saving for academic staff as well as positive feedback from students relating to preparation for clinical placements. Students valued practice in the “safe” environment and the opportunity to understand the clinical workflow ahead of clinical department experience. Conclusion: Simulation of most of the radiation therapy workflow and tasks is feasible using a raft of virtual reality simulation applications and supporting software. Benefits of this approach include time-saving, embedding of a case-study based approach, increased student confidence, and optimal use of the clinical environment.

  2. Quantitative analysis of probabilistic BPMN workflows

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2012-01-01

    We present a framework for modelling and analysis of realworld business workflows. We present a formalised core subset of the Business Process Modelling and Notation (BPMN) and then proceed to extend this language with probabilistic nondeterministic branching and general-purpose reward annotations...... of events, reward-based properties and best- and worst- case scenarios. We develop a simple example of medical workflow and demonstrate the utility of this analysis in accurate provisioning of drug stocks. Finally, we suggest a path to building upon these techniques to cover the entire BPMN language, allow...

  3. Petri Net-Based Workflow Access Control Model

    Institute of Scientific and Technical Information of China (English)

    陈卓; 骆婷; 石磊; 洪帆

    2004-01-01

    Access control is an important protection mechanism for information systems. This paper shows how access control can be implemented in workflow systems. We present a workflow access control model (WACM) based on several current access control models. The model supports role assignment and dynamic authorization. The paper defines the workflow using Petri nets: it first gives the definition and description of the workflow, then analyzes the architecture of the workflow access control model (WACM). Finally, an example of an e-commerce workflow access control model is discussed in detail.
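The combination of Petri-net workflow semantics and role-based authorization described above can be sketched as a transition that fires only when its input places are marked *and* the requesting role is permitted. The workflow, places, and roles below are invented for illustration and are not the WACM formalism itself.

```python
# Minimal sketch of a Petri-net workflow transition with a role check:
# firing consumes tokens from input places and produces tokens in output
# places, but only for an authorized role. Names are illustrative.

class Transition:
    def __init__(self, name, inputs, outputs, roles):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.roles = roles  # roles allowed to fire this transition

    def fire(self, marking, role):
        if role not in self.roles:
            raise PermissionError(f"role '{role}' may not fire {self.name}")
        if not all(marking.get(p, 0) > 0 for p in self.inputs):
            return False  # transition not enabled
        for p in self.inputs:
            marking[p] -= 1
        for p in self.outputs:
            marking[p] = marking.get(p, 0) + 1
        return True

approve = Transition("approve_order", ["submitted"], ["approved"], {"manager"})
marking = {"submitted": 1}
approve.fire(marking, "manager")
print(marking)  # -> {'submitted': 0, 'approved': 1}
```

Dynamic authorization then amounts to changing a transition's role set at run time, while the token-game semantics keeps the control flow of the workflow unchanged.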

  4. Real-Time Electronic Dashboard Technology and Its Use to Improve Pediatric Radiology Workflow.

    Science.gov (United States)

    Shailam, Randheer; Botwin, Ariel; Stout, Markus; Gee, Michael S

    The purpose of our study was to create a real-time electronic dashboard in the pediatric radiology reading room providing a visual display of updated information regarding scheduled and in-progress radiology examinations that could help radiologists to improve clinical workflow and efficiency. To accomplish this, a script was set up to automatically send real-time HL7 messages from the radiology information system (Epic Systems, Verona, WI) to an Iguana Interface engine, with relevant data regarding examinations stored in an SQL Server database for visual display on the dashboard. Implementation of an electronic dashboard in the reading room of a pediatric radiology academic practice has led to several improvements in clinical workflow, including decreasing the time interval for radiologist protocol entry for computed tomography or magnetic resonance imaging examinations as well as fewer telephone calls related to unprotocoled examinations. Other advantages include enhanced ability of radiologists to anticipate and attend to examinations requiring radiologist monitoring or scanning, as well as to work with technologists and operations managers to optimize scheduling in radiology resources. We foresee increased utilization of electronic dashboard technology in the future as a method to improve radiology workflow and quality of patient care. Copyright © 2017 Elsevier Inc. All rights reserved.
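The HL7 v2 feed behind a dashboard like this is pipe-delimited text, so the extraction step can be sketched with plain string splitting. The message content and the field positions used below are invented for illustration (a production pipeline would use a proper HL7 library and handle repeated segments).

```python
# Sketch of naive HL7 v2 parsing: split a message into segments and
# fields, then pull out an exam descriptor for display. The message is
# fabricated; real field semantics depend on the site's interface spec.

def parse_hl7(message):
    segments = {}
    for line in message.strip().split("\n"):
        fields = line.split("|")
        segments[fields[0]] = fields  # note: keeps only the last of
    return segments                   # any repeated segment type

msg = ("MSH|^~\\&|RIS|HOSP|DASH|RAD|202401011200||ORM^O01|123|P|2.3\n"
       "OBR|1|ACC123||CT^CT ABDOMEN|||202401011215")
seg = parse_hl7(msg)
print(seg["OBR"][4])  # -> CT^CT ABDOMEN
```

In the study's setup this role is played by an interface engine (Iguana) that persists the extracted fields to SQL Server, with the dashboard reading from that table rather than parsing messages itself.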

  5. Soundness of Timed-Arc Workflow Nets

    DEFF Research Database (Denmark)

    Mateo, Jose Antonio; Srba, Jiri; Sørensen, Mathias Grund

    2014-01-01

    , we demonstrate the usability of our theory on the case studies of a Brake System Control Unit used in aircraft certification, the MPEG2 encoding algorithm, and a blood transfusion workflow. The implementation of the algorithms is freely available as a part of the model checker TAPAAL....

  6. Distributed interoperable workflow support for electronic commerce

    NARCIS (Netherlands)

    Papazoglou, M.; Jeusfeld, M.A.; Weigand, H.; Jarke, M.

    1998-01-01

    Abstract. This paper describes a flexible distributed transactional workflow environment based on an extensible object-oriented framework built around class libraries, application programming interfaces, and shared services. The purpose of this environment is to support a range of EC-like business

  7. Using workflow for projects in higher education

    NARCIS (Netherlands)

    van der Veen, Johan (CTIT); Jones, Valerie M.; Collis, Betty

    2000-01-01

    The WWW is increasingly used as a medium to support education and training. A course at the University of Twente in which groups of students collaborate in the design and production of multimedia instructional materials has now been supported by a website since 1995. Workflow was integrated with

  8. Workflow Automation: A Collective Case Study

    Science.gov (United States)

    Harlan, Jennifer

    2013-01-01

    Knowledge management has proven to be a sustainable competitive advantage for many organizations. Knowledge management systems are abundant, with multiple functionalities. The literature reinforces the use of workflow automation with knowledge management systems to benefit organizations; however, it was not known if process automation yielded…

  9. On Secure Workflow Decentralisation on the Internet

    Directory of Open Access Journals (Sweden)

    Petteri Kaskenpalo

    2010-06-01

    Full Text Available Decentralised workflow management systems are a new research area, where most work to date has focused on the system's overall architecture. As little attention has been given to the security aspects of such systems, we follow a security-driven approach and consider, from the perspective of available security building blocks, how security can be implemented and what new opportunities are presented when empowering the decentralised environment with modern distributed security protocols. Our research is motivated by the more general question of how to combine the positive enablers that email exchange enjoys with the general benefits of workflow systems, and more specifically with the benefits that can be introduced in a decentralised environment. This aims to equip email users with a set of tools to manage the semantics of a message exchange, its contents, participants and their roles in the exchange, in an environment that provides inherent assurances of security and privacy. This work is based on a survey of contemporary distributed security protocols, and considers how these protocols could be used in implementing a distributed workflow management system with decentralised control. We review a set of these protocols, focusing on their required message sequences, and discuss how these security protocols provide the foundations for implementing core control-flow, data, and resource patterns in a distributed workflow environment.
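    One security building block of the kind such a survey considers is authenticating a work-item handoff between peers with a message authentication code; the shared key and message fields below are invented for the sketch and stand in for the richer distributed protocols the paper reviews.

```python
import hashlib, hmac, json

# Peers in a decentralised workflow authenticate a work-item handoff with a
# shared-key MAC. Key distribution and message fields are illustrative
# assumptions, not a protocol from the paper.
SHARED_KEY = b"pre-exchanged-secret"

def sign_handoff(task: dict) -> str:
    payload = json.dumps(task, sort_keys=True).encode()  # canonical form
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify_handoff(task: dict, tag: str) -> bool:
    return hmac.compare_digest(sign_handoff(task), tag)

task = {"id": 7, "activity": "approve_order", "assignee": "alice@example.org"}
tag = sign_handoff(task)
tampered_ok = verify_handoff({**task, "assignee": "mallory@example.org"}, tag)
```

    A real decentralised deployment would replace the shared key with per-peer asymmetric keys, which is exactly the design space the surveyed protocols cover.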

  10. Text mining for the biocuration workflow.

    Science.gov (United States)

    Hirschman, Lynette; Burns, Gully A P C; Krallinger, Martin; Arighi, Cecilia; Cohen, K Bretonnel; Valencia, Alfonso; Wu, Cathy H; Chatr-Aryamontri, Andrew; Dowell, Karen G; Huala, Eva; Lourenço, Anália; Nash, Robert; Veuthey, Anne-Lise; Wiegers, Thomas; Winter, Andrew G

    2012-01-01

    Molecular biology has become heavily dependent on biological knowledge encoded in expert curated biological databases. As the volume of biological literature increases, biocurators need help in keeping up with the literature; (semi-) automated aids for biocuration would seem to be an ideal application for natural language processing and text mining. However, to date, there have been few documented successes for improving biocuration throughput using text mining. Our initial investigations took place for the workshop on 'Text Mining for the BioCuration Workflow' at the third International Biocuration Conference (Berlin, 2009). We interviewed biocurators to obtain workflows from eight biological databases. This initial study revealed high-level commonalities, including (i) selection of documents for curation; (ii) indexing of documents with biologically relevant entities (e.g. genes); and (iii) detailed curation of specific relations (e.g. interactions); however, the detailed workflows also showed many variabilities. Following the workshop, we conducted a survey of biocurators. The survey identified biocurator priorities, including the handling of full text indexed with biological entities and support for the identification and prioritization of documents for curation. It also indicated that two-thirds of the biocuration teams had experimented with text mining and almost half were using text mining at that time. Analysis of our interviews and survey provides a set of requirements for the integration of text mining into the biocuration workflow. These can guide the identification of common needs across curated databases and encourage joint experimentation involving biocurators, text mining developers and the larger biomedical research community.

  11. Conventions and workflows for using Situs

    International Nuclear Information System (INIS)

    Wriggers, Willy

    2012-01-01

    Recent developments of the Situs software suite for multi-scale modeling are reviewed. Typical workflows and conventions encountered during processing of biophysical data from electron microscopy, tomography or small-angle X-ray scattering are described. Situs is a modular program package for the multi-scale modeling of atomic resolution structures and low-resolution biophysical data from electron microscopy, tomography or small-angle X-ray scattering. This article provides an overview of recent developments in the Situs package, with an emphasis on workflows and conventions that are important for practical applications. The modular design of the programs facilitates scripting in the bash shell that allows specific programs to be combined in creative ways that go beyond the original intent of the developers. Several scripting-enabled functionalities, such as flexible transformations of data type, the use of symmetry constraints or the creation of two-dimensional projection images, are described. The processing of low-resolution biophysical maps in such workflows follows not only first principles but often relies on implicit conventions. Situs conventions related to map formats, resolution, correlation functions and feature detection are reviewed and summarized. The compatibility of the Situs workflow with CCP4 conventions and programs is discussed

  12. Adaptive workflow simulation of emergency response

    NARCIS (Netherlands)

    Bruinsma, Guido Wybe Jan

    2010-01-01

    Recent incidents and major training exercises in and outside the Netherlands have persistently shown that not having or not sharing information during emergency response are major sources of emergency response inefficiency and error, and affect incident mitigation outcomes through workflow planning

  13. Scientific Workflows and the Sensor Web for Virtual Environmental Observatories

    Science.gov (United States)

    Simonis, I.; Vahed, A.

    2008-12-01

    interfaces. All data sets and sensor communication follow well-defined abstract models and corresponding encodings, mostly developed by the OGC Sensor Web Enablement initiative. Scientific progress is currently accelerated by an emerging new concept called scientific workflows, which organize and manage complex distributed computations. A scientific workflow represents and records the highly complex processes that a domain scientist typically would follow in exploration, discovery and, ultimately, transformation of raw data to publishable results. The challenge is now to integrate the benefits of scientific workflows with those provided by the Sensor Web in order to leverage all resources for scientific exploration, problem solving, and knowledge generation. Scientific workflows for the Sensor Web represent the next evolutionary step towards efficient, powerful, and flexible earth observation frameworks and platforms. Those platforms support the entire process from capturing data, sharing and integrating, to requesting additional observations. Multiple sites and organizations will participate on single platforms, and scientists from different countries and organizations will interact and contribute to large-scale research projects. Simultaneously, the data and information overload becomes manageable, as multiple layers of abstraction will free scientists from dealing with underlying data, processing or storage peculiarities. The vision is automated investigation and discovery mechanisms that allow scientists to pose queries to the system, which in turn identifies potentially related resources, schedules processing tasks and assembles all parts into workflows that may satisfy the query.

  14. From Requirements via Colored Workflow Nets to an Implementation in Several Workflow Systems

    DEFF Research Database (Denmark)

    Mans, Ronnie S.; van der Aalst, Wil M.P.; Bakker, Piet J.M.

    2007-01-01

    Care organizations, such as hospitals, need to support complex and dynamic workflows. Moreover, many disciplines are involved. This makes it important to avoid the typical disconnect between requirements and the actual implementation of the system. This paper proposes an approach where an Executable Use Case (EUC) and Colored Workflow Net (CWN) are used to close the gap between the given requirements specification and the realization of these requirements with the help of a workflow system. This paper describes a large case study where the diagnostic trajectory of the gynaecological oncology care process of the Academic Medical Center (AMC) hospital is used as reference process. The process consists of hundreds of activities. These have been modeled and analyzed using an EUC and a CWN. Moreover, based on the CWN, the process has been implemented using four different workflow systems.

  15. Decentralized Pricing in Minimum Cost Spanning Trees

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Moulin, Hervé; Østerdal, Lars Peter

    In the minimum cost spanning tree model we consider decentralized pricing rules, i.e. rules that cover at least the efficient cost while the price charged to each user only depends upon his own connection costs. We define a canonical pricing rule and provide two axiomatic characterizations. First, the canonical pricing rule is the smallest among those that improve upon the Stand Alone bound, and are either superadditive or piece-wise linear in connection costs. Our second, direct characterization relies on two simple properties highlighting the special role of the source cost.

  16. Macroeconomic aspects of decentralized electricity production

    International Nuclear Information System (INIS)

    Percebois, J.

    1991-01-01

    The development of decentralized electricity production should be viewed first and foremost as a means of adapting production resources to meet the needs of the users between 1995 and 1997. Consumer production and cogeneration are not, however, simply stopgap solutions operating on the fringe of electricity production. These methods serve to highlight a problem that has already been raised in the past: the real advantages and disadvantages of centralized systems managed by companies that exercise a virtual monopoly in either the public or private sector

  17. Automatic Decentralized Clustering for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Wen Chih-Yu

    2005-01-01

    Full Text Available We propose a decentralized algorithm for organizing an ad hoc sensor network into clusters. Each sensor uses a random waiting timer and local criteria to determine whether to form a new cluster or to join a current cluster. The algorithm operates without a centralized controller, operates asynchronously, and does not require that the locations of the sensors be known a priori. Simplified models are used to estimate the number of clusters formed, and the energy requirements of the algorithm are investigated. The performance of the algorithm is described analytically and via simulation.
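    The random-waiting-timer idea can be sketched in a few lines: each sensor draws a random delay, and the first sensor in a neighbourhood whose timer expires declares itself a cluster head while its unclustered neighbours join it. The radio-range model and tie-breaking below are simplifying assumptions, not the paper's exact local criteria.

```python
import random

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def cluster(positions, radius, rng):
    """positions: node id -> (x, y). Returns node id -> its cluster head."""
    timers = {i: rng.random() for i in positions}  # random waiting timers
    head_of = {}
    for i in sorted(timers, key=timers.get):       # order of timer expiry
        if i in head_of:                           # already joined a cluster
            continue
        head_of[i] = i                             # declare self cluster head
        for j, pj in positions.items():            # in-range neighbours join
            if j not in head_of and dist(positions[i], pj) <= radius:
                head_of[j] = i
    return head_of

nodes = {0: (0, 0), 1: (1, 0), 2: (10, 10), 3: (11, 10)}
heads = cluster(nodes, radius=2.0, rng=random.Random(42))
# Nearby sensors end up in one cluster; distant pairs form their own.
```

    No central controller is involved: each decision uses only the node's own timer and its neighbourhood, which is what makes the scheme decentralized.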

  18. Workflows in bioinformatics: meta-analysis and prototype implementation of a workflow generator

    Directory of Open Access Journals (Sweden)

    Thoraval Samuel

    2005-04-01

    Full Text Available Abstract Background Computational methods for problem solving need to interleave information access and algorithm execution in a problem-specific workflow. The structures of these workflows are defined by a scaffold of syntactic, semantic and algebraic objects capable of representing them. Despite the proliferation of GUIs (Graphic User Interfaces) in bioinformatics, only some of them provide workflow capabilities; surprisingly, no meta-analysis of workflow operators and components in bioinformatics has been reported. Results We present a set of syntactic components and algebraic operators capable of representing analytical workflows in bioinformatics. Iteration, recursion, the use of conditional statements, and management of suspend/resume tasks have traditionally been implemented on an ad hoc basis and hard-coded; by having these operators properly defined it is possible to use and parameterize them as generic re-usable components. To illustrate how these operations can be orchestrated, we present GPIPE, a prototype graphic pipeline generator for PISE that allows the definition of a pipeline, parameterization of its component methods, and storage of metadata in XML formats. This implementation goes beyond the macro capacities currently in PISE. As the entire analysis protocol is defined in XML, a complete bioinformatic experiment (linked sets of methods, parameters and results) can be reproduced or shared among users. Availability: http://if-web1.imb.uq.edu.au/Pise/5.a/gpipe.html (interactive, ftp://ftp.pasteur.fr/pub/GenSoft/unix/misc/Pise/ (download. Conclusion From our meta-analysis we have identified syntactic structures and algebraic operators common to many workflows in bioinformatics. The workflow components and algebraic operators can be assimilated into re-usable software components. GPIPE, a prototype implementation of this framework, provides a GUI builder to facilitate the generation of workflows and integration of heterogeneous
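    The idea of capturing an entire analysis protocol in XML so it can be reproduced or shared can be illustrated with a toy interpreter; the element names and the registry of step functions are invented for this sketch and are not GPIPE's actual schema.

```python
import xml.etree.ElementTree as ET

# Each <step> names a method and its parameters, so the whole protocol is a
# replayable document. Method registry and element names are illustrative.
METHODS = {
    "uppercase": lambda text, **p: text.upper(),
    "head": lambda text, n="10", **p: text[: int(n)],
}

PIPELINE_XML = """
<pipeline>
  <step method="uppercase"/>
  <step method="head" n="5"/>
</pipeline>
"""

def run_pipeline(xml_text: str, data: str) -> str:
    for step in ET.fromstring(xml_text).findall("step"):
        params = dict(step.attrib)
        method = METHODS[params.pop("method")]
        data = method(data, **params)      # thread data through each step
    return data

result = run_pipeline(PIPELINE_XML, "gattaca sequence")
```

    Because parameters travel with the XML, rerunning the document reproduces the experiment exactly, which is the sharing property the abstract emphasizes.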

  19. Performing Workflows in Pervasive Environments Based on Context Specifications

    OpenAIRE

    Xiping Liu; Jianxin Chen

    2010-01-01

    The workflow performance consists of the performance of activities and transitions between activities. Along with the fast development of varied computing devices, activities in workflows and transitions between activities could be performed in pervasive ways, which requires the workflow performance to migrate from traditional computing environments to pervasive environments. Performing workflows in pervasive environments needs to take account of the context information, which affects b...

  20. Workflow Support for Advanced Grid-Enabled Computing

    OpenAIRE

    Xu, Fenglian; Eres, M.H.; Tao, Feng; Cox, Simon J.

    2004-01-01

    The Geodise project brings computer scientists' and engineers' skills together to build a service-oriented computing environment in which engineers can perform complicated computations in a distributed system. The workflow tool is a front-end GUI that provides a full life cycle of workflow functions for Grid-enabled computing. The full life cycle of workflow functions has been enhanced based on our initial research and development. The life cycle starts with the composition of a workflow, followed by an ins...

  1. Decentralization : Local Partnerships for Health Services in the ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    However, the national decentralization program is having a hard time getting on track. In the face of day-to-day difficulties Zenü Network, a nongovernmental organization, would like to make a contribution to this social project. The Network would like to demonstrate that civil society can work with decentralized government ...

  2. Decentralized Budgeting: Getting the Most Out of Disbursements of Funds.

    Science.gov (United States)

    Jefferson, Anne L.

    1995-01-01

    Decentralizing educational budgets allows the disbursement of funds aimed at maximizing student development. Three strategies for decentralizing budgets are program budgeting, which eliminates line-item budgeting and allows administrators to address questions regarding the relative value of educational programs; zero-based budgeting, which allows…

  3. A Review of Characteristics and Experiences of Decentralization of Education

    Science.gov (United States)

    Mwinjuma, Juma Saidi; Kadir, Suhaida bte Abd.; Hamzah, Azimi; Basri, Ramli

    2015-01-01

    This paper scrutinizes decentralization of education with reference to some countries around the world. We consider discussion on decentralization to be complex, critical and broad question in the contemporary education planning, administration and politics of education reforms. Even though the debate on and implementation of decentralization…

  4. Decentralization : Local Partnerships for Health Services in the ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Cameroon, like most other sub-Saharan African countries, has adopted laws devolving various responsibilities to local administrations. In the local political discourse, decentralization is seen as bringing essential services closer to the users, especially those in greatest need. However, the national decentralization program ...

  5. Decentralization: A panacea for functional education and national ...

    African Journals Online (AJOL)

    Decentralization of power from the federal government to state and local governments is the way to go, especially in the management of our education system. Education can be best delivered at the state and local government levels. Decentralization of educational management in Nigeria will encourage creativity and ...

  6. On Deciding How to Decide: To Centralize or Decentralize.

    Science.gov (United States)

    Chaffee, Ellen Earle

    Issues concerning whether to centralize or decentralize decision-making are addressed, with applications for colleges. Centralization/decentralization (C/D) must be analyzed with reference to a particular decision. Three components of C/D are locus of authority, breadth of participation, and relative contribution by the decision-maker's staff. C/D…

  7. Centralization Versus Decentralization: A Location Analysis Approach for Librarians.

    Science.gov (United States)

    Shishko, Robert; Raffel, Jeffrey

    One of the questions that seems to perplex many university and special librarians is whether to move in the direction of centralizing or decentralizing the library's collections and facilities. Presented is a theoretical approach, employing location theory, to the library centralization-decentralization question. Location theory allows the analyst…

  8. Decentralized energy studies: compendium of international studies and research

    Energy Technology Data Exchange (ETDEWEB)

    Wallace, C.

    1980-03-01

    The purpose of the compendium is to provide information about research activities in decentralized energy systems to researchers, government officials, and interested citizens. The compendium lists and briefly describes a number of studies in other industrialized nations that involve decentralized energy systems. A contact person is given for each of the activities listed so that interested readers can obtain more information.

  9. Centralization vs. Decentralization: A Location Analysis Approach for Librarians

    Science.gov (United States)

    Raffel, Jeffrey; Shishko, Robert

    1972-01-01

    An application of location theory to the question of centralized versus decentralized library facilities for a university, with relevance for special libraries is presented. The analysis provides models for a single library, for two or more libraries, or for decentralized facilities. (6 references) (Author/NH)

  10. On Lifecycle Constraints of Artifact-Centric Workflows

    Science.gov (United States)

    Kucukoguz, Esra; Su, Jianwen

    Data plays a fundamental role in modeling and management of business processes and workflows. Among the recent "data-aware" workflow models, artifact-centric models are particularly interesting. (Business) artifacts are the key data entities that are used in workflows and can reflect both the business logic and the execution states of a running workflow. The notion of artifacts succinctly captures the fluidity aspect of data during workflow executions. However, much of the technical dimension concerning artifacts in workflows is not well understood. In this paper, we study the key concept of an artifact "lifecycle". In particular, we allow declarative specifications/constraints of artifact lifecycle in the spirit of DecSerFlow, and formulate the notion of lifecycle as the set of all possible paths an artifact can navigate through. We investigate two technical problems: (Compliance) does a given workflow (schema) contain only lifecycles allowed by a constraint? And (automated construction) from a given lifecycle specification (constraint), is it possible to construct a "compliant" workflow? The study is based on a new formal variant of the artifact-centric workflow model called "ArtiNets" and two classes of lifecycle constraints named "regular" and "counting" constraints. We present a range of technical results concerning compliance and automated construction, including: (1) compliance is decidable when the workflow is atomic or the constraints are regular, (2) for each constraint, we can always construct a workflow that satisfies the constraint, and (3) sufficient conditions under which atomic workflows can be constructed.
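    The compliance question can be made concrete with a toy example: encode a regular lifecycle constraint as a finite automaton and check that every path through a small workflow graph stays in its language. The workflow encoding below is a simplification for illustration, not the ArtiNet model.

```python
# Constraint: an order must be paid before it is shipped, as a DFA over the
# events {pay, ship}. States: 0 = opened, 1 = paid, 2 = shipped (accepting).
DFA = {(0, "pay"): 1, (1, "ship"): 2}
ACCEPTING = {2}

def accepts(path):
    state = 0
    for event in path:
        if (state, event) not in DFA:
            return False                 # constraint violated
        state = DFA[(state, event)]
    return state in ACCEPTING

def all_paths(workflow, node, path=()):
    """Enumerate event sequences from `node` to a terminal node."""
    nexts = workflow.get(node, [])
    if not nexts:
        yield path
    for event, target in nexts:
        yield from all_paths(workflow, target, path + (event,))

# A compliant workflow schema: start --pay--> mid --ship--> end.
good = {"start": [("pay", "mid")], "mid": [("ship", "end")]}
# A non-compliant one ships before paying.
bad = {"start": [("ship", "mid")], "mid": [("pay", "end")]}

good_ok = all(accepts(p) for p in all_paths(good, "start"))
bad_ok = all(accepts(p) for p in all_paths(bad, "start"))
```

    The decidability results in the paper concern exactly this kind of check, but over the richer ArtiNet formalism rather than an explicit path enumeration.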

  11. WS-VLAM: A GT4 based workflow management system

    NARCIS (Netherlands)

    Wibisono, A.; Vasyunin, D.; Korkhov, V.; Zhao, Z.; Belloum, A.; de Laat, C.; Adriaans, P.; Hertzberger, B.

    2007-01-01

    Generic Grid middleware, e.g., Globus Toolkit 4 (GT4), provides basic services for scientific workflow management systems to discover, store and integrate workflow components. Using the state of the art Grid services can advance the functionality of workflow engine in orchestrating distributed Grid

  12. Optimal resource assignment in workflows for maximizing cooperation

    NARCIS (Netherlands)

    Kumar, Akhil; Dijkman, R.M.; Song, Minseok; Daniel, Fl.; Wang, J.; Weber, B.

    2013-01-01

    A workflow is a team process since many actors work on various tasks to complete an instance. Resource management in such workflows deals with assignment of tasks to workers or actors. In team formation, it is necessary to ensure that members of a team are compatible with each other. When a workflow

  13. Database Support for Workflow Management: The WIDE Project

    NARCIS (Netherlands)

    Grefen, P.W.P.J.; Pernici, B; Sánchez, G.; Unknown, [Unknown

    1999-01-01

    Database Support for Workflow Management: The WIDE Project presents the results of the ESPRIT WIDE project on advanced database support for workflow management. The book discusses the state of the art in combining database management and workflow management technology, especially in the areas of

  14. Decentralization and Participatory Rural Development: A Literature Review

    Directory of Open Access Journals (Sweden)

    Muhammad Shakil Ahmad

    2011-12-01

    Full Text Available Most of the developing nations are still struggling for efficient use of their resources. In order to overcome physical and administrative constraints on development, it is necessary to transfer power from the central government to local authorities. Distribution of power improves the management of resources and community participation, which is considered key to sustainable development. Advocates of decentralization argue that decentralized government is a source of improved community participation in rural development. Decentralized government is considered more responsive towards local needs and the development of poor people. There are many obstacles to expanding citizen participation in rural areas. There are many approaches to participatory development, but all have to face the same challenges. The current paper highlights the literature on decentralization and participatory rural development. The concept and modalities of decentralization, dimensions of participation, types of rural participation and obstacles to participation are also part of this paper.

  15. Fusion-supported decentralized nuclear energy system

    International Nuclear Information System (INIS)

    Jassby, D.L.

    1979-04-01

    A decentralized nuclear energy system is proposed comprising mass-produced pressurized water reactors in the size range 10 to 300 MW (thermal), to be used for the production of process heat, space heat, and electricity in applications where petroleum and natural gas are presently used. Special attention is given to maximizing the refueling interval with no interim batch shuffling in order to minimize fuel transport, reactor downtime, and opportunity for fissile diversion. These objectives demand a substantial fissile enrichment (7 to 15%). The preferred fissile fuel is U-233, which offers an order of magnitude savings in ore requirements (compared with U-235 fuel), and whose higher conversion ratio in thermal reactors serves to extend the period of useful reactivity and relieve demand on the fissile breeding plants (compared with Pu-239 fuel). Application of the neutral-beam-driven tokamak fusion-neutron source to a U-233 breeding pilot plant is examined. This scheme can be extended in part to a decentralized fusion energy system, wherein remotely located large fusion reactors supply excess tritium to a distributed system of relatively small nonbreeding D-T reactors

  16. Grid workflow job execution service 'Pilot'

    Science.gov (United States)

    Shamardin, Lev; Kryukov, Alexander; Demichev, Andrey; Ilyin, Vyacheslav

    2011-12-01

    'Pilot' is a grid job execution service for workflow jobs. The main goal for the service is to automate computations with multiple stages since they can be expressed as simple workflows. Each job is a directed acyclic graph of tasks and each task is an execution of something on a grid resource (or 'computing element'). Tasks may be submitted to any WS-GRAM (Globus Toolkit 4) service. The target resources for the tasks execution are selected by the Pilot service from the set of available resources which match the specific requirements from the task and/or job definition. Some simple conditional execution logic is also provided. The 'Pilot' service is built on the REST concepts and provides a simple API through authenticated HTTPS. This service is deployed and used in production in a Russian national grid project GridNNN.
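    The core idea of a workflow job as a directed acyclic graph of tasks can be sketched with a topological execution loop; the task bodies here are plain functions, whereas in 'Pilot' each would be a submission to a WS-GRAM resource selected by the service.

```python
from graphlib import TopologicalSorter

def run_job(deps, tasks):
    """deps: task -> set of prerequisite tasks; tasks: task -> callable.
    Executes each task only after all of its predecessors finish."""
    order = []
    for name in TopologicalSorter(deps).static_order():
        tasks[name]()                       # stand-in for a grid submission
        order.append(name)
    return order

log = []
deps = {"merge": {"fetch_a", "fetch_b"}, "publish": {"merge"}}
tasks = {name: (lambda n=name: log.append(n))
         for name in ("fetch_a", "fetch_b", "merge", "publish")}

order = run_job(deps, tasks)
```

    The service's extra responsibilities — matching each task's requirements to available computing elements and applying conditional logic — would sit inside the loop where the stand-in callable is invoked.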

  17. Grid workflow job execution service 'Pilot'

    International Nuclear Information System (INIS)

    Shamardin, Lev; Kryukov, Alexander; Demichev, Andrey; Ilyin, Vyacheslav

    2011-01-01

    'Pilot' is a grid job execution service for workflow jobs. The main goal for the service is to automate computations with multiple stages since they can be expressed as simple workflows. Each job is a directed acyclic graph of tasks and each task is an execution of something on a grid resource (or 'computing element'). Tasks may be submitted to any WS-GRAM (Globus Toolkit 4) service. The target resources for the tasks execution are selected by the Pilot service from the set of available resources which match the specific requirements from the task and/or job definition. Some simple conditional execution logic is also provided. The 'Pilot' service is built on the REST concepts and provides a simple API through authenticated HTTPS. This service is deployed and used in production in a Russian national grid project GridNNN.

  18. Workflow optimization beyond RIS and PACS

    International Nuclear Information System (INIS)

    Treitl, M.; Wirth, S.; Lucke, A.; Nissen-Meyer, S.; Trumm, C.; Rieger, J.; Pfeifer, K.-J.; Reiser, M.; Villain, S.

    2005-01-01

    Technological progress and the rising cost pressure on the healthcare system have led to a drastic change in the work environment of radiologists today. The pervasive demand for workflow optimization and increased efficiency of its activities raises the question of whether by employment of electronic systems, such as RIS and PACS, the potentials of digital technology are sufficiently used to fulfil this demand. This report describes the tasks and structures in radiology departments, which so far are only insufficiently supported by commercially available electronic systems but are nevertheless substantial. We developed and employed a web-based, integrated workplace system, which simplifies many daily tasks of departmental organization and administration apart from well-established tasks of documentation. Furthermore, we analyzed the effects exerted on departmental workflow by employment of this system for 3 years. (orig.) [de

  19. Designing Flexible E-Business Workflow Systems

    OpenAIRE

    Cătălin Silvestru; Codrin Nisioiu; Marinela Mircea; Bogdan Ghilic-Micu; Marian Stoica

    2010-01-01

    In today’s business environment organizations must cope with complex interactions between actors, adapt fast to frequent market changes and be innovative. In this context, integrating knowledge with processes and Business Intelligence is a major step towards improving organization agility. Therefore, traditional environments for workflow design have been adapted to answer the new business models and current requirements in the field of collaborative processes. This paper approaches the design o...

  20. Planning bioinformatics workflows using an expert system

    Science.gov (United States)

    Chen, Xiaoling; Chang, Jeffrey T.

    2017-01-01

    Abstract Motivation: Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. Results: To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprised of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability and Implementation: https://github.com/jefftc/changlab Contact: jeffrey.t.chang@uth.tmc.edu PMID:28052928
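    The backward-chaining flavour of such a planner can be sketched as follows: rules record which tool produces each data type and what it consumes, and the planner works backwards from the requested result to the raw data. The rule base and tool names below are invented for illustration and are not BETSY's actual knowledge base.

```python
RULES = {
    # produced data type: (tool, required input data types) -- illustrative
    "aligned_reads": ("bwa_mem", ["fastq", "reference"]),
    "variants": ("variant_caller", ["aligned_reads", "reference"]),
}
AVAILABLE = {"fastq", "reference"}   # raw data the user already has

def plan(goal, steps=None):
    """Return an ordered list of (tool, inputs, output) that produces `goal`,
    chaining backwards from the goal to the available data."""
    steps = [] if steps is None else steps
    if goal in AVAILABLE:
        return steps                 # nothing to do, data already exists
    tool, inputs = RULES[goal]
    for needed in inputs:            # first satisfy each prerequisite
        plan(needed, steps)
    if (tool, inputs, goal) not in steps:
        steps.append((tool, inputs, goal))
    return steps

workflow = plan("variants")
```

    Asking for a different end product, or marking different data as available, yields a different workflow from the same rules, which is what makes the approach suited to exploratory analyses.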

  1. IDD Archival Hardware Architecture and Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Mendonsa, D; Nekoogar, F; Martz, H

    2008-10-09

    This document describes the functionality of every component in the DHS/IDD archival and storage hardware system shown in Fig. 1. The document describes the step-by-step process of image data being received at LLNL, then being processed and made available to authorized personnel and collaborators. Throughout this document references will be made to one of two figures: Fig. 1, describing the elements of the architecture, and Fig. 2, describing the workflow and how the project utilizes the available hardware.

  2. Routine digital pathology workflow: The Catania experience

    Directory of Open Access Journals (Sweden)

    Filippo Fraggetta

    2017-01-01

    Full Text Available Introduction: Successful implementation of whole slide imaging (WSI) for routine clinical practice has been accomplished in only a few pathology laboratories worldwide. We report the transition to an effective and complete digital surgical pathology workflow in the pathology laboratory at Cannizzaro Hospital in Catania, Italy. Methods: All (100%) permanent histopathology glass slides were digitized at ×20 using Aperio AT2 scanners. Compatible stain and scanning slide racks were employed to streamline operations. eSlide Manager software was bidirectionally interfaced with the anatomic pathology laboratory information system. Virtual slide trays connected to the two-dimensional (2D) barcode tracking system allowed pathologists to confirm that they were correctly assigned slides and that all tissues on these glass slides were scanned. Results: Over 115,000 glass slides were digitized with a scan fail rate of around 1%. Drying glass slides before scanning minimized their sticking to scanner racks. Implementation required introduction of a 2D barcode tracking system and modification of histology workflow processes. Conclusion: Our experience indicates that effective adoption of WSI for primary diagnostic use was more dependent on optimizing preimaging variables, integration with the laboratory information system, and ensuring pathologist buy-in than on information technology infrastructure. Implementation of digital pathology for routine practice not only leveraged the benefits of digital imaging but also created an opportunity for establishing standardization of workflow processes in the pathology laboratory.

  3. Evaluation of Workflow Management Systems - A Meta Model Approach

    Directory of Open Access Journals (Sweden)

    Michael Rosemann

    1998-11-01

    Full Text Available The automated enactment of processes through the use of workflow management systems enables the outsourcing of the control flow from application systems. By now a large number of systems that follow different workflow paradigms are available. This leads to the problem of selecting the appropriate workflow management system for a given situation. In this paper we outline the benefits of a meta model approach for the evaluation and comparison of different workflow management systems. After a general introduction on the topic of meta modeling, the meta models of the workflow management systems WorkParty (Siemens Nixdorf) and FlowMark (IBM) are compared as an example. These product-specific meta models can be generalized to meta reference models, which helps to specify a workflow methodology. As an example, an organisational reference meta model is presented, which helps users in specifying their requirements for a workflow management system.

  4. geoKepler Workflow Module for Computationally Scalable and Reproducible Geoprocessing and Modeling

    Science.gov (United States)

    Cowart, C.; Block, J.; Crawl, D.; Graham, J.; Gupta, A.; Nguyen, M.; de Callafon, R.; Smarr, L.; Altintas, I.

    2015-12-01

    The NSF-funded WIFIRE project has developed an open-source, online geospatial workflow platform for unifying geoprocessing tools and models for fire and other geospatially dependent modeling applications. It is a product of WIFIRE's objective to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. geoKepler includes a set of reusable GIS components, or actors, for the Kepler Scientific Workflow System (https://kepler-project.org). Actors exist for reading and writing GIS data in formats such as Shapefile, GeoJSON, KML, and using OGC web services such as WFS. The actors also allow for calling geoprocessing tools in other packages such as GDAL and GRASS. Kepler integrates functions from multiple platforms and file formats into one framework, thus enabling optimal GIS interoperability, model coupling, and scalability. Products of the GIS actors can be fed directly to models such as FARSITE and WRF. Kepler's ability to schedule and scale processes using Hadoop and Spark also makes geoprocessing ultimately extensible and computationally scalable. The reusable workflows in geoKepler can be made to run automatically when alerted by real-time environmental conditions. Here, we show breakthroughs in the speed of creating complex data for hazard assessments with this platform. We also demonstrate geoKepler workflows that use Data Assimilation to ingest real-time weather data into wildfire simulations, and data mining techniques to gain insight into environmental conditions affecting fire behavior. Existing machine learning tools and libraries such as R and MLlib are being leveraged for this purpose in Kepler, as well as Kepler's Distributed Data Parallel (DDP) capability to provide a framework for scalable processing. 
geoKepler workflows can be executed via an iPython notebook as a part of a Jupyter hub at UC San Diego for sharing and reporting of the scientific analysis and results from

  5. Multi-core processing and scheduling performance in CMS

    International Nuclear Information System (INIS)

    Hernández, J M; Evans, D; Foulkes, S

    2012-01-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in a much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model in computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resources, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues compared to the standard single-core processing workflows.
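The memory argument for shared-data multi-core jobs can be made concrete with back-of-the-envelope arithmetic. The sizes below are illustrative assumptions, not CMS measurements: independent single-core jobs each load their own copy of the common data, whereas one whole-node multi-core job loads it once.

```python
# Rough comparison of node memory footprint: N single-core jobs vs one
# N-core job sharing common data. All sizes are hypothetical.

cores = 8
shared_mb = 1200   # code libraries, geometry, conditions data (assumed)
private_mb = 800   # per-core event-processing working set (assumed)

# Every single-core job duplicates the shared data:
single_core_total = cores * (shared_mb + private_mb)
# A multi-core job loads the shared data once:
multi_core_total = shared_mb + cores * private_mb

print(single_core_total, multi_core_total)  # 16000 7600
```

Under these assumed numbers the whole-node job needs less than half the memory, and the saving grows with the core count.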

  6. Analysis for corruption and decentralization (Case study: earlier decentralization era in Indonesia)

    OpenAIRE

    Haryanto, Joko Tri; Astuti S.A., Esther Sri

    2017-01-01

    In many countries, the relationship between decentralization of government activities and the extent of rent extraction by private parties is an important element in the recent debate on institutional design. The topic of corruption was actively and openly debated in Indonesia: the government, its development partners, and a broadly based group of political and civil society leaders are engaged in meetings and exchange on a daily basis. In the ongoing debate on corruption a lot of attention is pai...

  7. Energy and air emission implications of a decentralized wastewater system

    International Nuclear Information System (INIS)

    Shehabi, Arman; Stokes, Jennifer R; Horvath, Arpad

    2012-01-01

    Both centralized and decentralized wastewater systems have distinct engineering, financial and societal benefits. This paper presents a framework for analyzing the environmental effects of decentralized wastewater systems and an evaluation of the environmental impacts associated with two currently operating systems in California, one centralized and one decentralized. A comparison of energy use, greenhouse gas emissions and criteria air pollutants from the systems shows that the scale economies of the centralized plant help lower the environmental burden to less than a fifth of that of the decentralized utility for the same volume treated. The energy and emission burdens of the decentralized plant are reduced when accounting for high-yield wastewater reuse if it supplants an energy-intensive water supply like a desalination one. The centralized facility also reduces greenhouse gases by flaring methane generated during the treatment process, while methane is directly emitted from the decentralized system. The results are compelling enough to indicate that the life-cycle environmental impacts of decentralized designs should be carefully evaluated as part of the design process. (letter)

  8. Decentralized Coordinated Control Strategy of Islanded Microgrids

    DEFF Research Database (Denmark)

    Wu, Dan

    Facing the challenges brought by the traditional large power system concerning environmental and economic issues, in recent years distributed generation has been considered as an alternative solution to provide clean energy in a local manner. In this context, a Microgrid performs as a local...... as grid voltage/frequency regulation. In order to enhance the reliability of overall islanded Microgrid operation, basic functions of coordinated control, which take into account the state of charge (SoC) limitation and the power availability of renewable energy sources, are implemented at a distributed level...... control strategies in this thesis, in order to promote the decentralization of the overall system. In particular, the consensus-algorithm-based secondary level is investigated in order to simplify the communication configuration, which only floods information through the neighboring units......

  9. Vanuatu, the country of rural decentralized electrification

    International Nuclear Information System (INIS)

    Maigne, Y.; Molli, L.

    1998-01-01

    The status of decentralized rural electrification in Vanuatu was presented. Vanuatu is a sparsely populated rural country in the south Pacific. The country includes 92 populated islands spread over 1,000 kilometers in the south Pacific, halfway between Fiji and Australia. The low population density and the tremendous distances between the different islands have made local electrical networks a necessity in Vanuatu. Apart from the two principal urban centres, Vanuatu does not have a centralized electrical distribution network. In the early 1990s the government initiated a program to provide independent power sources to the isolated communities. Photovoltaic cells are used to power most telecommunications services. Solar cells are also used to provide power to important community buildings such as the schools or nursing stations on the remote islands. Two small hydroelectric generating stations of 600 kW were also installed with the help of the German government

  10. Fundamentals of the administrative decentralization process

    Directory of Open Access Journals (Sweden)

    Mihaela Lupăncescu

    2017-12-01

    Full Text Available Public administration, as an activity carried out by the administrative authorities, can be achieved through several forms of organization. In this sense, centralization, deconcentration and decentralization, together with its corollary, local autonomy, constitute organizational regimes of an administrative nature, more or less democratic, and with characteristics that vary according to the degree of dependence between the authorities of the public administration institutions at the central level and local public administration authorities. There is no single form of organization that incorporates the characteristics of a particular regime. The complex expectations of modern society have led to the blending of features of different forms of organization in order to create a balance of activity within the public administration, in order to exercise the functions of executive power for the benefit of citizens, not by conferring unlimited autonomy but by considering the fundamental principle of legality.

  11. Decentralized energy supply on the liberalized market

    International Nuclear Information System (INIS)

    Pauli, H.

    1999-01-01

    Starting in 2001, the electricity market is to be progressively liberalized. The process will be completed by the year 2006. What role will decentralized power generation using combined cycle power plants play on a liberalized market? The background conditions are essentially favourable: both the new energy act, which has been in force since 1 January 1999, and the planned energy levy suggest that this technology will become increasingly widespread. In addition, the price trend for combined cycle plant components, together with low energy costs, is having a favourable impact. On the other hand, great uncertainty is being created by the process of liberalization and the current flood of investments in power generation. However, electricity supply is unlikely to be in surplus for long in a context of sustained economic growth. (author)

  12. Centralized vs decentralized lunar power system study

    Science.gov (United States)

    Metcalf, Kenneth; Harty, Richard B.; Perronne, Gerald E.

    1991-09-01

    Three power-system options are considered with respect to utilization on a lunar base: the fully centralized option, the fully decentralized option, and a hybrid comprising features of the first two options. Power source, power conditioning, and power transmission are considered separately, and each architecture option is examined with ac and dc distribution, high and low voltage transmission, and buried and suspended cables. Assessments are made on the basis of mass, technological complexity, cost, reliability, and installation complexity; however, a preferred power-system architecture is not proposed. Preferred options include ac transmission, transmission voltages of 2000-7000 V, buried high-voltage lines, and suspended low-voltage lines. Assessments of the total cost associated with the installations are required to determine the most suitable power system.

  13. Influence of cardiac decentralization on cardioprotection.

    Directory of Open Access Journals (Sweden)

    John G Kingma

    Full Text Available The role of cardiac nerves in the development of myocardial tissue injury after acute coronary occlusion remains controversial. We investigated whether acute (surgical) cardiac decentralization modulates coronary flow reserve and myocardial protection in preconditioned dogs subject to ischemia-reperfusion. Experiments were conducted on four groups of anesthetised, open-chest dogs (n = 32): 1- controls (CTR, intact cardiac nerves); 2- ischemic preconditioning (PC; 4 cycles of 5-min IR); 3- cardiac decentralization (CD); and 4- CD+PC; all dogs underwent 60-min coronary occlusion and 180-min reperfusion. Coronary blood flow and reactive hyperemic responses were assessed using a blood volume flow probe. Infarct size (tetrazolium staining) was related to the anatomic area at risk and coronary collateral blood flow (microspheres) in the anatomic area at risk. Post-ischemic reactive hyperemia and repayment-to-debt ratio responses were significantly reduced for all experimental groups; however, arterial perfusion pressure was not affected. Infarct size was reduced in CD dogs (18.6 ± 4.3%; p = 0.001; data are mean ± 1 SD) compared to 25.2 ± 5.5% in CTR dogs, and was less in PC dogs as expected (13.5 ± 3.2 vs. 25.2 ± 5.5%; p = 0.001); after acute CD, PC protection was conserved (11.6 ± 3.4 vs. 18.6 ± 4.3%; p = 0.02). In conclusion, our findings provide strong evidence that myocardial protection against ischemic injury can be preserved independent of extrinsic cardiac nerve inputs.

  14. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    Science.gov (United States)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provides a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.

  15. Responsiveness and flexibility in a Decentralized Supply Chain

    DEFF Research Database (Denmark)

    Petersen, Kristian Rasmus; Bilberg, Arne; Hadar, Ronen

    Today’s supply chains are not capable of managing the instabilities present in the market. Instead, there is a need to develop supply chains that are capable of adapting to changes. Through a case study of LEGO, the authors suggest a possible solution: a decentralized supply chain serving...... independent and self-sufficient local factories. The decentralized supply chain is provided with materials, parts and pre-assembled elements from local suppliers and supplies the local market in return. Keywords: Decentralize, Responsiveness, Flexibility...

  16. Analysis and design of robust decentralized controllers for nonlinear systems

    Energy Technology Data Exchange (ETDEWEB)

    Schoenwald, D.A.

    1993-07-01

    Decentralized control strategies for nonlinear systems are achieved via feedback linearization techniques. New results on optimization and parameter robustness of non-linear systems are also developed. In addition, parametric uncertainty in large-scale systems is handled by sensitivity analysis and optimal control methods in a completely decentralized framework. This idea is applied to alleviate uncertainty in friction parameters for the gimbal joints on Space Station Freedom. As an example of decentralized nonlinear control, singular perturbation methods and distributed vibration damping are merged into a control strategy for a two-link flexible manipulator.
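Feedback linearization, the core technique named above, can be sketched on a toy scalar plant. The system below is purely illustrative, not one of the paper's models: the known nonlinearity is cancelled through the input, leaving a linear system that a simple proportional law stabilizes.

```python
# Feedback linearization sketch for a scalar plant x' = f(x) + u with
# f(x) = -x**3 (illustrative system, hypothetical gains). Choosing
# u = -f(x) + v cancels the nonlinearity, so x' = v, and the linear law
# v = -k*(x - x_ref) drives x to x_ref.

def simulate(x0, x_ref, k=2.0, dt=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        f = -x**3
        v = -k * (x - x_ref)      # linear control for the linearized system
        u = -f + v                # cancel the nonlinearity
        x += dt * (f + u)         # Euler step of the plant x' = f(x) + u
    return x

final = simulate(x0=3.0, x_ref=1.0)
print(round(final, 3))  # converges to ~1.0
```

In a decentralized setting each subsystem applies such a law using only its local state, which is what makes the approach attractive for large-scale systems.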

  17. From Requirements via Colored Workflow Nets to an Implementation in Several Workflow Systems

    DEFF Research Database (Denmark)

    Mans, Ronny S.; van der Aalst, Willibrordus Martinus Pancratius; Molemann, A.J.

    2007-01-01

    Care organizations, such as hospitals, need to support complex and dynamic workflows. Moreover, many disciplines are involved. This makes it important to avoid the typical disconnect between requirements and the actual implementation of the system. This paper proposes an approach where an Executable Use Case (EUC) and Colored Workflow Nets......

  18. Tavaxy: integrating Taverna and Galaxy workflows with cloud computing support.

    Science.gov (United States)

    Abouelhoda, Mohamed; Issa, Shadi Alaa; Ghanem, Moustafa

    2012-05-04

    Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. 
    Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a

  19. Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support

    Directory of Open Access Journals (Sweden)

    Abouelhoda Mohamed

    2012-05-01

    Full Text Available Abstract Background Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. Results In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Conclusions Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and

  20. Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support

    Science.gov (United States)

    2012-01-01

    Background Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. Results In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Conclusions Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. 
The system

  1. Immunization Schedules for Adults

    Science.gov (United States)

    ... ACIP Vaccination Recommendations Why Immunize? Vaccines: The Basics Immunization Schedule for Adults (19 Years of Age and ... diseases that can be prevented by vaccines . 2018 Immunization Schedule Recommended Vaccinations for Adults by Age and ...

  2. Instant Childhood Immunization Schedule

    Science.gov (United States)

    ... Recommendations Why Immunize? Vaccines: The Basics Instant Childhood Immunization Schedule ... Get ... date. See Disclaimer for additional details. Based on Immunization Schedule for Children 0 through 6 Years of ...

  3. Concurrent processes scheduling with scarce resources in small and medium enterprises

    Institute of Scientific and Technical Information of China (English)

    马嵩华

    2016-01-01

    Scarce resources, precedence and non-determined time-lag are three constraints commonly found in small and medium manufacturing enterprises (SMEs), which are deemed to block the application of workflow management systems (WfMS). To tackle this problem, a workflow scheduling approach is proposed based on timing workflow net (TWF-net) and genetic algorithm (GA). The workflow is modelled in the form of a TWF-net in favour of process simulation and resource conflict checking. After simplifying and reconstructing the set of workflow instances, the conflict resolution problem is transformed into a resource-constrained project scheduling problem (RCPSP), which can be efficiently solved by a heuristic method such as GA. Finally, problems of various sizes are utilized to test the performance of the proposed algorithm and to compare it with a first-come-first-served (FCFS) strategy. The evaluation demonstrates that the proposed method is an effective approach for scheduling concurrent processes with precedence and resource constraints.
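The decoding step at the heart of GA approaches to the RCPSP, a serial schedule-generation scheme that turns a chromosome (an activity priority list) into a feasible schedule, can be sketched as follows. The instance below is a toy example, not the paper's TWF-net model.

```python
# Serial schedule-generation scheme for a small RCPSP instance
# (hypothetical activities; one renewable resource).

# activity: (duration, resource demand, predecessors)
ACTS = {
    "A": (3, 2, []),
    "B": (2, 2, ["A"]),
    "C": (4, 1, ["A"]),
    "D": (2, 3, ["B", "C"]),
}
CAPACITY = 3  # units of the renewable resource available per time slot

def decode(priority_list):
    """Schedule activities in priority order (which must respect
    precedence) at the earliest start satisfying predecessors and
    the resource capacity. Returns the makespan."""
    usage = {}    # time slot -> units in use
    finish = {}   # activity -> finish time
    for a in priority_list:
        dur, dem, preds = ACTS[a]
        t = max((finish[p] for p in preds), default=0)
        while any(usage.get(t + i, 0) + dem > CAPACITY for i in range(dur)):
            t += 1  # shift right until the whole activity fits
        for i in range(dur):
            usage[t + i] = usage.get(t + i, 0) + dem
        finish[a] = t + dur
    return max(finish.values())

print(decode(["A", "B", "C", "D"]))  # 9
```

In a GA, the priority lists are the chromosomes: crossover and mutation permute them, and `decode` supplies the fitness (makespan) of each candidate.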

  4. Web Publishing Schedule

    Science.gov (United States)

    Section 207(f)(2) of the E-Gov Act requires federal agencies to develop an inventory and establish a schedule of information to be published on their Web sites, make those schedules available for public comment, and post the schedules on the web site.

  5. Preemptive scheduling with rejection

    NARCIS (Netherlands)

    Hoogeveen, H.; Skutella, M.; Woeginger, Gerhard

    2003-01-01

    We consider the problem of preemptively scheduling a set of n jobs on m (identical, uniformly related, or unrelated) parallel machines. The scheduler may reject a subset of the jobs and thereby incur job-dependent penalties for each rejected job, and he must construct a schedule for the remaining
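For the preemptive part of this problem on identical machines, McNaughton's wrap-around rule packs the accepted jobs (those not rejected) into an optimal preemptive schedule. The sketch below illustrates only that packing step, not the paper's rejection-penalty analysis.

```python
# McNaughton's wrap-around rule: the optimal preemptive makespan on m
# identical machines is C = max(max_j p_j, sum_j p_j / m). Jobs are
# filled onto machines left to right, wrapping at C; no job's pieces
# overlap in time. Job data here is illustrative.

def mcnaughton(jobs, m):
    """jobs: list of (name, processing_time). Returns (makespan,
    per-machine piece lists [(name, start, end), ...])."""
    total = sum(p for _, p in jobs)
    C = max(max(p for _, p in jobs), total / m)
    machines = [[] for _ in range(m)]
    k, t = 0, 0.0                       # current machine and time
    for name, p in jobs:
        remaining = p
        while remaining > 1e-12:
            piece = min(remaining, C - t)
            machines[k].append((name, t, t + piece))
            remaining -= piece
            t += piece
            if C - t < 1e-12:           # machine full: wrap to the next
                k, t = k + 1, 0.0
    return C, machines

C, sched = mcnaughton([("a", 5), ("b", 4), ("c", 3), ("d", 2)], m=3)
print(C)  # 5
```

With rejection, a heuristic would compare each job's penalty against its marginal effect on C before including it in `jobs`; that trade-off is the subject of the paper.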

  6. Preemptive scheduling with rejection

    NARCIS (Netherlands)

    Hoogeveen, J.A.; Skutella, M.; Woeginger, G.J.; Paterson, M.

    2000-01-01

    We consider the problem of preemptively scheduling a set of n jobs on m (identical, uniformly related, or unrelated) parallel machines. The scheduler may reject a subset of the jobs and thereby incur job-dependent penalties for each rejected job, and he must construct a schedule for the remaining

  7. Outage scheduling and implementation

    International Nuclear Information System (INIS)

    Allison, J.E.; Segall, P.; Smith, R.R.

    1986-01-01

    Successful preparation and implementation of an outage schedule, and completion of scheduled and emergent work within an identified critical path time frame, are the result of careful coordination by Operations, Work Control, Maintenance, Engineering, Planning and Administration, and others. At the Fast Flux Test Facility (FFTF), careful planning has been responsible for meeting all scheduled outage critical paths.

  8. Scheduling with Time Lags

    NARCIS (Netherlands)

    X. Zhang (Xiandong)

    2010-01-01

    textabstractScheduling is essential when activities need to be allocated to scarce resources over time. Motivated by the problem of scheduling barges along container terminals in the Port of Rotterdam, this thesis designs and analyzes algorithms for various on-line and off-line scheduling problems

  9. Text mining for the biocuration workflow

    Science.gov (United States)

    Hirschman, Lynette; Burns, Gully A. P. C; Krallinger, Martin; Arighi, Cecilia; Cohen, K. Bretonnel; Valencia, Alfonso; Wu, Cathy H.; Chatr-Aryamontri, Andrew; Dowell, Karen G.; Huala, Eva; Lourenço, Anália; Nash, Robert; Veuthey, Anne-Lise; Wiegers, Thomas; Winter, Andrew G.

    2012-01-01

    Molecular biology has become heavily dependent on biological knowledge encoded in expert curated biological databases. As the volume of biological literature increases, biocurators need help in keeping up with the literature; (semi-) automated aids for biocuration would seem to be an ideal application for natural language processing and text mining. However, to date, there have been few documented successes for improving biocuration throughput using text mining. Our initial investigations took place for the workshop on ‘Text Mining for the BioCuration Workflow’ at the third International Biocuration Conference (Berlin, 2009). We interviewed biocurators to obtain workflows from eight biological databases. This initial study revealed high-level commonalities, including (i) selection of documents for curation; (ii) indexing of documents with biologically relevant entities (e.g. genes); and (iii) detailed curation of specific relations (e.g. interactions); however, the detailed workflows also showed many variabilities. Following the workshop, we conducted a survey of biocurators. The survey identified biocurator priorities, including the handling of full text indexed with biological entities and support for the identification and prioritization of documents for curation. It also indicated that two-thirds of the biocuration teams had experimented with text mining and almost half were using text mining at that time. Analysis of our interviews and survey provide a set of requirements for the integration of text mining into the biocuration workflow. These can guide the identification of common needs across curated databases and encourage joint experimentation involving biocurators, text mining developers and the larger biomedical research community. PMID:22513129

  10. Engaging Social Capital for Decentralized Urban Stormwater Management

    Science.gov (United States)

    Decentralized approaches to urban stormwater management, whereby installations of green infrastructure (e.g., rain gardens, bioswales, and constructed wetlands) are dispersed throughout a management area, are cost-effective solutions with co-benefits beyond water abatement. Inste...

  11. Electrical Load Survey and Forecast for a Decentralized Hybrid ...

    African Journals Online (AJOL)

    Electrical Load Survey and Forecast for a Decentralized Hybrid Power System at Elebu, Kwara State, Nigeria. ... Nigerian Journal of Technology ... The paper reports the results of electrical load demand and forecast for Elebu rural community ...

  12. Centralized Control/Decentralized Execution: A Valid Tenet of Airpower

    National Research Council Canada - National Science Library

    Santicola, Henry J

    2005-01-01

    ...) and Effects-Based Operations (EBO). This paper examines the history of the concept of centralized control/decentralized execution from the advent of modern warfare through Operation Enduring Freedom...

  13. Decentralization, Interdependence and Performance Measurement System Design : Sequences and Priorities

    NARCIS (Netherlands)

    Abernethy, M.; Bouwens, J.F.M.G.; van Lent, L.A.G.M.

    2001-01-01

    We investigate the determinants of decentralization and performance measurement choices in multidivisional firms.We extend the research on the economics of organizational design choices by examining the impact of two important determinants of those choices, namely, subunit interdependencies and

  14. Decentralized control of multi-agent aerial transportation system

    KAUST Repository

    Toumi, Noureddine

    2017-01-01

    and Landing aircraft (VTOL) transportation system. We develop a decentralized method. The advantage of such a solution is that it can provide better maneuverability and lifting capabilities compared to existing systems. First, we consider a cooperative group

  15. Decentralized Consistent Network Updates in SDN with ez-Segway

    KAUST Repository

    Nguyen, Thanh Dang; Chiesa, Marco; Canini, Marco

    2017-01-01

    We present ez-Segway, a decentralized mechanism to consistently and quickly update the network state while preventing forwarding anomalies (loops and black-holes) and avoiding link congestion. In our design, the centralized SDN controller only pre-computes

  16. Decentralized Planning for Pre-Conflict and Post-Conflict ...

    African Journals Online (AJOL)

    Decentralized Planning for Pre-Conflict and Post-Conflict Management in the Bawku Municipal ... institutional arrangements for conflict monitoring and evaluation. Such processes are 'sine qua non' to pre-conflict and post-conflict prevention.

  17. 8-8-08 International Conference on Decentralization, local power ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    hallas

    2008-08-08

    Aug 8, 2008 ... The International Development Research Centre (IDRC) in collaboration ... To set an agenda on decentralization/ local governance that ... researchers, representatives of multilateral and bilateral agencies, media and others.

  18. Remembering the Future of Centralized Control-Decentralized Execution

    National Research Council Canada - National Science Library

    Sheets, Patrick

    2003-01-01

    ... concepts which should drive system development. To realize the significance of the USAF C2 tenet of "centralized control-decentralized execution," one must understand how C2 is executed, in contingency theaters of operation...

  19. Papers by the Decentralized Wastewater Management MOU Partnership

    Science.gov (United States)

    Four position papers for state, local, and tribal government officials and interested stakeholders. These papers include information on the uses and benefits of decentralized wastewater treatment and examples of its effective use.

  20. Analysis of power and frequency control requirements in view of increased decentralized production and market liberalization

    International Nuclear Information System (INIS)

    Roffel, B.; Boer, W.W. de

    2003-01-01

    This paper presents a systematic approach of the analysis of the minimum control requirements that are imposed on power producing units in the Netherlands, especially in the case when decentralized production increases. Also some effects of the liberalization on the control behavior are analyzed. First an overview is given of the amount and type of power production in the Netherlands, followed by a review of the control requirements. Next models are described, including a simplified model for the UCTE power system. The model was tested against frequency and power measurements after failure of a 558 MW production unit in the Netherlands. Agreement between measurements and model predictions proved to be good. The model was subsequently used to analyze the primary and secondary control requirements and the impact of an increase in decentralized power production on the fault restoration capabilities of the power system. Since the latter production units are not actively participating in primary and secondary control, fault restoration takes longer and becomes unacceptable when only 35% of the power producing units participate in secondary control. Finally, the model was used to study the impact of deregulation, especially the effect of 'block scheduling', on additional control actions of the secondary control. (Author)

  1. Communication network for decentralized remote tele-science during the Spacelab mission IML-2

    Science.gov (United States)

    Christ, Uwe; Schulz, Klaus-Juergen; Incollingo, Marco

    1994-01-01

    The ESA communication network for decentralized remote telescience during the Spacelab mission IML-2, called Interconnection Ground Subnetwork (IGS), provided data, voice conferencing, video distribution/conferencing and high rate data services to 5 remote user centers in Europe. The combination of services allowed the experimenters to interact with their experiments as they would normally do from the Payload Operations Control Center (POCC) at MSFC. In addition, to enhance their science results, they were able to make use of reference facilities and computing resources in their home laboratory, which typically are not available in the POCC. Characteristics of the IML-2 communications implementation were the adaptation to the different user needs based on modular service capabilities of IGS and the cost optimization for the connectivity. This was achieved by using a combination of traditional leased lines, satellite based VSAT connectivity and N-ISDN according to the simulation and mission schedule for each remote site. The central management system of IGS allows minimization of staffing and the involvement of communications personnel at the remote sites. The successful operation of IGS for IML-2 as a precursor network for the Columbus Orbital Facility (COF) has proven the concept for communications to support the operation of the COF decentralized scenario.

  2. Electronic resource management systems a workflow approach

    CERN Document Server

    Anderson, Elsa K

    2014-01-01

    To get to the bottom of a successful approach to Electronic Resource Management (ERM), Anderson interviewed staff at 11 institutions about their ERM implementations. Among her conclusions, presented in this issue of Library Technology Reports, is that grasping the intricacies of your workflow-analyzing each step to reveal the gaps and problems-at the beginning is crucial to selecting and implementing an ERM. Whether the system will be used to fill a gap, aggregate critical data, or replace a tedious manual process, the best solution for your library depends on factors such as your current soft

  3. Evolutionary optimization of production materials workflow processes

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Hansen, Zaza Nadja Lee; Jacobsen, Peter

    2014-01-01

    We present an evolutionary optimisation technique for stochastic production processes, which is able to find improved production materials workflow processes with respect to arbitrary combinations of numerical quantities associated with the production process. Working from a core fragment...... of the BPMN language, we employ an evolutionary algorithm where stochastic model checking is used as a fitness function to determine the degree of improvement of candidate processes derived from the original process through mutation and cross-over operations. We illustrate this technique using a case study...
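    The evolutionary loop the abstract describes (selection, mutation, cross-over, with a fitness function scoring candidate processes) can be sketched in miniature. This is not the authors' BPMN/stochastic-model-checking implementation: the fitness function below is an invented stand-in (total weighted completion time of a task sequence), and only a swap mutation is shown.

```python
import random

def fitness(order, durations, weights):
    """Stand-in for the paper's stochastic-model-checking fitness:
    total weighted completion time of tasks run in sequence (lower is better)."""
    t, cost = 0, 0
    for task in order:
        t += durations[task]
        cost += weights[task] * t
    return cost

def mutate(order):
    """Swap two task positions to derive a candidate process."""
    a, b = random.sample(range(len(order)), 2)
    order = list(order)
    order[a], order[b] = order[b], order[a]
    return order

def evolve(tasks, durations, weights, generations=200, pop_size=20, seed=0):
    random.seed(seed)
    pop = [random.sample(tasks, len(tasks)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: fitness(o, durations, weights))
        parents = pop[: pop_size // 2]  # elitist selection: keep the best half
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return min(pop, key=lambda o: fitness(o, durations, weights))
```

With durations {a: 2, b: 1, c: 3} and weights {a: 1, b: 4, c: 1}, the loop converges on the order b, a, c (cost 13), matching Smith's shortest-weighted-processing-time rule for this toy fitness.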

  4. Reenginering of the i4 workflow engine

    OpenAIRE

    Likar, Tilen

    2013-01-01

    I4 is an enterprise resource planning system for managing business processes. Due to increasing demands for managing complex processes and adjusting those processes to global standards, a renewal of part of the system was required. In this thesis we undertook the reengineering of the workflow engine and the corresponding data model. We designed a business process diagram in Bizagi Process Modeler. The import to i4 and the export from i4 were developed on an XPDL file exported from the mo...

  5. CMS data and workflow management system

    CERN Document Server

    Fanfani, A; Bacchi, W; Codispoti, G; De Filippis, N; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Silvestris, L; Calzolari, F; Sarkar, S; Spiga, D; Cinquili, M; Lacaprara, S; Biasotto, M; Farina, F; Merlo, M; Belforte, S; Kavka, C; Sala, L; Harvey, J; Hufnagel, D; Fanzago, F; Corvo, M; Magini, N; Rehn, J; Toteva, Z; Feichtinger, D; Tuura, L; Eulisse, G; Bockelman, B; Lundstedt, C; Egeland, R; Evans, D; Mason, D; Gutsche, O; Sexton-Kennedy, L; Dagenhart, D W; Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V; Fisk, I; McBride, P; Bauerdick, L; Bakken, J; Rossman, P; Wicklund, E; Wu, Y; Jones, C; Kuznetsov, V; Riley, D; Dolgert, A; van Lingen, F; Narsky, I; Paus, C; Klute, M; Gomez-Ceballos, G; Piedra-Gomez, J; Miller, M; Mohapatra, A; Lazaridis, C; Bradley, D; Elmer, P; Wildish, T; Wuerthwein, F; Letts, J; Bourilkov, D; Kim, B; Smith, P; Hernandez, J M; Caballero, J; Delgado, A; Flix, J; Cabrillo-Bartolome, I; Kasemann, M; Flossdorf, A; Stadie, H; Kreuzer, P; Khomitch, A; Hof, C; Zeidler, C; Kalini, S; Trunov, A; Saout, C; Felzmann, U; Metson, S; Newbold, D; Geddes, N; Brew, C; Jackson, J; Wakefield, S; De Weirdt, S; Adler, V; Maes, J; Van Mulders, P; Villella, I; Hammad, G; Pukhaeva, N; Kurca, T; Semneniouk, I; Guan, W; Lajas, J A; Teodoro, D; Gregores, E; Baquero, M; Shehzad, A; Kadastik, M; Kodolova, O; Chao, Y; Ming Kuo, C; Filippidis, C; Walzel, G; Han, D; Kalinowski, A; Giro de Almeida, N M; Panyam, N

    2008-01-01

    CMS expects to manage many tens of petabytes of data to be distributed over several computing centers around the world. The CMS distributed computing and analysis model is designed to serve, process and archive the large number of events that will be generated when the CMS detector starts taking data. The underlying concepts and the overall architecture of the CMS data and workflow management system will be presented. In addition the experience in using the system for MC production, initial detector commissioning activities and data analysis will be summarized.

  6. DECENTRALIZATION IN THE SYSTEM OF NATIONAL ECONOMY MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Stepaniuk Nataliia

    2018-03-01

    Introduction. The article investigates theoretical approaches to the notion of decentralization in the system of management of the national economy. Purpose. It has been found that, for the effective functioning of the state, it is necessary to achieve a rational relationship between centralization and decentralization and to change the role, responsibility and powers of local self-government and the executive authority. Results. It is substantiated that most scientific works are devoted to the decentralization of power, the reform of public finances, and the transfer of power to the local level as a guarantee of the development of the national economy. It is emphasized that the main idea of decentralization is to transfer competence to local government to address local needs. Consequently, decentralization is closely linked to the organization of public administration and promotes effective relations between state authorities and local government. The main advantages of decentralization are: simplified management of the local area, closer connection with civil society, increased transparency of managerial decisions and a higher level of responsibility to the territorial community. Organizational and legal aspects of the introduction of decentralization in Ukraine are considered. It is noted that the course toward decentralization presents both prospects and implementation problems. Among the main risks of decentralization are inconsistencies between the development of separate territorial units and strategic goals, the loss of state mobility, the reduction of workplaces in the state apparatus, and the complication of coordination between levels of management. Conclusions.
    It has been determined that, for the reform to be efficient and effective, decentralization principles must be widely introduced in the administrative, political, budgetary, financial and social spheres

  7. Corruption and government spending : The role of decentralization

    OpenAIRE

    Korneliussen, Kristine

    2009-01-01

    This thesis points to a possible weakness of the empirical literature on corruption and government spending. That corruption affects the composition of government spending, and in particular that it affects education and health spending adversely, seems to be empirically well established. However, there exists additional literature closely related to corruption and government spending, treating (i) a relationship between corruption and decentralization, and (ii) a relationship between decentral...

  8. Emergent Semantics Interoperability in Large-Scale Decentralized Information Systems

    CERN Document Server

    Cudré-Mauroux, Philippe

    2008-01-01

    Peer-to-peer systems are evolving with new information-system architectures, leading to the idea that the principles of decentralization and self-organization will offer new approaches in informatics, especially for systems that scale with the number of users or for which central authorities do not prevail. This book describes a new way of building global agreements (semantic interoperability) based only on decentralized, self-organizing interactions.

  9. Decentralization and Distribution Primary Education Access in Indonesia 2014

    OpenAIRE

    Benita, Novinaz

    2016-01-01

    This paper examines decentralization and the distribution of access to primary schooling in Indonesia. Data come from the Indonesia National Socio-Economic Survey 2014 and statistical reports from the Ministry of Education, the Ministry of Finance, and the General Election Commission. Descriptive statistics are used to describe the spatial distribution of decentralization in the primary education system and the distribution of access to primary education. The results show district disparities in the decentralization of primary...

  10. Jealousy Graphs: Structure and Complexity of Decentralized Stable Matching

    Science.gov (United States)

    2013-01-01

    The stable matching...market. Using this structure, we are able to provide a finer analysis of the complexity of a subclass of decentralized matching markets.

  11. Computational State Transfer: An Architectural Style for Decentralized Systems

    OpenAIRE

    Gorlick, Michael Martin

    2016-01-01

    A decentralized system is a distributed system that operates under multiple, distinct spheres of authority in which collaboration among the principals is characterized by mutual distrust. Now commonplace, decentralized systems appear in a number of disparate domains: commerce, logistics, medicine, software development, manufacturing, and financial trading to name but a few. These systems of systems face two overlapping demands: security and safety to protect against errors, omissions and thre...

  12. Corruption, accountability, and decentralization: theory and evidence from Mexico

    OpenAIRE

    Goodspeed, Timothy J.

    2011-01-01

    One of the fundamental tenets of fiscal federalism is that, absent various sorts of externalities, decentralized governments that rely on own-source revenues should be more fiscally efficient than decentralized governments that rely on grant financing. The argument relies in part on the idea that sub-national governments, being closer to the people, are more accountable to their citizens. Accountability to citizens is also important in understanding the presence of corruption in government. Thi...

  13. Decentralized investment management: evidence from the pension fund industry

    OpenAIRE

    Blake, David; Timmermann, Allan; Tonks, Ian; Wermers, Russ

    2010-01-01

    The past few decades have seen a major shift from centralized to decentralized investment management by pension fund sponsors, despite the increased coordination problems that this brings. Using a unique, proprietary dataset of pension sponsors and managers, we identify two secular decentralization trends: sponsors switched (i) from generalist (balanced) to specialist managers across asset classes and (ii) from single to multiple competing managers within each asset class. We study the effe...

  14. Wildfire: distributed, Grid-enabled workflow construction and execution

    Directory of Open Access Journals (Sweden)

    Issac Praveen

    2005-03-01

    Abstract Background We observe two trends in bioinformatics: (i) analyses are increasing in complexity, often requiring several applications to be run as a workflow; and (ii) multiple CPU clusters and Grids are available to more scientists. The traditional solution to the problem of running workflows across multiple CPUs required programming, often in a scripting language such as Perl. Programming places such solutions beyond the reach of many bioinformatics consumers. Results We present Wildfire, a graphical user interface for constructing and running workflows. Wildfire borrows user interface features from Jemboss and adds a drag-and-drop interface allowing the user to compose EMBOSS (and other) programs into workflows. For execution, Wildfire uses GEL, the underlying workflow execution engine, which can exploit available parallelism on multiple-CPU machines including Beowulf-class clusters and Grids. Conclusion Wildfire simplifies the tasks of constructing and executing bioinformatics workflows.

  15. Subsidiarity in Principle: Decentralization of Water Resources Management

    Directory of Open Access Journals (Sweden)

    Ryan Stoa

    2014-05-01

    The subsidiarity principle of water resources management suggests that water management and service delivery should take place at the lowest appropriate governance level. The principle is attractive for several reasons, primarily because: (1) the governance level can be reduced to reflect environmental characteristics, such as the hydrological borders of a watershed that would otherwise cross administrative boundaries; (2) decentralization promotes community and stakeholder engagement when decision-making is localized; (3) inefficiencies are reduced by eliminating reliance on central government bureaucracies and budgetary constraints; and (4) laws and institutions can be adapted to reflect localized conditions at a scale where integrated natural resources management and climate change adaptation is more focused. Accordingly, the principle of subsidiarity has been welcomed by many states committed to decentralized governance, integrated water resources management, and/or civic participation. However, applications of decentralization have not been uniform, and in some cases have produced frustrating outcomes for states and water resources. Successful decentralization strategies are heavily dependent on dedicated financial resources and human resource capacity. This article explores the nexus between the principle of subsidiarity and the enabling environment, in the hope of articulating factors likely to contribute to, or detract from, the success of decentralized water resources management. Case studies from Haiti, Rwanda, and the United States’ Florida Water Management Districts provide examples of the varied stages of decentralization.

  16. FISCAL DECENTRALIZATION IN ALBANIA: EFFECTS OF TERRITORIAL AND ADMINISTRATIVE REFORM

    Directory of Open Access Journals (Sweden)

    Mariola KAPIDANI

    2015-12-01

    The principle of decentralization is fundamental to the establishment and operation of local government. It refers to the process of redistributing authority and responsibility for certain functions from central government to local government units. In many countries, particularly developing countries, fiscal decentralization and local governance issues are regarded as highly important to economic development. According to Stigler (1957), fiscal decentralization brings government closer to the people, and a representative government works best when it is closer to the people. Albania is still undergoing the process of decentralization in all aspects: political, economic, fiscal and administrative. The decentralization process is essential to sustainable economic growth and the efficient allocation of resources to meet the needs of citizens. Albania has a fragmented system of local government with a very large number of local government units that have neither sufficient fiscal nor human capacity to provide public services at a reasonable level (World Bank). However, the recent administrative and territorial reform is expected to have a significant impact on many issues related to local autonomy and revenue management. This paper focuses on the progress of the fiscal decentralization process in Albania, stating key issues and ongoing challenges for an improved system. The purpose of this study is to analyze the effects of the recent territorial reform, identifying problems and opportunities to be addressed in the future.

  17. Rethinking Decentralization in Education in terms of Administrative Problems

    Directory of Open Access Journals (Sweden)

    Vasiliki Papadopoulou

    2013-11-01

    The general purpose of this study is to thoroughly examine decentralization in education according to the literature and previous research, and to discuss the applicability of educational decentralization practices in Turkey. The literature was reviewed for the study and findings reported. It has been observed that decentralization practices in education were implemented in many countries after the 1980s. It is obvious that the educational system in Turkey has difficulty in meeting needs and encounters many problems due to its present centralist structure. Educational decentralization can provide effective solutions for stakeholder engagement, educational financing, and problems in decision making and operation within the education system. However, the present state of local governments, the legal framework, and geographical, cultural and social features indicate that Turkey’s conditions are not ready for decentralization in education. A decentralization model realized in the long run according to Turkey’s conditions, and as a result of a social consensus, can help resolve the problems of the Turkish education system.

  18. FISCAL DECENTRALIZATION IN THE DRC: EVIDENCE OF REVENUE ASSIGNMENT

    Directory of Open Access Journals (Sweden)

    Angelita Kithatu-Kiwekete

    2017-07-01

    The rationale for central government to devolve resources for service provision has been debated in the decentralization literature. Decentralization enhances democracy, encourages participation in local development initiatives and promotes local political accountability. This discourse has been complemented by the implementation of fiscal decentralization to increase the ability of sub-national government to finance municipal service delivery. Fiscal decentralization has often been adopted by African states since the onset of the New Public Management era in an effort to improve the standard of governance. The concern is that African states have taken minimal steps to adopt fiscal devolution that promotes revenue assignment, which in turn limits sub-national governments’ ability to generate own-source revenues. This article examines the revenue assignment function of fiscal decentralization in the Democratic Republic of Congo (DRC) in the light of decentralization concerns that have been raised by civil society as the country charts its course to democracy. The article is a desktop study that considers documents and policies in the DRC at the national, provincial and local level as far as state revenue sources are concerned. Revenue assignment should enable the DRC’s provinces and local authorities to generate significant revenue independently. However, post-conflict reconstruction and development efforts in the Great Lakes region and in the DRC have largely isolated decentralization, which would otherwise entrench local fiscal autonomy in financing for local services and development. The article concludes that revenue generation for local authorities and the provinces in the DRC is still very centralized by the national government. The article proposes policy recommendations that will be useful for the country to ensure that decentralization efforts include fiscal devolution to enhance the financing for local development initiatives.

  19. Declarative Modelling and Safe Distribution of Healthcare Workflows

    DEFF Research Database (Denmark)

    Hildebrandt, Thomas; Mukkamala, Raghava Rao; Slaats, Tijs

    2012-01-01

    We present a formal technique for safe distribution of workflow processes described declaratively as Nested Condition Response (NCR) Graphs and apply the technique to a distributed healthcare workflow. Concretely, we provide a method to synthesize from an NCR Graph and any distribution of its events......-organizational case management. The contribution of this paper is to adapt the technique to allow for nested processes and milestones and to apply it to a healthcare workflow identified in a previous field study at Danish hospitals....
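    The condition/response semantics behind record 19 can be illustrated with a small interpreter: an event is enabled when all of its condition events have already been executed, and executing an event adds its response events to a set of pending obligations; the workflow is in an accepting state when nothing is pending. This is a minimal sketch of the flat (un-nested) rules only; the paper's nesting, milestones and distribution technique are omitted, and the two-event prescribe/sign workflow is an invented example.

```python
class CRGraph:
    """Minimal condition/response workflow graph (no nesting or milestones)."""

    def __init__(self, events, conditions, responses):
        self.events = set(events)
        self.conditions = conditions  # event -> set of prerequisite events
        self.responses = responses    # event -> set of events it obligates
        self.executed = set()
        self.pending = set()

    def enabled(self, e):
        # An event is enabled once all of its condition events have run.
        return self.conditions.get(e, set()) <= self.executed

    def execute(self, e):
        assert self.enabled(e), f"{e} is not enabled"
        self.executed.add(e)
        self.pending.discard(e)                       # obligation fulfilled
        self.pending |= self.responses.get(e, set())  # new obligations

    def accepting(self):
        # Accepting run: no outstanding response obligations.
        return not self.pending
```

For instance, if "sign" has "prescribe" as a condition and "prescribe" has "sign" as a response, then "sign" is blocked until "prescribe" runs, after which it becomes both enabled and obligatory.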

  20. Building and documenting workflows with python-based snakemake

    OpenAIRE

    Köster, Johannes; Rahmann, Sven

    2012-01-01

    Snakemake is a novel workflow engine with a simple Python-derived workflow definition language and an optimizing execution environment. It is the first system that supports multiple named wildcards (or variables) in the input and output filenames of each rule definition. It also makes it possible to write human-readable workflows that document themselves. We have found Snakemake especially useful for building high-throughput sequencing data analysis pipelines and present examples from this area....
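    The named-wildcard rule definitions that record 20 describes look like the following Snakefile sketch (the file names and shell commands are invented for illustration, not taken from the paper). Here {sample} and {lane} are two named wildcards appearing in both input and output filenames, from which Snakemake infers the dependency graph:

```
rule all:
    input:
        expand("mapped/{sample}_{lane}.bam", sample=["A", "B"], lane=[1, 2])

rule map_reads:
    input:
        "reads/{sample}_{lane}.fastq"
    output:
        "mapped/{sample}_{lane}.bam"
    shell:
        "bwa mem ref.fa {input} | samtools sort -o {output}"
```

Requesting the targets in `rule all` causes `map_reads` to run once per (sample, lane) pair, with the wildcards bound by matching the output filename pattern.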

  1. Automated data reduction workflows for astronomy. The ESO Reflex environment

    Science.gov (United States)

    Freudling, W.; Romaniello, M.; Bramich, D. M.; Ballester, P.; Forchi, V.; García-Dabló, C. E.; Moehler, S.; Neeser, M. J.

    2013-11-01

    Context. Data from complex modern astronomical instruments often consist of a large number of different science and calibration files, and their reduction requires a variety of software tools. The execution chain of the tools represents a complex workflow that needs to be tuned and supervised, often by individual researchers who are not necessarily experts in any specific instrument. Aims: The efficiency of data reduction can be improved by using automatic workflows to organise data and execute a sequence of data reduction steps. To realize such efficiency gains, we designed a system that allows intuitive representation, execution and modification of the data reduction workflow, and has facilities for inspection and interaction with the data. Methods: The European Southern Observatory (ESO) has developed Reflex, an environment to automate data reduction workflows. Reflex is implemented as a package of customized components for the Kepler workflow engine. Kepler provides the graphical user interface to create an executable flowchart-like representation of the data reduction process. Key features of Reflex are a rule-based data organiser, infrastructure to re-use results, thorough book-keeping, data progeny tracking, interactive user interfaces, and a novel concept to exploit information created during data organisation for the workflow execution. Results: Automated workflows can greatly increase the efficiency of astronomical data reduction. In Reflex, workflows can be run non-interactively as a first step. Subsequent optimization can then be carried out while transparently re-using all unchanged intermediate products. We found that such workflows enable the reduction of complex data by non-expert users and minimize mistakes due to book-keeping errors. Conclusions: Reflex includes novel concepts to increase the efficiency of astronomical data processing. While Reflex is a specific implementation of astronomical scientific workflows within the Kepler workflow

  2. A Model of Workflow Composition for Emergency Management

    Science.gov (United States)

    Xin, Chen; Bin-ge, Cui; Feng, Zhang; Xue-hui, Xu; Shan-shan, Fu

    The commonly used workflow technology is not flexible enough to deal with concurrent emergency situations. This paper proposes a novel model for defining emergency plans, in which workflow segments appear as a constituent part. A formal abstraction comprising four operations is defined to compose workflow segments under constraint rules. A software system for constructing and composing business process resources has been implemented and integrated into the Emergency Plan Management Application System.
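
    The abstract does not name its four composition operations; sequence, parallel split, exclusive choice and iteration are a plausible set from the workflow literature. A hypothetical sketch under that assumption, with workflow segments modelled as functions from a state dictionary to a new state dictionary:

```python
# Hypothetical composition operators for workflow segments. The four
# operations assumed here (sequence, parallel, choice, loop) are an
# illustration, not the paper's actual formal abstraction.

def sequence(f, g):
    return lambda state: g(f(state))

def parallel(f, g):
    # Apply both segments to the same input and merge the resulting states.
    return lambda state: {**f(state), **g(state)}

def choice(pred, f, g):
    return lambda state: f(state) if pred(state) else g(state)

def loop(pred, f):
    def run(state):
        while pred(state):
            state = f(state)
        return state
    return run

# Illustrative emergency-plan segments.
alert = lambda s: {**s, "alerted": True}
evacuate = lambda s: {**s, "evacuated": True}
plan = sequence(alert, evacuate)
result = plan({"severity": 3})
```

    Because each operator returns another segment, composed plans can themselves be composed, which is the property that lets workflow segments act as constituent parts of larger emergency plans.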

  3. NASA scheduling technologies

    Science.gov (United States)

    Adair, Jerry R.

    1994-01-01

    This paper is a consolidated report on ten major planning and scheduling systems that have been developed by the National Aeronautics and Space Administration (NASA). A description of each system, its components, and how it could be potentially used in private industry is provided in this paper. The planning and scheduling technology represented by the systems ranges from activity based scheduling employing artificial intelligence (AI) techniques to constraint based, iterative repair scheduling. The space related application domains in which the systems have been deployed vary from Space Shuttle monitoring during launch countdown to long term Hubble Space Telescope (HST) scheduling. This paper also describes any correlation that may exist between the work done on different planning and scheduling systems. Finally, this paper documents the lessons learned from the work and research performed in planning and scheduling technology and describes the areas where future work will be conducted.

  4. Decentralized resource allocation and load scheduling for multicommodity smart energy systems

    NARCIS (Netherlands)

    Blaauwbroek, N.; Nguyen, H.P.; Konsman, M.J.; Shi, H.; Kamphuis, I.G.; Kling, W.L.

    2015-01-01

    Due to the expected growth in district heating systems in combination with the development of hybrid energy appliances such as heat pumps (HPs) and micro-combined heat and power (CHP) installations, new opportunities arise for the management of multicommodity energy systems, including electricity,

  5. A Decentralized Approach to Formation Flight Routing

    NARCIS (Netherlands)

    Visser, H.G.; Lopes dos Santos, Bruno F.; Verhagen, C.M.A.

    2016-01-01

    This paper describes the development of an optimization-based cooperative planning system for the efficient routing and scheduling of flight formations. This study considers the use of formation flight as a means to reduce the overall fuel consumption of civil aviation in long-haul operations. It

  6. Business and scientific workflows a web service-oriented approach

    CERN Document Server

    Tan, Wei

    2013-01-01

    Focuses on how to use web service computing and service-based workflow technologies to develop timely, effective workflows for both business and scientific fields Utilizing web computing and Service-Oriented Architecture (SOA), Business and Scientific Workflows: A Web Service-Oriented Approach focuses on how to design, analyze, and deploy web service-based workflows for both business and scientific applications in many areas of healthcare and biomedicine. It also discusses and presents the recent research and development results. This informative reference features app

  7. Design, Modelling and Analysis of a Workflow Reconfiguration

    DEFF Research Database (Denmark)

    Mazzara, Manuel; Abouzaid, Faisal; Dragoni, Nicola

    2011-01-01

    This paper describes a case study involving the reconfiguration of an office workflow. We state the requirements on a system implementing the workflow and its reconfiguration, and describe the system’s design in BPMN. We then use an asynchronous pi-calculus and Web.1 to model and verify the design.

  8. A Strategy for an MLS Workflow Management System

    National Research Council Canada - National Science Library

    Kang, Myong H; Froscher, Judith N; Eppinger, Brian J; Moskowitz, Ira S

    1999-01-01

    .... Therefore, DoD needs MLS workflow management systems (WFMS) to enable globally distributed users and existing applications to cooperate across classification domains to achieve mission critical goals...

  9. Modeling Complex Workflow in Molecular Diagnostics

    Science.gov (United States)

    Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan

    2010-01-01

    One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844

  10. Deriving DICOM surgical extensions from surgical workflows

    Science.gov (United States)

    Burgert, O.; Neumuth, T.; Gessat, M.; Jacobs, S.; Lemke, H. U.

    2007-03-01

    The generation, storage, transfer, and representation of image data in radiology are standardized by DICOM. To cover the needs of image-guided surgery, or computer-assisted surgery in general, one needs to handle patient information besides image data. A large number of objects must be defined in DICOM to address the needs of surgery. We propose an analysis process based on Surgical Workflows that helps to identify these objects together with use cases and requirements motivating their specification. As a first result, we confirmed the need to specify the representation and transfer of geometric models. The analysis of Surgical Workflows has shown that geometric models are widely used to represent planned procedure steps, surgical tools, anatomical structures, or prostheses in the context of surgical planning, image guided surgery, augmented reality, and simulation. So far, the models are stored and transferred in several file formats bare of contextual information. The standardization of data types including contextual information and specifications for handling of geometric models allows a broader usage of such models. This paper explains the specification process leading to Geometry Mesh Service Object Pair classes. This process can be a template for the definition of further DICOM classes.

  11. Workflow management for a cosmology collaboratory

    International Nuclear Information System (INIS)

    Loken, Stewart C.; McParland, Charles

    2001-01-01

    The Nearby Supernova Factory Project will provide a unique opportunity to bring together simulation and observation to address crucial problems in particle and nuclear physics. Its goal is to significantly enhance our understanding of the nuclear processes in supernovae and to improve our ability to use both Type Ia and Type II supernovae as reference light sources (standard candles) in precision measurements of cosmological parameters. Over the past several years, astronomers and astrophysicists have been conducting in-depth sky searches with the goal of identifying supernovae in their earliest evolutionary stages and, during the 4 to 8 weeks of their most "explosive" activity, measuring their changing magnitude and spectra. The search program currently under development at LBNL is an earth-based observation program utilizing observational instruments at Haleakala and Mauna Kea, Hawaii and Mt. Palomar, California. This new program provides a demanding testbed for the integration of computational, data management and collaboratory technologies. A critical element of this effort is the use of emerging workflow management tools to permit collaborating scientists to manage data processing and storage and to integrate advanced supernova simulation into the real-time control of the experiments. This paper describes the workflow management framework for the project, discusses security and resource allocation requirements and reviews emerging tools to support this important aspect of collaborative work.

  12. The Prosthetic Workflow in the Digital Era

    Directory of Open Access Journals (Sweden)

    Lidia Tordiglione

    2016-01-01

    Full Text Available The purpose of this retrospective study was to clinically evaluate the benefits of adopting a full digital workflow for the implementation of fixed prosthetic restorations on natural teeth. To evaluate the effectiveness of these protocols, treatment plans were drawn up for 15 patients requiring rehabilitation of one or more natural teeth. All the dental impressions were taken using a Planmeca PlanScan® (Planmeca OY, Helsinki, Finland) intraoral scanner, which provided digital casts on which the restorations were digitally designed using Exocad® (Exocad GmbH, Germany, 2010) software and fabricated by CAM processing on 5-axis milling machines. A total of 28 single crowns were made from monolithic zirconia, 12 vestibular veneers from lithium disilicate, and 4 three-quarter vestibular veneers with palatal extension. While the restorations were applied, the authors could clinically appreciate the excellent match between the digitally produced prosthetic design and the cemented prostheses, which never required any occlusal or proximal adjustment. Out of all the restorations applied, only one exhibited premature failure and was replaced with no other complications or need for further scanning. From the clinical experience gained using a full digital workflow, the authors can confirm that these work processes enable the fabrication of clinically reliable restorations, with all the benefits that digital methods bring to the dentist, the dental laboratory, and the patient.

  13. Multi-level meta-workflows: new concept for regularly occurring tasks in quantum chemistry.

    Science.gov (United States)

    Arshad, Junaid; Hoffmann, Alexander; Gesing, Sandra; Grunzke, Richard; Krüger, Jens; Kiss, Tamas; Herres-Pawlis, Sonja; Terstyanszky, Gabor

    2016-01-01

    In Quantum Chemistry, many tasks recur frequently, e.g. geometry optimizations, benchmarking series, etc. Here, workflows can help to reduce the time spent on manual job definition and output extraction. These workflows are executed on computing infrastructures and may require large computing and data resources. Scientific workflows hide these infrastructures and the resources needed to run them. It requires significant effort and specific expertise to design, implement and test these workflows. Many of these workflows are complex and monolithic entities that can be used for particular scientific experiments. Hence, their modification is not straightforward, which makes it almost impossible to share them. To address these issues we propose developing atomic workflows and embedding them in meta-workflows. Atomic workflows deliver a well-defined, research-domain-specific function. Publishing workflows in repositories enables workflow sharing inside and/or among scientific communities. We formally specify atomic and meta-workflows in order to define data structures to be used in repositories for uploading and sharing them. Additionally, we present a formal description focused on the orchestration of atomic workflows into meta-workflows. We investigated the operations that represent basic functionalities in Quantum Chemistry, developed the relevant atomic workflows and combined them into meta-workflows. Having these workflows, we defined the structure of the Quantum Chemistry workflow library and uploaded these workflows to the SHIWA Workflow Repository. Graphical Abstract: Meta-workflows and embedded workflows in the template representation.
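
    The idea of atomic workflows orchestrated by a meta-workflow can be sketched as named callables registered in a toy repository. The function and step names below are illustrative; the real SHIWA repository stores workflow descriptions for workflow engines, not Python callables:

```python
# Sketch: atomic workflows registered in a toy repository and orchestrated
# by name in a meta-workflow. All names and numbers are illustrative.

REPOSITORY = {}

def atomic(name):
    """Register a function as an atomic workflow."""
    def register(func):
        REPOSITORY[name] = func
        return func
    return register

@atomic("geometry_optimization")
def geometry_optimization(molecule):
    return {"molecule": molecule, "energy": -76.4}  # placeholder result

@atomic("single_point_energy")
def single_point_energy(data):
    return {**data, "refined_energy": data["energy"] - 0.1}

def meta_workflow(steps, data):
    """Run a sequence of atomic workflows drawn from the repository."""
    for step in steps:
        data = REPOSITORY[step](data)
    return data

out = meta_workflow(["geometry_optimization", "single_point_energy"], "H2O")
```

    The payoff of this structure is the one the abstract argues for: because the meta-workflow refers to steps only by repository name, an atomic workflow can be replaced or shared without touching the orchestration.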

  14. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Science.gov (United States)

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workloads, called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertoire of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as an overlapped mechanism to the DVFS intra-host technique.
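
    The energy saving that DVFS exploits follows from the standard dynamic-power model for CMOS, P = C * V^2 * f: for a fixed number of cycles the runtime is cycles/f, so energy scales with C * V^2 * cycles and lowering the voltage is what saves energy. A back-of-the-envelope sketch; the capacitance and voltage/frequency pairs are illustrative, not figures from the paper:

```python
# Back-of-the-envelope DVFS energy model: dynamic power P = C * V**2 * f.
# Capacitance and the two operating points below are illustrative values.

def energy_joules(cycles, capacitance, voltage, freq_hz):
    power = capacitance * voltage ** 2 * freq_hz  # watts
    runtime = cycles / freq_hz                    # seconds
    return power * runtime

CYCLES = 3e9  # a fixed workload
high = energy_joules(CYCLES, 1e-9, 1.2, 3.0e9)  # "performance"-style point
low = energy_joules(CYCLES, 1e-9, 0.9, 1.5e9)   # "powersave"-style point
```

    The frequency cancels out of the energy for a fixed cycle count, which is why governors that can also lower voltage (not just frequency) are the ones that reduce energy rather than merely stretching runtime.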

  15. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Directory of Open Access Journals (Sweden)

    Iván Tomás Cotes-Ruiz

    Full Text Available Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workloads, called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertoire of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as an overlapped mechanism to the DVFS intra-host technique.

  16. Comparative LCA of decentralized wastewater treatment alternatives for non-potable urban reuse.

    Science.gov (United States)

    Opher, Tamar; Friedler, Eran

    2016-11-01

    Municipal wastewater (WW) effluent represents a reliable and significant source for reclaimed water, very much needed nowadays. Water reclamation and reuse have become an attractive option for conserving and extending available water sources. The decentralized approach to domestic WW treatment benefits from the advantages of source separation, which makes simple small-scale systems and on-site reuse available; these can be constructed on a short time schedule and occasionally upgraded with new technological developments. In this study we perform a Life Cycle Assessment to compare the environmental impacts of four alternatives for a hypothetical city's water-wastewater service system. The baseline alternative is the most common, centralized approach for WW treatment, in which WW is conveyed to and treated in a large wastewater treatment plant (WWTP) and is then discharged to a stream. The three alternatives represent different scales of distribution of the WW treatment phase, along with urban irrigation and domestic non-potable water reuse (toilet flushing). The first alternative includes centralized treatment at a WWTP, with part of the reclaimed WW (RWW) supplied back to the urban consumers. The second and third alternatives implement decentralized greywater (GW) treatment with local reuse, one at cluster level (320 households) and one at building level (40 households). Life cycle impact assessment results show a consistent disadvantage of the prevailing centralized approach under local conditions in Israel, where seawater desalination is the marginal source of water supply. The alternative of source separation and GW reuse at cluster level seems to be the most preferable one, though its environmental performance is only slightly better than GW reuse at building level.
Centralized WW treatment with urban reuse of WWTP effluents is not advantageous over decentralized treatment of GW because the supply of RWW back to consumers is very costly in materials and

  17. Security for decentralized health information systems.

    Science.gov (United States)

    Bleumer, G

    1994-02-01

    Health care information systems must reflect at least two basic characteristics of the health care community: the increasing mobility of patients and the personal liability of everyone giving medical treatment. Open distributed information systems bear the potential to reflect these requirements. But the market for open information systems and operating systems hardly provides secure products today. This 'missing link' is approached by the prototype SECURE Talk, which provides secure transmission and archiving of files on top of an existing operating system. Its services may be utilized by existing medical applications. SECURE Talk demonstrates secure communication utilizing only standard hardware. Its message is that cryptography (and in particular asymmetric cryptography) is practical for many medical applications even if implemented in software. All mechanisms are implemented in software in order to be executable on standard hardware. One can investigate more or less decentralized forms of public key management and the performance of many different cryptographic mechanisms. The throughput of, e.g., hybrid encryption and decryption (RSA+DES-PCBC) is about 300 kbit/s; that of signing and verifying is approximately the same using RSA with a DES hash function. The internal speed, without disk accesses etc., is about 1.1 Mbit/s (Apple Quadra 950: MC 68040, 33 MHz, 20 MB RAM, 80 ns; RSA modulus length 512 bit).
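
    The throughput figures quoted in the record allow a quick estimate of the per-file cost of software cryptography on that hardware. A simple arithmetic sketch; the 256 KiB record size is an illustrative choice, not from the paper:

```python
# Estimate processing time from the quoted throughputs: ~300 kbit/s for
# hybrid RSA+DES encryption including disk access, ~1.1 Mbit/s internally.

def seconds_to_process(size_bytes, kbit_per_s):
    return size_bytes * 8 / (kbit_per_s * 1000)

record_bytes = 256 * 1024  # an illustrative 256 KiB medical file
with_disk = seconds_to_process(record_bytes, 300)  # about 7 seconds
internal = seconds_to_process(record_bytes, 1100)  # about 1.9 seconds
```

    Seconds rather than minutes per file is the point of the abstract's claim that software-only asymmetric cryptography was already practical for many medical applications.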

  18. Platform Architecture for Decentralized Positioning Systems

    Directory of Open Access Journals (Sweden)

    Zakaria Kasmi

    2017-04-01

    Full Text Available A platform architecture for positioning systems is essential for the realization of a flexible localization system, which interacts with other systems and supports various positioning technologies and algorithms. The decentralized processing of a position enables pushing the application-level knowledge into a mobile station and avoids communication with a central unit such as a server or a base station. In addition, the calculation of the position on low-cost and resource-constrained devices presents a challenge due to limited computing and storage capacity, as well as power supply. Therefore, we propose a platform architecture that enables the design of a system with reusable components, extensibility (e.g., with other positioning technologies), and interoperability. Furthermore, the position is computed on a low-cost device such as a microcontroller, which simultaneously performs additional tasks such as data collection or preprocessing based on an operating system. The platform architecture is designed, implemented and evaluated on the basis of two positioning systems: a field strength system and a time of arrival-based positioning system.

  19. Decentralized provenance-aware publishing with nanopublications

    Directory of Open Access Journals (Sweden)

    Tobias Kuhn

    2016-08-01

    Full Text Available Publication and archival of scientific results is still commonly considered the responsibility of classical publishing companies. Classical forms of publishing, however, which center around printed narrative articles, no longer seem well-suited in the digital age. In particular, there exist currently no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science. In this article, we propose to design scientific data publishing as a web-based bottom-up process, without top-down control of central authorities such as publishing companies. Based on a novel combination of existing concepts and technologies, we present a server network to decentrally store and archive data in the form of nanopublications, an RDF-based format to represent scientific data. We show how this approach allows researchers to publish, retrieve, verify, and recombine datasets of nanopublications in a reliable and trustworthy manner, and we argue that this architecture could be used as a low-level data publication layer to serve the Semantic Web in general. Our evaluation of the current network shows that this system is efficient and reliable.

  20. Accelerating the scientific exploration process with scientific workflows

    International Nuclear Information System (INIS)

    Altintas, Ilkay; Barney, Oscar; Cheng, Zhengang; Critchlow, Terence; Ludaescher, Bertram; Parker, Steve; Shoshani, Arie; Vouk, Mladen

    2006-01-01

    Although an increasing amount of middleware has emerged in the last few years to achieve remote data access, distributed job execution, and data management, orchestrating these technologies with minimal overhead still remains a difficult task for scientists. Scientific workflow systems improve this situation by creating interfaces to a variety of technologies and automating the execution and monitoring of the workflows. Workflow systems provide domain-independent customizable interfaces and tools that combine different tools and technologies along with efficient methods for using them. As simulations and experiments move into the petascale regime, the orchestration of long running data and compute intensive tasks is becoming a major requirement for the successful steering and completion of scientific investigations. A scientific workflow is the process of combining data and processes into a configurable, structured set of steps that implement semi-automated computational solutions of a scientific problem. Kepler is a cross-project collaboration, co-founded by the SciDAC Scientific Data Management (SDM) Center, whose purpose is to develop a domain-independent scientific workflow system. It provides a workflow environment in which scientists design and execute scientific workflows by specifying the desired sequence of computational actions and the appropriate data flow, including required data transformations, between these steps. Currently deployed workflows range from local analytical pipelines to distributed, high-performance and high-throughput applications, which can be both data- and compute-intensive. 
The scientific workflow approach offers a number of advantages over traditional scripting-based approaches, including ease of configuration, improved reusability and maintenance of workflows and components (called actors), automated provenance management, 'smart' re-running of different versions of workflow instances, on-the-fly updateable parameters, monitoring

  1. Biowep: a workflow enactment portal for bioinformatics applications.

    Science.gov (United States)

    Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano

    2007-03-08

    The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of unskilled researchers. A portal enabling them to profit from new technologies is still missing. We designed biowep, a web based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. Biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. Main workflows' processing steps are annotated on the basis of their input and output, elaboration type and application domain by using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software and the creation of

  2. Biowep: a workflow enactment portal for bioinformatics applications

    Directory of Open Access Journals (Sweden)

    Romano Paolo

    2007-03-01

    Full Text Available Abstract Background The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of unskilled researchers. A portal enabling them to profit from new technologies is still missing. Results We designed biowep, a web based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. Biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. Main workflows' processing steps are annotated on the basis of their input and output, elaboration type and application domain by using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. Conclusion We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. 
The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical

  3. Towards seamless workflows in agile data science

    Science.gov (United States)

    Klump, J. F.; Robertson, J.

    2017-12-01

    Agile workflows are a response to projects with requirements that may change over time. They prioritise rapid and flexible responses to change, preferring to adapt to changes in requirements rather than predict them before a project starts. This suits the needs of research very well because research is inherently agile in its methodology. The adoption of agile methods has made collaborative data analysis much easier in a research environment fragmented across institutional data stores, HPC, personal and lab computers and more recently cloud environments. Agile workflows use tools that share a common worldview: in an agile environment, there may be more than one valid version of data, code or environment in play at any given time. All of these versions need references and identifiers. For example, a team of developers following the git-flow conventions (github.com/nvie/gitflow) may have several active branches, one for each strand of development. These workflows allow rapid and parallel iteration while maintaining identifiers pointing to individual snapshots of data and code and allowing rapid switching between strands. In contrast, the current focus of versioning in research data management is geared towards managing data for reproducibility and long-term preservation of the record of science. While both are important goals in the persistent curation domain of the institutional research data infrastructure, current tools emphasise planning over adaptation and can introduce unwanted rigidity by insisting on a single valid version or point of truth. In the collaborative curation domain of a research project, things are more fluid. However, there is no equivalent to the "versioning iso-surface" of the git protocol for the management and versioning of research data.
At CSIRO we are developing concepts and tools for the agile management of software code and research data for virtual research environments, based on our experiences of actual data analytics projects in the

  4. Scheduling for decommissioning projects

    International Nuclear Information System (INIS)

    Podmajersky, O.E.

    1987-01-01

    This paper describes the Project Scheduling system being employed by the Decommissioning Operations Contractor at the Shippingport Station Decommissioning Project (SSDP). Results from the planning system show that the project continues to achieve its cost and schedule goals. An integrated cost and schedule control system (C/SCS) which uses the concept of earned value for measurement of performance was instituted in accordance with DOE orders. The schedule and cost variances generated by the C/SCS system are used to confirm management's assessment of project status. This paper describes the types of schedules and tools used on the SSDP project to plan and monitor the work, and identifies factors that are unique to a decommissioning project that make scheduling critical to the achievement of the project's goals. 1 fig
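The cost and schedule variances that a C/SCS reports follow the standard earned-value relations (CV = EV - AC, SV = EV - PV). A minimal sketch, with hypothetical figures:

```python
def earned_value_metrics(pv, ev, ac):
    """Standard earned-value indicators of the kind a C/SCS produces.

    pv: planned value, ev: earned value, ac: actual cost.
    """
    return {
        "cost_variance": ev - ac,      # CV > 0 means under budget
        "schedule_variance": ev - pv,  # SV > 0 means ahead of schedule
        "cpi": ev / ac,                # cost performance index
        "spi": ev / pv,                # schedule performance index
    }

# hypothetical status: planned $120k of work, earned $100k, spent $110k
status = earned_value_metrics(pv=120.0, ev=100.0, ac=110.0)
```

Negative CV and SV here would confirm a project running over cost and behind schedule, the kind of signal the SSDP system used to validate management's assessment of project status.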

  5. Program reference schedule baseline

    International Nuclear Information System (INIS)

    1986-07-01

    This Program Reference Schedule Baseline (PRSB) provides the baseline Program-level milestones and associated schedules for the Civilian Radioactive Waste Management Program. It integrates all Program-level schedule-related activities. This schedule baseline will be used by the Director, Office of Civilian Radioactive Waste Management (OCRWM), and his staff to monitor compliance with Program objectives. Chapter 1 includes brief discussions concerning the relationship of the PRSB to the Program Reference Cost Baseline (PRCB), the Mission Plan, the Project Decision Schedule, the Total System Life Cycle Cost report, the Program Management Information System report, the Program Milestone Review, annual budget preparation, and system element plans. Chapter 2 includes the identification of all Level 0, or Program-level, milestones, while Chapter 3 presents and discusses the critical path schedules that correspond to those Level 0 milestones

  6. Approximating Preemptive Stochastic Scheduling

    OpenAIRE

    Megow Nicole; Vredeveld Tjark

    2009-01-01

    We present approximation policies with constant performance guarantees for preemptive stochastic scheduling. We derive policies with a guaranteed performance ratio of 2 for scheduling jobs with release dates on identical parallel machines subject to minimizing the sum of weighted completion times. Our policies, as well as their analysis, also apply to the recently introduced, more general model of stochastic online scheduling. The performance guarantee we give matches the best result known for the corresponding determinist...
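The objective in this record, the sum of weighted completion times, is classically minimized on a single deterministic machine by Smith's weighted-shortest-processing-time (WSPT) rule. The sketch below illustrates that deterministic baseline, not the paper's stochastic policies:

```python
def wspt_order(jobs):
    # jobs: list of (processing_time, weight); Smith's rule sorts by p/w
    return sorted(jobs, key=lambda j: j[0] / j[1])

def weighted_completion_sum(jobs):
    # sum of w_j * C_j when jobs run back-to-back in the given order
    t, total = 0, 0
    for p, w in jobs:
        t += p
        total += w * t
    return total

jobs = [(3, 1), (1, 4), (2, 2)]          # hypothetical (p, w) pairs
opt = weighted_completion_sum(wspt_order(jobs))
```

For this instance WSPT schedules the jobs in order of increasing p/w and achieves the minimum over all orderings.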

  7. Revisiting Symbiotic Job Scheduling

    OpenAIRE

    Eyerman , Stijn; Michaud , Pierre; Rogiest , Wouter

    2015-01-01

    Symbiotic job scheduling exploits the fact that in a system with shared resources, the performance of jobs is impacted by the behavior of other co-running jobs. By coscheduling combinations of jobs that have low interference, the performance of a system can be increased. In this paper, we investigate the impact of using symbiotic job scheduling for increasing throughput. We find that even for a theoretically optimal scheduler, this impact is very low, despite the subs...
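A toy illustration of the coscheduling idea: pair jobs so that the summed pairwise interference is minimal. The job names and interference values are hypothetical, and exhaustive search stands in for a real symbiotic scheduler:

```python
def min_interference_pairing(jobs, cost):
    """Exhaustively split jobs into co-running pairs, minimizing total interference."""
    if not jobs:
        return 0.0, []
    first, rest = jobs[0], jobs[1:]
    best_cost, best_pairs = float("inf"), []
    for i, partner in enumerate(rest):
        sub_cost, sub_pairs = min_interference_pairing(rest[:i] + rest[i + 1:], cost)
        total = cost(first, partner) + sub_cost
        if total < best_cost:
            best_cost, best_pairs = total, [(first, partner)] + sub_pairs
    return best_cost, best_pairs

# hypothetical model: memory-bound jobs interfere with each other the most
kind = {"m1": "mem", "m2": "mem", "c1": "cpu", "c2": "cpu"}
table = {("mem", "mem"): 5.0, ("cpu", "cpu"): 1.0,
         ("mem", "cpu"): 2.0, ("cpu", "mem"): 2.0}
cost = lambda a, b: table[(kind[a], kind[b])]

total, pairs = min_interference_pairing(["m1", "m2", "c1", "c2"], cost)
```

The optimal pairing mixes memory-bound and compute-bound jobs, which is exactly the low-interference combination symbiotic scheduling seeks.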

  8. Pathology economic model tool: a novel approach to workflow and budget cost analysis in an anatomic pathology laboratory.

    Science.gov (United States)

    Muirhead, David; Aoun, Patricia; Powell, Michael; Juncker, Flemming; Mollerup, Jens

    2010-08-01

    The need for higher efficiency, maximum quality, and faster turnaround time is a continuous focus for anatomic pathology laboratories and drives changes in work scheduling, instrumentation, and management control systems. To determine the costs of generating routine, special, and immunohistochemical microscopic slides in a large, academic anatomic pathology laboratory using a top-down approach. The Pathology Economic Model Tool was used to analyze workflow processes at The Nebraska Medical Center's anatomic pathology laboratory. Data from the analysis were used to generate complete cost estimates, which included not only materials, consumables, and instrumentation but also specific labor and overhead components for each of the laboratory's subareas. The cost data generated by the Pathology Economic Model Tool were compared with the cost estimates generated using relative value units. Despite the use of automated systems for different processes, the workflow in the laboratory was found to be relatively labor intensive. The effect of labor and overhead on per-slide costs was significantly underestimated by traditional relative-value unit calculations when compared with the Pathology Economic Model Tool. Specific workflow defects with significant contributions to the cost per slide were identified. The cost of providing routine, special, and immunohistochemical slides may be significantly underestimated by traditional methods that rely on relative value units. Furthermore, a comprehensive analysis may identify specific workflow processes requiring improvement.
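The contrast the study draws can be sketched as a simple top-down allocation: a per-slide cost that includes labor and overhead versus a materials-only estimate. All figures below are hypothetical:

```python
def cost_per_slide(labor, overhead, consumables, instruments, slides):
    # top-down: all subarea costs, including labor and overhead,
    # are allocated across the annual slide volume
    return (labor + overhead + consumables + instruments) / slides

full = cost_per_slide(labor=400_000, overhead=150_000,
                      consumables=90_000, instruments=60_000, slides=100_000)
materials_only = cost_per_slide(0, 0, 90_000, 60_000, slides=100_000)
```

With these invented numbers the materials-only figure understates the true per-slide cost severalfold, which mirrors the paper's finding that labor and overhead dominate.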

  9. Managing Library IT Workflow with Bugzilla

    Directory of Open Access Journals (Sweden)

    Nina McHale

    2010-09-01

    Full Text Available Prior to September 2008, all technology issues at the University of Colorado Denver's Auraria Library were reported to a dedicated departmental phone line. A variety of staff changes necessitated a more formal means of tracking, delegating, and resolving reported issues, and the department turned to Bugzilla, an open source bug tracking application designed by Mozilla.org developers. While designed with software development bug tracking in mind, Bugzilla can be easily customized and modified to serve as an IT ticketing system. Twenty-three months and over 2300 trouble tickets later, Auraria's IT department workflow is much smoother and more efficient. This article includes two Perl Template Toolkit code samples for customizing Bugzilla screens for use in a library environment; readers will be able to easily replicate the project in their own environments.

  10. Swabs to genomes: a comprehensive workflow

    Directory of Open Access Journals (Sweden)

    Madison I. Dunitz

    2015-05-01

    Full Text Available The sequencing, assembly, and basic analysis of microbial genomes, once a painstaking and expensive undertaking, has become much easier for research labs with access to standard molecular biology and computational tools. However, there is a confusing variety of options available for DNA library preparation and sequencing, and inexperience with bioinformatics can pose a significant barrier to entry for many who may be interested in microbial genomics. The objective of the present study was to design, test, troubleshoot, and publish a simple, comprehensive workflow from the collection of an environmental sample (a swab) to a published microbial genome, empowering even a lab or classroom with limited resources and bioinformatics experience to perform it.

  11. The P2P approach to interorganizational workflows

    NARCIS (Netherlands)

    Aalst, van der W.M.P.; Weske, M.H.; Dittrich, K.R.; Geppert, A.; Norrie, M.C.

    2001-01-01

    This paper describes in an informal way the Public-To-Private (P2P) approach to interorganizational workflows, which is based on a notion of inheritance. The approach consists of three steps: (1) create a common understanding of the interorganizational workflow by specifying a shared public

  12. Reasoning about repairability of workflows at design time

    NARCIS (Netherlands)

    Tagni, Gaston; Ten Teije, Annette; Van Harmelen, Frank

    2009-01-01

    This paper describes an approach for reasoning about the repairability of workflows at design time. We propose a heuristic-based analysis of a workflow that aims at evaluating its definition, considering different design aspects and characteristics that affect its repairability (called repairability

  13. Design decisions in workflow management and quality of work.

    NARCIS (Netherlands)

    Waal, B.M.E. de; Batenburg, R.

    2009-01-01

    In this paper, the design and implementation of a workflow management (WFM) system in a large Dutch social insurance organisation is described. The effect of workflow design decisions on the quality of work is explored theoretically and empirically, using the model of Zur Mühlen as a frame of

  14. Conceptual framework and architecture for service mediating workflow management

    NARCIS (Netherlands)

    Hu, Jinmin; Grefen, P.W.P.J.

    2003-01-01

    This paper proposes a three-layer workflow concept framework to realize workflow enactment flexibility by dynamically binding activities to their implementations at run time. A service mediating layer is added to bridge business process definition and its implementation. Based on this framework, we

  15. Building and documenting workflows with python-based snakemake

    NARCIS (Netherlands)

    J. Köster (Johannes); S. Rahmann (Sven)

    2012-01-01

    Snakemake is a novel workflow engine with a simple Python-derived workflow definition language and an optimizing execution environment. It is the first system that supports multiple named wildcards (or variables) in input and output filenames of each rule definition. It also allows to
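A sketch of the named-wildcard feature described above, as a Snakefile fragment; the file layout and the bwa/samtools command are illustrative, not from the paper:

```
# Snakefile sketch: two named wildcards, {sample} and {lane}, appear in
# both input and output; requesting mapped/A.1.bam binds sample=A, lane=1
rule map_reads:
    input:
        "data/{sample}.{lane}.fastq"
    output:
        "mapped/{sample}.{lane}.bam"
    shell:
        "bwa mem ref.fa {input} | samtools view -b - > {output}"
```

Snakemake infers the wildcard values by matching a requested output file against the output pattern, then substitutes them into the input pattern to build the dependency graph.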

  16. Analyzing the Gap between Workflows and their Natural Language Descriptions

    NARCIS (Netherlands)

    Groth, P.T.; Gil, Y

    2009-01-01

    Scientists increasingly use workflows to represent and share their computational experiments. Because of their declarative nature, focus on pre-existing component composition and the availability of visual editors, workflows provide a valuable start for creating user-friendly environments for end

  17. Modelling and analysis of workflow for lean supply chains

    Science.gov (United States)

    Ma, Jinping; Wang, Kanliang; Xu, Lida

    2011-11-01

    Cross-organisational workflow systems are a component of enterprise information systems which support collaborative business processes among organisations in a supply chain. Currently, the majority of workflow systems are developed from the perspective of information modelling, without considering the actual requirements of supply chain management. In this article, we focus on the modelling and analysis of cross-organisational workflow systems in the context of the lean supply chain (LSC) using Petri nets. First, the article describes the assumed conditions of a cross-organisational workflow net according to the idea of the LSC and then discusses the standardisation of collaborative business processes between organisations in the context of the LSC. Second, the concept of labelled time Petri nets (LTPNs) is defined by combining labelled Petri nets with time Petri nets, and the concept of labelled time workflow nets (LTWNs) is defined based on LTPNs. Cross-organisational labelled time workflow nets (CLTWNs) are then defined based on LTWNs. Third, the article proposes the notion of OR-silent CLTWNs and a verifying approach to the soundness of LTWNs and CLTWNs. Finally, this article illustrates how to use the proposed method by a simple example. The purpose of this research is to establish a formal method for the modelling and analysis of workflow systems for LSCs. This study initiates a new perspective of research on cross-organisational workflow management and promotes operation management of LSCs in real-world settings.
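The workflow-net machinery underlying LTWNs builds on the basic Petri-net firing rule. A minimal untimed, 1-safe sketch in Python, with hypothetical task names (the labels and time intervals of the paper's LTPNs are omitted):

```python
# workflow net as transition -> (preset, postset); i is the source place,
# o the sink place, so a sound run ends with a single token on o
NET = {
    "register": ({"i"}, {"p1"}),
    "examine":  ({"p1"}, {"p2"}),
    "archive":  ({"p2"}, {"o"}),
}

def enabled(marking, t):
    pre, _ = NET[t]
    return pre <= marking          # all input places hold a token

def fire(marking, t):
    pre, post = NET[t]
    assert enabled(marking, t), f"{t} not enabled"
    return (marking - pre) | post  # consume preset, produce postset

marking = {"i"}                    # initial marking: one token on i
for t in ("register", "examine", "archive"):
    marking = fire(marking, t)
```

Soundness analyses of workflow nets check exactly this kind of reachability: every run from {i} should terminate in {o} with no tokens left behind.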

  18. Two-Layer Transaction Management for Workflow Management Applications

    NARCIS (Netherlands)

    Grefen, P.W.P.J.; Vonk, J.; Boertjes, E.M.; Apers, Peter M.G.

    Workflow management applications require advanced transaction management that is not offered by traditional database systems. For this reason, a number of extended transaction models have been proposed in the past. None of these models seems completely adequate, though, because workflow management

  19. Parametric Room Acoustic workflows with real-time acoustic simulation

    DEFF Research Database (Denmark)

    Parigi, Dario

    2017-01-01

    The paper investigates and assesses the opportunities that real-time acoustic simulation offers to engage in a parametric acoustics workflow and to influence architectural designs from early design stages.

  20. Open source workflow : a viable direction for BPM?

    NARCIS (Netherlands)

    Wohed, P.; Russell, N.C.; Hofstede, ter A.H.M.; Andersson, B.; Aalst, van der W.M.P.; Bellahsène, Z.; Léonard, M.

    2008-01-01

    With the growing interest in open source software in general and business process management and workflow systems in particular, it is worthwhile investigating the state of open source workflow management. The plethora of these offerings (recent surveys such as [4,6], each contain more than 30 such

  1. Distributed Global Transaction Support for Workflow Management Applications

    NARCIS (Netherlands)

    Vonk, J.; Grefen, P.W.P.J.; Boertjes, E.M.; Apers, Peter M.G.

    Workflow management systems require advanced transaction support to cope with their inherently long-running processes. The recent trend to distribute workflow executions requires an even more advanced transaction support system that is able to handle distribution. This paper presents a model as well

  2. A Collaborative Workflow for the Digitization of Unique Materials

    Science.gov (United States)

    Gueguen, Gretchen; Hanlon, Ann M.

    2009-01-01

    This paper examines the experience of one institution, the University of Maryland Libraries, as it made organizational efforts to harness existing workflows and to capture digitization done in the course of responding to patron requests. By examining the way this organization adjusted its existing workflows to put in place more systematic methods…

  3. "Intelligent" tools for workflow process redesign : a research agenda

    NARCIS (Netherlands)

    Netjes, M.; Vanderfeesten, I.T.P.; Reijers, H.A.; Bussler, C.; Haller, A.

    2006-01-01

    Although much attention is being paid to business processes during the past decades, the design of business processes and particularly workflow processes is still more art than science. In this workshop paper, we present our view on modeling methods for workflow processes and introduce our research

  4. Workflow automation based on OSI job transfer and manipulation

    NARCIS (Netherlands)

    van Sinderen, Marten J.; Joosten, Stef M.M.; Guareis de farias, Cléver

    1999-01-01

    This paper shows that Workflow Management Systems (WFMS) and a data communication standard called Job Transfer and Manipulation (JTM) are built on the same concepts, even though different words are used. The paper analyses the correspondence of workflow concepts and JTM concepts. Besides, the

  5. From Paper Based Clinical Practice Guidelines to Declarative Workflow Management

    DEFF Research Database (Denmark)

    Lyng, Karen Marie; Hildebrandt, Thomas; Mukkamala, Raghava Rao

    2009-01-01

    a sub workflow can be described in a declarative workflow management system: the Resultmaker Online Consultant (ROC). The example demonstrates that declarative primitives allow to naturally extend the paper based flowchart to an executable model without introducing a complex cyclic control flow graph....

  6. Decentralized data systems - results and recommendations

    International Nuclear Information System (INIS)

    Parr, V.B.; McCullough, L.D.; Tashjian, B.M.; Shirley, R.D.

    1982-05-01

    A Decentralized Data System (DDS) is defined as a utility, industry, or regulatory agency data system dealing with power plant performance. DDSs in use or planned for use on an industry, regional, or utility basis have not been studied in sufficient detail to identify methods of coordinating them with the Information System for Generation Availability (ISGA), which is under development. A survey of utility, industry, and regulatory agency DDSs was made by Southwest Research Institute. Information was gathered on twelve utility data systems, two industry data systems, two regulatory agency data systems and one government-owned utility data system. The objectives of this study are to identify existing DDSs that are potential candidates for integration into the ISGA, and to identify methods by which that integration can be accomplished. A matrix of the data elements and formats was prepared for the data systems, which allowed comparison to determine which DDSs were potential candidates for integration into the ISGA. Utility data systems emphasize outage data collection. A composite of GADS, NPRDS, and piece-part data from Utah Power and Light encompasses nearly all data elements identified in the survey. Of the computer system configurations considered potentially viable for the ISGA (integrated, centralized, interfaced, and distributed), the authors believe the centralized system for data retrieval is the least expensive to implement and the most acceptable to the users. An in-depth study of ISGA hardware/software options is the subject of another EPRI contract. The information and system configuration overviews presented in this report will support that effort

  7. A Multi-Dimensional Classification Model for Scientific Workflow Characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Ramakrishnan, Lavanya; Plale, Beth

    2010-04-05

    Workflows have been used to model repeatable tasks or operations in manufacturing, business processes, and software. In recent years, workflows have been increasingly used for the orchestration of science discovery tasks that use distributed resources and web services environments through resource models such as grid and cloud computing. Workflows have disparate requirements and constraints that affect how they might be managed in distributed environments. In this paper, we present a multi-dimensional classification model illustrated by workflow examples obtained through a survey of scientists from different domains, including bioinformatics and biomedical, weather and ocean modeling, and astronomy, detailing their data and computational requirements. The survey results and classification model contribute to a high-level understanding of scientific workflows.

  8. A practical workflow for making anatomical atlases for biological research.

    Science.gov (United States)

    Wan, Yong; Lewis, A Kelsey; Colasanto, Mary; van Langeveld, Mark; Kardon, Gabrielle; Hansen, Charles

    2012-01-01

    The anatomical atlas has been at the intersection of science and art for centuries. These atlases are essential to biological research, but high-quality atlases are often scarce. Recent advances in imaging technology have made high-quality 3D atlases possible. However, until now there has been a lack of practical workflows using standard tools to generate atlases from images of biological samples. With certain adaptations, CG artists' workflow and tools, traditionally used in the film industry, are practical for building high-quality biological atlases. Researchers have developed a workflow for generating a 3D anatomical atlas using accessible artists' tools. They used this workflow to build a mouse limb atlas for studying the musculoskeletal system's development. This research aims to raise the awareness of using artists' tools in scientific research and promote interdisciplinary collaborations between artists and scientists. This video (http://youtu.be/g61C-nia9ms) demonstrates a workflow for creating an anatomical atlas.

  9. Federated Database Services for Wind Tunnel Experiment Workflows

    Directory of Open Access Journals (Sweden)

    A. Paventhan

    2006-01-01

    Full Text Available Enabling the full life cycle of scientific and engineering workflows requires robust middleware and services that support effective data management, near-realtime data movement and custom data processing. Many existing solutions exploit the database as a passive metadata catalog. In this paper, we present an approach that makes use of federation of databases to host data-centric wind tunnel application workflows. The user is able to compose customized application workflows based on database services. We provide a reference implementation that leverages typical business tools and technologies: Microsoft SQL Server for database services and Windows Workflow Foundation for workflow services. The application data and user's code are both hosted in federated databases. With the growing interest in XML Web Services in scientific Grids, and with databases beginning to support native XML types and XML Web services, we can expect the role of databases in scientific computation to grow in importance.

  10. Centralization or decentralization of facial structures in Korean young adults.

    Science.gov (United States)

    Yoo, Ja-Young; Kim, Jeong-Nam; Shin, Kang-Jae; Kim, Soon-Heum; Choi, Hyun-Gon; Jeon, Hyun-Soo; Koh, Ki-Seok; Song, Wu-Chul

    2013-05-01

    It is well known that facial beauty is dictated by facial type, and harmony between the eyes, nose, and mouth. Furthermore, facial impression is judged according to the overall facial contour and the relationship between the facial structures. The aims of the present study were to determine the optimal criteria for the assessment of gathering or separation of the facial structures and to define standardized ratios for centralization or decentralization of the facial structures. Four different lengths were measured, and 2 indexes were calculated from standardized photographs of 551 volunteers. Centralization and decentralization were assessed using the width index (interpupillary distance / facial width) and height index (eyes-mouth distance / facial height). The mean ranges of the width index and height index were 42.0 to 45.0 and 36.0 to 39.0, respectively. The width index did not differ with sex, but vertically, males had more decentralized faces and females had more centralized faces. The incidence rate of decentralized faces among the men was 30.3%, and that of centralized faces among the women was 25.2%. The mean ranges of the width and height indexes have been determined in a Korean population. Faces with width and height index scores under and over the median ranges are determined to be "centralized" and "decentralized," respectively.
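The two indexes can be computed directly. The sketch below assumes the reported ranges (42.0-45.0 and 36.0-39.0) are distance ratios scaled by 100, and the measurements fed in are hypothetical:

```python
def classify_face(face_width, facial_height, interpupillary, eyes_mouth):
    # assumption: the paper's indexes are ratios * 100, so the reported
    # mean ranges 42.0-45.0 (width) and 36.0-39.0 (height) apply directly
    width_index = 100.0 * interpupillary / face_width
    height_index = 100.0 * eyes_mouth / facial_height

    def band(value, lo, hi):
        if value < lo:
            return "centralized"    # structures gathered toward the center
        if value > hi:
            return "decentralized"  # structures spread apart
        return "average"

    return band(width_index, 42.0, 45.0), band(height_index, 36.0, 39.0)

# hypothetical measurements in millimetres
verdict = classify_face(face_width=140, facial_height=180,
                        interpupillary=58, eyes_mouth=72)
```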

  11. Radiology information system: a workflow-based approach

    International Nuclear Information System (INIS)

    Zhang, Jinyan; Lu, Xudong; Nie, Hongchao; Huang, Zhengxing; Aalst, W.M.P. van der

    2009-01-01

    Introducing workflow management technology in healthcare seems promising for dealing with the problem that current healthcare information systems cannot provide sufficient support for process management, although several challenges still exist. The purpose of this paper is to study the method of developing a workflow-based information system in the radiology department as a use case. First, a workflow model of the typical radiology process was established. Second, based on the model, the system could be designed and implemented as a group of loosely coupled components. Each component corresponded to one task in the process and could be assembled by the workflow management system. The legacy systems could be taken as special components, which also corresponded to tasks and were integrated by transferring non-workflow-aware interfaces to standard ones. Finally, a workflow dashboard was designed and implemented to provide an integral view of radiology processes. The workflow-based Radiology Information System was deployed in the radiology department of Zhejiang Chinese Medicine Hospital in China. The results showed that it could be adjusted flexibly in response to the needs of a changing process, and enhance process management in the department. It can also provide a more workflow-aware integration method, compared with other methods such as IHE-based ones. The workflow-based approach is a new method of developing a radiology information system with more flexibility, more process management functionality and more workflow-aware integration. The work of this paper is an initial endeavor toward introducing workflow management technology in healthcare. (orig.)
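The loosely coupled component design described above can be caricatured in a few lines: tasks are interchangeable callables assembled in process order, and a legacy call is wrapped behind the same standard interface. Task names and behavior are invented for illustration:

```python
# each task component takes and returns a shared process context
def registration(ctx):
    ctx["patient"] = "registered"
    return ctx

def acquisition(ctx):
    ctx["images"] = ["series-1"]
    return ctx

def legacy_reporting_adapter(ctx):
    # stands in for a non-workflow-aware legacy system wrapped
    # behind the same standard task interface
    ctx["report"] = f"{len(ctx['images'])} series reported"
    return ctx

# the workflow engine assembles components in process order
PROCESS = [registration, acquisition, legacy_reporting_adapter]

def run(process):
    ctx = {}
    for task in process:
        ctx = task(ctx)
    return ctx

result = run(PROCESS)
```

Because every component, legacy or not, honors the same interface, the process definition can be reordered or extended without touching the components themselves.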

  12. Distributed execution of aggregated multi domain workflows using an agent framework

    NARCIS (Netherlands)

    Zhao, Z.; Belloum, A.; de Laat, C.; Adriaans, P.; Hertzberger, B.; Zhang, L.J.; Watson, T.J.; Yang, J.; Hung, P.C.K.

    2007-01-01

    In e-Science, meaningful experiment processes and workflow engines emerge as important scientific resources. A complex experiment often involves services and processes developed in different scientific domains. Aggregating different workflows into one meta workflow avoids unnecessary rewriting of

  13. Alternative Work Schedules: Definitions

    Science.gov (United States)

    Journal of the College and University Personnel Association, 1977

    1977-01-01

    The term "alternative work schedules" encompasses any variation of the requirement that all permanent employees in an organization or one shift of employees adhere to the same five-day, seven-to-eight-hour schedule. This article defines staggered hours, flexible working hours (flexitour and gliding time), compressed work week, the task system, and…

  14. Range Scheduling Aid (RSA)

    Science.gov (United States)

    Logan, J. R.; Pulvermacher, M. K.

    1991-01-01

    Range Scheduling Aid (RSA) is presented in the form of viewgraphs. The following subject areas are covered: satellite control network; current and new approaches to range scheduling; MITRE tasking; RSA features; RSA display; constraint based analytic capability; RSA architecture; and RSA benefits.

  15. The triangle scheduling problem

    NARCIS (Netherlands)

    Dürr, Christoph; Hanzálek, Zdeněk; Konrad, Christian; Seddik, Yasmina; Sitters, R.A.; Vásquez, Óscar C.; Woeginger, Gerhard

    2017-01-01

    This paper introduces a novel scheduling problem, where jobs occupy a triangular shape on the time line. This problem is motivated by scheduling jobs with different criticality levels. A measure is introduced, namely the binary tree ratio. It is shown that the Greedy algorithm solves the problem to

  16. A Retrospective Analysis of the Development of Fiscal Decentralization

    Directory of Open Access Journals (Sweden)

    Rekova Nataliia Yu.

    2017-12-01

    Full Text Available The study forms the theoretical basis for the implementation of fiscal decentralization in Ukraine by determining the correspondence between the evolution of scientific approaches to the formation of an effective model of public administration and the degree of centralization of power at a particular stage of the development of society. The views of thinkers of the ancient states of Egypt, Mesopotamia, India, China, Rome, and Greece are generalized, showing that centralized public administration predominated, without differentiation among forms of centralization. The degree of centralization in the period of development of feudal states is characterized. The scientific views of representatives of the neoinstitutional direction of economic thought are analyzed in detail, and the stages of the formation of decentralization, in particular fiscal decentralization, as a separate theory are defined. The stages of, and the corresponding organizational and legislative documents for, the implementation of decentralization in Ukraine are outlined, and its results are characterized.

  17. FISCAL DECENTRALIZATION DETERMINANTS AND LOCAL ECONOMIC DEVELOPMENT IN EU COUNTRIES

    Directory of Open Access Journals (Sweden)

    Anca Florentina GAVRILUŢĂ (VATAMANU)

    2017-12-01

    Full Text Available This work aims to assess the impact of fiscal decentralization on local (regional) development in the EU Member States while controlling for macroeconomic and local autonomy specific factors. Using a panel data approach with dynamic effects, we examined the implications of fiscal decentralization for local development across European Union countries over the 1990-2004 period. The novelty of the study lies in including in the analysis a variable which tests local fiscal discipline, namely the Fiscal Rule Strength Index for the local level of government. Our findings suggest that the prosperity of regions, measured in GDP growth, depends on variables such as the characteristics of the decentralization undertaken by each country and local fiscal discipline, confirming our primary hypothesis. This supports the view that recently implemented reforms aiming to enforce fiscal discipline, following up on the Fiscal Compact, strengthened the local budgetary framework and therefore restrained the local discretionary power to act towards development.

  18. The Bases of Federalism and Decentralization in Education

    Directory of Open Access Journals (Sweden)

    Carlos Ornelas

    2003-05-01

    Full Text Available This essay uses Weberian ideal types to define the conceptual bases of federalism and the decentralization of education. Classic federalism, fictitious federalism (corporatism), and the origins and the indigenous version of the new federalism are discussed. We conclude that Mexican constitutional federalism is baroque and ambiguous. Based on theory and the experiences of various countries, bureaucratic centralism and its main characteristics are defined. As a contrast, a typology of educational decentralization is developed. Its political, judicial and administrative definitions are taken into account, and a distinction is made between delegation and decentralization. It is argued that with the signing of the Agreement for the Modernization of Basic Education, the Mexican government sought to increase its legitimacy without losing control of education.

  19. Decentralized Economic Dispatch Scheme With Online Power Reserve for Microgrids

    DEFF Research Database (Denmark)

    Nutkani, I. U.; Loh, Poh Chiang; Wang, P.

    2017-01-01

    Decentralized economic operation schemes have several advantages when compared with the traditional centralized management system for microgrids. Specifically, decentralized schemes are more flexible, less computationally intensive, and easier to implement without relying on communication infrastructure. Economic operation of existing decentralized schemes is also usually achieved by either tuning the droop characteristics of distributed generators (DGs) or prioritizing their dispatch order. For the latter, an earlier scheme has tried to prioritize the DG dispatch based on their no-load costs, their power ratings, and other necessary constraints, before deciding the DG dispatch priorities and droop characteristics. The proposed scheme also allows online power reserve to be set and regulated within the microgrid. This, together with the generation cost saved, has been verified...
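A merit-order caricature of priority-based dispatch with a power reserve: the cheapest DGs serve the load first, and extra units stay online (at zero output) until spare rated capacity covers the reserve. Unit names and costs are hypothetical, and this is a sketch of the dispatch-priority idea, not the paper's droop-based implementation:

```python
def dispatch(load, reserve, dgs):
    """dgs: list of (name, marginal_cost, rating); returns (name, output) pairs."""
    plan, supplied, capacity_online = [], 0.0, 0.0
    for name, marginal_cost, rating in sorted(dgs, key=lambda d: d[1]):
        # stop once the load is met and spare online capacity covers the reserve
        if supplied >= load and capacity_online - supplied >= reserve:
            break
        output = min(rating, max(0.0, load - supplied))
        plan.append((name, output))
        supplied += output
        capacity_online += rating
    return plan

# hypothetical units: (name, $/kWh, kW rating)
plan = dispatch(load=60.0, reserve=20.0,
                dgs=[("diesel", 0.30, 50.0),
                     ("turbine", 0.20, 40.0),
                     ("fuel_cell", 0.10, 30.0)])
```

Here the diesel unit is committed at zero output purely to provide the online reserve, which is the role of the reserve regulation described in the abstract.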

  20. Decentralized Interleaving of Paralleled Dc-Dc Buck Converters: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Brian B [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Rodriguez, Miguel [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sinha, Mohit [University of Minnesota; Dhople, Sairaj [University of Minnesota; Poon, Jason [University of California at Berkeley

    2017-09-01

    We present a decentralized control strategy that yields switch interleaving among parallel connected dc-dc buck converters without communication. The proposed method is based on the digital implementation of the dynamics of a nonlinear oscillator circuit as the controller. Each controller is fully decentralized, i.e., it only requires the locally measured output current to synthesize the pulse width modulation (PWM) carrier waveform. By virtue of the intrinsic electrical coupling between converters, the nonlinear oscillator-based controllers converge to an interleaved state with uniform phase-spacing across PWM carriers. To the knowledge of the authors, this work represents the first fully decentralized strategy for switch interleaving of paralleled dc-dc buck converters.

  1. The Dynamics of Decentralization Arrangements in Indonesia Constitutional System

    Directory of Open Access Journals (Sweden)

    Haposan Siallagan

    2016-06-01

    Full Text Available Local autonomy has long been implemented in Indonesia and has passed through a number of phases within the governmental system. This article examines the dynamics of decentralization arrangements. The discussion shows that, according to the substance of the various decentralization policies that have been issued, local autonomy arrangements tend to be framed in a broad sense, frequently known as the broadest local autonomy. Through the local autonomy mechanism, local governments are given flexibility to manage and administer their own domestic household. To make the most of this broad local autonomy, local governments must be well prepared to handle the many tasks of local government. Such preparation concerns human resource capacity, competence in carrying out the tasks, and financial management capacity.

  2. Decentralized Optimization for a Novel Control Structure of HVAC System

    Directory of Open Access Journals (Sweden)

    Shiqiang Wang

    2016-01-01

    Full Text Available A decentralized control structure is introduced into the heating, ventilation, and air conditioning (HVAC) system to address the high maintenance and labor costs encountered in engineering practice. Based on this new control structure, a decentralized optimization method is presented for sensor fault repair and optimal group control of HVAC equipment. The convergence of the method is theoretically analyzed for both convex and nonconvex constrained systems. In this decentralized control system, each traditional device is fitted with a control chip so that it becomes a smart device. The smart devices can communicate and operate collaboratively to accomplish designated tasks. The effectiveness of the presented method is verified by simulations and hardware tests.

  3. Decentralized control of discrete-time linear time invariant systems with input saturation

    NARCIS (Netherlands)

    Deliu, Ciprian; Deliu, C.; Malek, Babak; Roy, Sandip; Saberi, Ali; Stoorvogel, Antonie Arij

    2009-01-01

    We study decentralized stabilization of discrete-time linear time-invariant (LTI) systems subject to actuator saturation, using LTI controllers. The requirement of stabilization under both saturation constraints and decentralization imposes obvious necessary conditions on the open-loop plant, namely

  4. A set of decentralized PID controllers for an n – link robot manipulator

    Indian Academy of Sciences (India)

    The solution of decentralized tracking control problem for robot manipulator is slightly complex since we .... Figure 1 shows decentralized control scheme for the ith joint of system (10). ...... Automatic Control 49(11): 2081–2084. Gahinet P ...

  5. Enforcement and Environmental Quality in a Decentralized Emission Trading System

    Energy Technology Data Exchange (ETDEWEB)

    D'Amato, Alessio (Univ. of Rome 'Tor Vergata', Rome (Italy)); Valentini, Edilio (Univ. G. D'Annunzio di Chieti-Pescara, DEST, Fac. di Economia, Pescara (Italy))

    2008-07-01

    This paper addresses the issue of whether the powers of monitoring compliance and allocating allowances under emissions trading within an economic union should be centralized or delegated to single states. To this end, we develop a two-stage game played by two governments, which choose allowances and the monitoring effort needed to achieve full compliance, and their respective polluting industries. We show that a cost advantage in favor of national states is not sufficient to justify decentralization. Nevertheless, a cost differential in monitoring violations can imply lower emissions and greater welfare under a decentralized institutional setting than under a centralized one.

  6. Staffing, qualification and organization for centralized and decentralized training

    International Nuclear Information System (INIS)

    Holyoak, R.H.

    1985-01-01

    This paper covers an extensive area. First a brief history of the training at Commonwealth Edison is presented so that the reader can get some idea of why some of the problems mentioned exist. Next is a discussion of the centralized and decentralized Commonwealth Edison production training organization. A brief review of the development of the Instructor Qualification Program and the training of instructors follows. Finally, a review of the problems and some solutions related to managing a centralized/decentralized training system is included.

  7. Workflow Lexicons in Healthcare: Validation of the SWIM Lexicon.

    Science.gov (United States)

    Meenan, Chris; Erickson, Bradley; Knight, Nancy; Fossett, Jewel; Olsen, Elizabeth; Mohod, Prerna; Chen, Joseph; Langer, Steve G

    2017-06-01

    For clinical departments seeking to successfully navigate the challenges of modern health reform, obtaining access to operational and clinical data to establish and sustain goals for improving quality is essential. More broadly, health delivery organizations are also seeking to understand performance across multiple facilities, often across multiple electronic medical record (EMR) systems. Interpreting operational data across multiple vendor systems can be challenging, as various manufacturers may describe the same departmental workflow steps in different ways, and usage can vary even within a single vendor's installed customer base. In 2012, the Society for Imaging Informatics in Medicine (SIIM) recognized the need for better quality and performance data standards and formed SIIM's Workflow Initiative for Medicine (SWIM), an initiative designed to describe workflow steps in radiology departments consistently and to define operational quality metrics. The SWIM lexicon was published as a working model to describe operational workflow steps and quality measures. We measured the prevalence of the SWIM lexicon workflow steps in both academic and community radiology environments using real-world patient observations and correlated that information with automatically captured workflow steps from our clinical information systems. Our goal was to measure the frequency of occurrence of workflow steps identified by the SWIM lexicon in a real-world clinical setting, as well as to assess how accurately departmental information systems captured patient flow through our health facility.
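
    The frequency measurement described above can be sketched as a simple count of lexicon steps in a captured event log; the step names below are hypothetical, not the actual SWIM terms.

```python
from collections import Counter

# Sketch of measuring the prevalence of lexicon workflow steps in a
# captured event log. The lexicon terms and log entries here are
# hypothetical, not the actual SWIM lexicon.

LEXICON = {"ordered", "scheduled", "patient arrived", "exam begun",
           "exam completed", "report signed"}

def step_frequencies(event_log):
    """event_log: iterable of (timestamp, step_name) pairs."""
    counts = Counter(step for _, step in event_log
                     if step in LEXICON)          # ignore non-lexicon events
    total = sum(counts.values())
    return {step: counts[step] / total for step in counts}

log = [("08:00", "ordered"), ("08:05", "scheduled"),
       ("09:00", "patient arrived"), ("09:10", "exam begun"),
       ("09:40", "exam completed"),
       ("09:41", "coffee break"),                 # not in the lexicon: dropped
       ("10:30", "report signed"), ("11:00", "ordered")]

print(step_frequencies(log))
```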

  8. Examining daily activity routines of older adults using workflow.

    Science.gov (United States)

    Chung, Jane; Ozkaynak, Mustafa; Demiris, George

    2017-07-01

    We evaluated the value of workflow analysis, supported by a novel visualization technique, for better understanding the daily routines of older adults and highlighting their patterns of daily activities and the normal variability in their physical functions. We used a self-reported activity diary to obtain data from six community-dwelling older adults for 14 consecutive days. Workflow for the daily routine was analyzed using the EventFlow tool, which aggregates workflow information to highlight patterns and variability. A total of 1453 events were included in the data analysis. To demonstrate the patterns and variability of each individual's daily activities, participant activity workflows were visualized and compared. The workflow analysis revealed great variability in activity types, regularity, frequency, duration, and the timing of certain activities across individuals. Also, when the workflow approach was applied to the spatial information of activities, the analysis proved able to provide meaningful data on individuals' mobility across different levels of life space, from home to community. The results suggest that using workflows to characterize the daily activities of older adults will help clinicians and researchers understand their daily routines and prepare education and prevention strategies tailored to each individual's activity level. This tool also has the potential to be integrated into consumer informatics technologies, such as patient portals or personal health records, so that consumers may be encouraged to become actively involved in monitoring and managing their health. Copyright © 2017 Elsevier Inc. All rights reserved.
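
    The kind of variability the analysis surfaces can be sketched by summarizing, per activity, when it starts and how much that start time varies across days; the diary entries below are hypothetical.

```python
from statistics import mean, stdev

# Sketch of the variability analysis described above: for one
# participant's diary, summarise when each activity starts and how much
# that start time varies. Activity names and times are hypothetical.

def start_time_variability(diary):
    """diary: list of (day, activity, start_hour) entries."""
    by_activity = {}
    for _, activity, hour in diary:
        by_activity.setdefault(activity, []).append(hour)
    return {a: (round(mean(h), 2), round(stdev(h), 2) if len(h) > 1 else 0.0)
            for a, h in by_activity.items()}

diary = [(d, "breakfast", h) for d, h in enumerate([7.0, 7.5, 7.0, 8.0])] + \
        [(d, "walk", h) for d, h in enumerate([9.0, 14.0, 10.5, 16.0])]

# breakfast is regular (small spread); the walk time varies widely
print(start_time_variability(diary))
```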

  9. A Tool Supporting Collaborative Data Analytics Workflow Design and Management

    Science.gov (United States)

    Zhang, J.; Bao, Q.; Lee, T. J.

    2016-12-01

    Collaborative experiment design could significantly enhance the sharing and adoption of the data analytics algorithms and models emerging in Earth science. Existing data-oriented workflow tools, however, are not suitable for supporting collaborative design of such workflows: among other gaps, they do not support real-time co-design; they cannot track how a workflow evolves over time based on changing designs contributed by multiple Earth scientists; and they cannot capture and retrieve collaboration knowledge on workflow design (the discussions that lead to a design). To address these challenges, we have designed and developed a technique supporting collaborative data-oriented workflow composition and management, as a key component toward supporting big data collaboration through the Internet. Reproducibility and scalability are two major targets demanding fundamental infrastructural support. One outcome of the project is a software tool that supports an elastic number of groups of Earth scientists collaboratively designing and composing data analytics workflows through the Internet. Instead of reinventing the wheel, we have extended an existing workflow tool, VisTrails, into an online collaborative environment as a proof of concept.

  10. Deploying and sharing U-Compare workflows as web services.

    Science.gov (United States)

    Kontonatsios, Georgios; Korkontzelos, Ioannis; Kolluru, Balakrishna; Thompson, Paul; Ananiadou, Sophia

    2013-02-18

    U-Compare is a text mining platform that allows the construction, evaluation and comparison of text mining workflows. U-Compare contains a large library of components that are tuned to the biomedical domain. Users can rapidly develop biomedical text mining workflows by mixing and matching U-Compare's components. Workflows developed using U-Compare can be exported and sent to other users who, in turn, can import and re-use them. However, the resulting workflows are standalone applications, i.e., software tools that run and are accessible only via a local machine, and that can only be run with the U-Compare platform. We address the above issues by extending U-Compare to convert standalone workflows into web services automatically, via a two-click process. The resulting web services can be registered on a central server and made publicly available. Alternatively, users can make web services available on their own servers, after installing the web application framework, which is part of the extension to U-Compare. We have performed a user-oriented evaluation of the proposed extension, by asking users who have tested the enhanced functionality of U-Compare to complete questionnaires that assess its functionality, reliability, usability, efficiency and maintainability. The results obtained reveal that the new functionality is well received by users. The web services produced by U-Compare are built on top of open standards, i.e., REST and SOAP protocols, and therefore, they are decoupled from the underlying platform. Exported workflows can be integrated with any application that supports these open standards. We demonstrate how the newly extended U-Compare enhances the cross-platform interoperability of workflows, by seamlessly importing a number of text mining workflow web services exported from U-Compare into Taverna, i.e., a generic scientific workflow construction platform.
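
    The general idea of exposing a standalone workflow as a REST-style web service can be sketched with Python's standard library (this is a generic illustration, not U-Compare's actual export mechanism; the tokenizing "workflow" is a stand-in).

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Generic sketch of wrapping a standalone text-mining workflow as a
# REST-style web service (NOT U-Compare's actual export mechanism).
# The "workflow" here is a stand-in whitespace tokeniser.

def workflow(text):
    tokens = text.split()
    return {"tokens": tokens, "token_count": len(tokens)}

class WorkflowHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        text = self.rfile.read(length).decode("utf-8")
        body = json.dumps(workflow(text)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), WorkflowHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_port
reply = urllib.request.urlopen(url, data=b"protein binding site").read()
print(json.loads(reply))
server.shutdown()
```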

  11. NASA Schedule Management Handbook

    Science.gov (United States)

    2011-01-01

    The purpose of schedule management is to provide the framework for time-phasing, resource planning, coordination, and communicating the necessary tasks within a work effort. The intent is to improve schedule management by providing recommended concepts, processes, and techniques used within the Agency and private industry. The intended function of this handbook is two-fold: first, to provide guidance for meeting the scheduling requirements contained in NPR 7120.5, NASA Space Flight Program and Project Management Requirements, NPR 7120.7, NASA Information Technology and Institutional Infrastructure Program and Project Requirements, NPR 7120.8, NASA Research and Technology Program and Project Management Requirements, and NPD 1000.5, Policy for NASA Acquisition. The second function is to describe the schedule management approach and the recommended best practices for carrying out this project control function. With regards to the above project management requirements documents, it should be noted that those space flight projects previously established and approved under the guidance of prior versions of NPR 7120.5 will continue to comply with those requirements until project completion has been achieved. This handbook will be updated as needed, to enhance efficient and effective schedule management across the Agency. It is acknowledged that most, if not all, external organizations participating in NASA programs/projects will have their own internal schedule management documents. Issues that arise from conflicting schedule guidance will be resolved on a case by case basis as contracts and partnering relationships are established. It is also acknowledged and understood that all projects are not the same and may require different levels of schedule visibility, scrutiny and control. Project type, value, and complexity are factors that typically dictate which schedule management practices should be employed.

  12. Distributed Workflow Service Composition Based on CTR Technology

    Science.gov (United States)

    Feng, Zhilin; Ye, Yanming

    Recently, WS-BPEL has gradually become the basis of a standard for web service description and composition. However, WS-BPEL cannot adequately describe distributed workflow services, because it lacks the necessary expressive power and formal semantics. This paper presents a novel method for modeling distributed workflow service composition with Concurrent TRansaction logic (CTR). The syntactic structures of WS-BPEL and CTR are analyzed, and new rules for mapping WS-BPEL into CTR are given. A case study shows that the proposed method is appropriate for modeling workflow business services in distributed environments.

  13. CMS Alignment and Calibration workflows: lessons learned and future plans

    CERN Document Server

    AUTHOR|(CDS)2069172

    2014-01-01

    We review the online and offline workflows designed to align and calibrate the CMS detector. Starting from the experience gained during the first LHC run, we discuss the expected developments for Run II. In particular, we describe the envisioned stages, from alignment using cosmic-ray data to detector alignment and calibration using the first proton-proton collision data ( O(100 pb-1) ) and a larger dataset ( O(1 fb-1) ) to reach the target precision. The automation of the workflows and their integration in the online and offline activity (dedicated triggers and datasets, data skims, workflows to compute the calibration and alignment constants) are discussed.

  14. What is needed for effective open access workflows?

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Institutions and funders are pushing open access forward with ever-new guidelines and policies. Since institutional repositories are important maintainers of green open access, they should support easy and fast workflows for researchers and libraries to release publications. Based on the requirements of researchers, libraries and publishers, possible supporting software extensions are discussed. What does a typical workflow look like? What has to be considered by the researchers, and by the editors in the library, before releasing a green open access publication? Where and how can software support and improve existing workflows?

  15. EDMS based workflow for Printing Industry

    Directory of Open Access Journals (Sweden)

    Prathap Nayak

    2013-04-01

    Full Text Available Information is an indispensable factor in any enterprise. It can be a record or a document generated for every transaction that is made, kept either on paper or in electronic format for future reference. The Printing Industry is one in which managing information of various formats, across the latest workflows and technologies, can be a challenge for any operator or user, since every process, from the smallest piece of information to the printed product, depends on the others. Hence the information has to be harmonized carefully in order to avoid production downtime or employees pointing fingers at each other. This paper analyses how the implementation of an Electronic Document Management System (EDMS) could give the Printing Industry immediate access to stored documents within and across departments, irrespective of geographical boundaries. The paper opens with a brief history, describes contemporary EDMS systems, and gives illustrated examples from a study that chose the Library as a pilot area for evaluating EDMS. The paper ends with a proposal that maps several document-management activities for the implementation of EDMS in a Printing Industry.

  16. Resilient workflows for computational mechanics platforms

    International Nuclear Information System (INIS)

    Nguyen, Toan; Trifan, Laurentiu; Desideri, Jean-Antoine

    2010-01-01

    Workflow management systems have recently been the focus of much interest and of many research and deployment efforts for scientific applications worldwide. Their ability to abstract applications by wrapping application codes has also underlined the usefulness of such systems for multidiscipline applications. When complex applications need to provide seamless interfaces hiding the technicalities of the computing infrastructures, the high-level modeling, monitoring and execution functionalities of these systems give production teams seamless and effective facilities. Software integration infrastructures based on programming paradigms such as Python, Matlab and Scilab have also provided evidence of the usefulness of such approaches for the tight coupling of multidiscipline application codes. In addition, high-performance computing based on multi-core, multi-cluster infrastructures opens new opportunities for more accurate, more extensive and more robust multi-discipline simulations in the decades to come. This supports the goal of full flight-dynamics simulation for 3D aircraft models within the next decade, opening the way to virtual flight tests and certification of aircraft in the future.

  17. Resilient workflows for computational mechanics platforms

    Science.gov (United States)

    Nguyên, Toàn; Trifan, Laurentiu; Désidéri, Jean-Antoine

    2010-06-01

    Workflow management systems have recently been the focus of much interest and of many research and deployment efforts for scientific applications worldwide [26, 27]. Their ability to abstract applications by wrapping application codes has also underlined the usefulness of such systems for multidiscipline applications [23, 24]. When complex applications need to provide seamless interfaces hiding the technicalities of the computing infrastructures, the high-level modeling, monitoring and execution functionalities of these systems give production teams seamless and effective facilities [25, 31, 33]. Software integration infrastructures based on programming paradigms such as Python, Matlab and Scilab have also provided evidence of the usefulness of such approaches for the tight coupling of multidiscipline application codes [22, 24]. In addition, high-performance computing based on multi-core, multi-cluster infrastructures opens new opportunities for more accurate, more extensive and more robust multi-discipline simulations in the decades to come [28]. This supports the goal of full flight-dynamics simulation for 3D aircraft models within the next decade, opening the way to virtual flight tests and certification of aircraft in the future [23, 24, 29].

  18. Fuzzy-Logic-Based Gain-Scheduling Control for State-of-Charge Balance of Distributed Energy Storage Systems for DC Microgrids

    DEFF Research Database (Denmark)

    Aldana, Nelson Leonardo Diaz; Dragicevic, Tomislav; Vasquez, Juan Carlos

    2014-01-01

    -charge or deep-discharge in one of the energy storage units. Primary control in a microgrid is responsible for power sharing among units; and droop control is typically used in this stage. This paper proposes a modular and decentralized gain-scheduling control strategy based on fuzzy logic that ensures balanced...
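
    The balancing mechanism the record describes can be sketched with a plain proportional weighting in place of the paper's fuzzy gain scheduling: the storage unit with the higher state of charge (SoC) takes the larger share of the load, so the SoCs drift together as the units discharge. All numbers are hypothetical.

```python
# Toy discharge simulation of the SoC-balancing idea (a plain
# proportional weighting here, NOT the paper's fuzzy gain scheduling):
# each unit's share of the load is proportional to its state of charge,
# so the unit with more stored energy discharges faster and the two
# SoCs converge. All numbers are hypothetical.

def simulate_discharge(soc, load_kw=1.0, capacity_kwh=10.0,
                       dt_h=0.01, steps=1000):
    soc = list(soc)
    for _ in range(steps):
        total = sum(soc)
        for i in range(len(soc)):
            share = soc[i] / total              # SoC-weighted power share
            soc[i] -= share * load_kw * dt_h / capacity_kwh
    return soc

start = [0.9, 0.5]
end = simulate_discharge(start)
print([round(s, 3) for s in end],
      "gap: %.3f -> %.3f" % (start[0] - start[1], end[0] - end[1]))
```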

  19. Taking advantage of HTML5 browsers to realize the concepts of session state and workflow sharing in web-tool applications

    Science.gov (United States)

    Suftin, I.; Read, J. S.; Walker, J.

    2013-12-01

    Scientists prefer not having to be tied down to a specific machine or operating system in order to analyze local and remote data sets or publish work. Increasingly, analysis has been migrating to decentralized web services and data sets, using web clients to provide the analysis interface. While simplifying workflow access, analysis, and publishing of data, the move does bring with it its own unique set of issues. Web clients used for analysis typically offer workflows geared towards a single user, with steps and results that are often difficult to recreate and share with others. Furthermore, workflow results often cannot easily be used as input for further analysis. Older browsers further complicate things by having no way to maintain larger chunks of information, often offloading the job of storage to the back-end server or trying to squeeze it into a cookie. It has been difficult to provide a concept of "session storage" or "workflow sharing" without complex back-end orchestration that depends on either a centralized file system or a database. With the advent of HTML5, browsers gained the ability to store more information through the use of the Web Storage API (a browser cookie, by contrast, holds a maximum of 4 kilobytes). Web Storage gives us the ability to store megabytes of arbitrary data in-browser, either with an expiration date or just for a session. This allows scientists to create, update, persist and share their workflow without depending on the back-end to store session information, providing the flexibility for new web-based workflows to emerge. In the DSASWeb portal ( http://cida.usgs.gov/DSASweb/ ), using these techniques, the representation of every step in the analyst's workflow is stored as plain-text serialized JSON, which we can generate as a text file and provide to the analyst as an upload. This file may then be shared with others and loaded back into the application, restoring the application to the state it was in when the session file
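
    The session-file idea can be sketched as a JSON round trip; in the browser the exported text would live in the Web Storage API (localStorage/sessionStorage), while here a plain dict stands in for the store, and the step and parameter names are hypothetical.

```python
import json

# Sketch of the session-sharing idea: each step of an analyst's workflow
# is serialised to plain-text JSON that can be saved, shared, and loaded
# to restore the application state. In the browser this text would live
# in Web Storage; here a plain dict stands in for that store, and the
# step/parameter names are hypothetical.

session = {"steps": [], "version": 1}

def record_step(tool, params):
    session["steps"].append({"tool": tool, "params": params})

record_step("load_shoreline", {"source": "lidar_2012"})
record_step("compute_rates", {"method": "linear_regression"})

exported = json.dumps(session)          # downloadable session file
restored = json.loads(exported)         # re-uploaded by a collaborator
print(len(restored["steps"]), restored["steps"][1]["tool"])
```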

  20. Real-time Energy Resource Scheduling considering a Real Portuguese Scenario

    DEFF Research Database (Denmark)

    Silva, Marco; Sousa, Tiago; Morais, Hugo

    2014-01-01

    The development in power systems and the introduction of decentralized generation and Electric Vehicles (EVs), both connected to distribution networks, represents a major challenge in planning and operation issues. This new paradigm requires a new energy resources management approach which...... scheduling in smart grids, considering day-ahead, hour-ahead and real-time scheduling. The case study considers a 33-bus distribution network with high penetration of distributed energy resources. The wind generation profile is based on a real Portuguese wind farm. Four scenarios are presented...... taking into account 0, 1, 2 and 5 periods (hours or minutes) ahead of the scheduling period in the hour-ahead and real-time scheduling

  1. Physician Fee Schedule Search

    Data.gov (United States)

    U.S. Department of Health & Human Services — This website is designed to provide information on services covered by the Medicare Physician Fee Schedule (MPFS). It provides more than 10,000 physician services,...

  2. Clinical Laboratory Fee Schedule

    Data.gov (United States)

    U.S. Department of Health & Human Services — Outpatient clinical laboratory services are paid based on a fee schedule in accordance with Section 1833(h) of the Social Security Act. The clinical laboratory fee...

  3. CERN confirms LHC schedule

    CERN Document Server

    2003-01-01

    The CERN Council held its 125th session on 20 June. Highlights of the meeting included confirmation that the LHC is on schedule for a 2007 start-up, and the announcement of a new organizational structure in 2004.

  4. DMEPOS Fee Schedule

    Data.gov (United States)

    U.S. Department of Health & Human Services — The list contains the fee schedule amounts, floors, and ceilings for all procedure codes and payment category, jurisdication, and short description assigned to each...

  5. Project Schedule Simulation

    DEFF Research Database (Denmark)

    Mizouni, Rabeb; Lazarova-Molnar, Sanja

    2015-01-01

    overrun both their budget and time. To improve the quality of initial project plans, we show in this paper the importance of (1) reflecting features’ priorities/risk in task schedules and (2) considering uncertainties related to human factors in plan schedules. To make simulation tasks reflect features......’ priority as well as multimodal team allocation, enhanced project schedules (EPS), where remedial actions scenarios (RAS) are added, were introduced. They reflect potential schedule modifications in case of uncertainties and promote a dynamic sequencing of involved tasks rather than the static conventional......
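
    The general technique of simulating a schedule under duration uncertainty can be sketched with a generic Monte Carlo run (not the paper's EPS/RAS model); the task names and durations are hypothetical.

```python
import random

# Generic Monte Carlo sketch of schedule simulation under uncertainty
# (NOT the paper's EPS/RAS model): each task's duration is sampled from
# a triangular distribution and the tasks run in sequence, so project
# completion time becomes a distribution rather than a single number.

TASKS = [  # (name, optimistic, most likely, pessimistic) days; hypothetical
    ("design", 4, 6, 12),
    ("implement", 8, 12, 25),
    ("test", 3, 5, 14),
]

def simulate_completion(runs=10000, seed=7):
    rng = random.Random(seed)
    totals = [sum(rng.triangular(lo, hi, mode) for _, lo, mode, hi in TASKS)
              for _ in range(runs)]
    totals.sort()
    return totals

totals = simulate_completion()
mean = sum(totals) / len(totals)
p90 = totals[int(0.9 * len(totals))]       # 90th-percentile completion time
print("mean %.1f days, 90th percentile %.1f days" % (mean, p90))
```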

  6. Reactor outage schedule (tentative)

    Energy Technology Data Exchange (ETDEWEB)

    Walton, R.P.

    1969-11-01

    This single page document is the November 1, 1969 reactor refueling outage schedule for the Hanford Production Reactor. It also contains data on the amounts and types of fuels to be loaded and relocated in the production reactor.

  7. Reactor outage schedule (tentative)

    Energy Technology Data Exchange (ETDEWEB)

    Walton, R.P.

    1969-10-01

    This single page document is the October 1, 1969 reactor refueling outage schedule for the Hanford Production Reactor. It also contains data on the amounts and types of fuels to be loaded and relocated in the Production Reactor.

  8. Reactor outage schedule (tentative)

    Energy Technology Data Exchange (ETDEWEB)

    Walton, R.P.

    1969-10-15

    This single page document is the October 15, 1969 reactor refueling outage schedule for the Hanford Production Reactor. It also contains data on the amounts and types of fuels to be loaded and relocated in the Production Reactor.

  9. Reactor outage schedule (tentative)

    Energy Technology Data Exchange (ETDEWEB)

    Walton, R.P.

    1969-09-15

    This single page document is the September 15, 1969 reactor refueling outage schedule for the Hanford Production Reactor. It also contains data on the amounts and types of fuels to be loaded and relocated in the Production Reactor.

  10. Reactor outage schedule (tentative)

    Energy Technology Data Exchange (ETDEWEB)

    Walton, R.P.

    1969-12-15

    This single page document is the December 16, 1969 reactor refueling outage schedule for the Hanford Production Reactor. It also contains data on the amounts and types of fuels to be loaded and relocated in the Production reactor.

  11. Reactor outage schedule (tentative)

    Energy Technology Data Exchange (ETDEWEB)

    Walton, R.P.

    1969-12-01

    This single page document is the December 1, 1969 reactor refueling outage schedule for the Hanford Production Reactor. It also contains data on the amounts and types of fuels to be loaded and relocated in the Production reactor.

  12. Fee Schedules - General Information

    Data.gov (United States)

    U.S. Department of Health & Human Services — A fee schedule is a complete listing of fees used by Medicare to pay doctors or other providers-suppliers. This comprehensive listing of fee maximums is used to...

  13. CMS Records Schedule

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Records Schedule provides disposition authorizations approved by the National Archives and Records Administration (NARA) for CMS program-related records...

  14. Decentralization for National development in Nigeria from a ...

    African Journals Online (AJOL)

    National development of a multicultural setting requires a decentralized appropriation of diverse contributions of various constituent subsets. Improvement of multicultural settings calls for social negotiation and economic merger, and compression of individual resources of the various units to enhance egalitarian level of ...

  15. DECENTRALIZATION OF MUNICIPAL SERVICES – LEARNING BY DOING

    Directory of Open Access Journals (Sweden)

    Cristina Elena NICOLESCU

    2017-05-01

    Full Text Available Public services decentralization is a major concern for policy makers when it comes to identifying the optimum model for reorganizing these services, in light of the 3Es of organizational performance (economy, efficiency, effectiveness). Field experience shows that this process differs both from one state to another and across activity sectors, among which the local transport service stands out as an 'institutional orphan'. Taking into account one of the recognition criteria of smart cities, urban mobility, the paper aims to substantiate that, despite the incrementalism specific to public service decentralization and its negative impact upon service efficiency, in the case of the local transport service the recognition of the right to mobility, and the need to ensure the conditions for exercising this right, impel the 'bureaucratic apparatus' to accelerate and consolidate the decentralization of this service. Accordingly, the paper puts forward a case study on the impact of decentralization upon the local public transport service of the Bucharest municipality.

  16. Decentralization of operating reactor licensing reviews: NRR Pilot Program

    International Nuclear Information System (INIS)

    Hannon, J.N.

    1984-07-01

    This report, which has incorporated comments received from the Commission and ACRS, describes the program for decentralization of selected operating reactor licensing technical review activities. The 2-year pilot program will be reviewed to verify that safety is enhanced as anticipated by the incorporation of prescribed management techniques and application of resources. If the program fails to operate as designed, it will be terminated.

  17. Integrating Collaborative and Decentralized Models to Support Ubiquitous Learning

    Science.gov (United States)

    Barbosa, Jorge Luis Victória; Barbosa, Débora Nice Ferrari; Rigo, Sandro José; de Oliveira, Jezer Machado; Rabello, Solon Andrade, Jr.

    2014-01-01

    The application of ubiquitous technologies in the improvement of education strategies is called Ubiquitous Learning. This article proposes the integration between two models dedicated to support ubiquitous learning environments, called Global and CoolEdu. CoolEdu is a generic collaboration model for decentralized environments. Global is an…

  18. Beyond the Diversity Crisis Model: Decentralized Diversity Planning and Implementation

    Science.gov (United States)

    Williams, Damon A.

    2008-01-01

    This article critiques the diversity crisis model of diversity planning in higher education and presents a decentralized diversity planning model. The model is based on interviews with the nation's leading diversity officers, a review of the literature, and the author's own experiences leading diversity change initiatives in higher education. The…

  19. Decentralized Urban Solid Waste Management in Indonesia | CRDI ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Urban areas of Indonesia generate about 55 000 tonnes of solid waste per day, ... four models of decentralized solid waste management in low-income urban ... In partnership with the Organization for Women in Science for the Developing ...

  20. Dynamical Orders of Decentralized H-infinity Controllers

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Hans Henrik

    1996-01-01

    The problem of decentralized control is addressed, i.e. the problem of designing a controller where each control input is allowed to use only some of the measurements. It is shown that for such problems there does not always exist a sequence of controllers of bounded order which obtains near optimal cont...

  1. Dynamical orders of decentralized H-infinity controllers

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, H.H.

    1999-01-01

    The problem of decentralized control is addressed, i.e. the problem of designing a controller where each control input is allowed to use only some of the measurements. It is shown that, for such problems, there does not always exist a sequence of controllers of bounded order which obtains near-op...

  2. Contesting sharia : state law, decentralization and Minangkabau custom

    NARCIS (Netherlands)

    Huda, Yasrul

    2013-01-01

    This book explains how Sharia, commonly called Perda Sharia (Sharia by-law) in Indonesia, was legislated on the provincial, regional and municipal level in West Sumatra. This process began after the government started a decentralization policy in 2000. Although the law of local autonomy prescribes

  3. A simulation model of a coordinated decentralized linear supply chain

    NARCIS (Netherlands)

    Ashayeri, Jalal; Cannella, S.; Lopez Campos, M.; Miranda, P.A.

    2015-01-01

    This paper presents a simulation-based study of a coordinated, decentralized linear supply chain (SC) system. In the proposed model, any supply tier considers its successors as part of its inventory system and generates replenishment orders on the basis of its partners’ operational information. We
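The echelon idea in this abstract, each tier treating its successor as part of its own inventory system, can be illustrated with a toy order-up-to simulation. This is a hedged sketch under simplifying assumptions, not the authors' model: the target level, the one-period replenishment delay, and the "coordinated" rule (net out the downstream partner's surplus before ordering) are all illustrative choices.

```python
def simulate(n_tiers, demands, coordinated=True, target=20.0):
    """Linear supply chain: tier 0 is the retailer facing customer demand,
    tier i orders from tier i+1. Replenishment arrives with a one-period
    delay. Returns the per-period orders placed by each tier."""
    inv = [target] * n_tiers
    pipeline = [0.0] * n_tiers          # order placed last period, arrives now
    orders = [[] for _ in range(n_tiers)]
    for d in demands:
        incoming = d                    # order received from downstream
        for i in range(n_tiers):
            inv[i] += pipeline[i]       # receive last period's replenishment
            inv[i] -= incoming          # ship the requested quantity downstream
            gap = target - inv[i]
            if coordinated and i > 0:
                # the partner's surplus stock counts toward our echelon position
                gap -= max(inv[i - 1] - target, 0.0)
            order = max(incoming + gap, 0.0)
            pipeline[i] = order
            orders[i].append(order)
            incoming = order            # becomes the demand seen upstream
    return orders
```

With constant demand the chain settles so that every tier simply passes the demand upstream; a demand shock is what separates the coordinated from the uncoordinated policy.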

  4. Decentralized Heat Supply – Alternative to Centralized One

    OpenAIRE

    V. I. Nazarov; L. A. Tarasevich; A. L. Burov

    2012-01-01

    The paper presents a concrete example that shows comparative characteristics of decentralized and centralized heat supply. It has been shown in the paper that selection of this or that variant of heat supply significantly depends on losses in heat supply networks.

  5. On the efficiency of decentralized exchange with resale possibilities

    DEFF Research Database (Denmark)

    Groes, Ebbe; Tranæs, Torben

    1999-01-01

    . If resale is possible and transaction costs are negligible, we would nevertheless expect an efficient allocation to result from decentralized exchange. This paper suggests that this depends on the nature of the commodity; while the allocation of a durable good will be efficient, the allocation...

  6. Decentralized Receding Horizon Control and Coordination of Autonomous Vehicle Formations

    NARCIS (Netherlands)

    Keviczky, T.; Borelli, F.; Fregene, K.; Godbole, D.; Bals, G.J.

    2008-01-01

    This paper describes the application of a novel methodology for high-level control and coordination of autonomous vehicle teams and its demonstration on high-fidelity models of the organic air vehicle developed at Honeywell Laboratories. The scheme employs decentralized receding horizon controllers

  7. Spectrum Allocation for Decentralized Transmission Strategies: Properties of Nash Equilibria

    Directory of Open Access Journals (Sweden)

    Peter von Wrycza

    2009-01-01

    Full Text Available The interaction of two transmit-receive pairs coexisting in the same area and communicating using the same portion of the spectrum is analyzed from a game theoretic perspective. Each pair utilizes a decentralized iterative water-filling scheme to greedily maximize the individual rate. We study the dynamics of such a game and find properties of the resulting Nash equilibria. The region of achievable operating points is characterized for both low- and high-interference systems, and the dependence on the various system parameters is explicitly shown. We derive the region of possible signal space partitioning for the iterative water-filling scheme and show how the individual utility functions can be modified to alter its range. Utilizing global system knowledge, we design a modified game encouraging better operating points in terms of sum rate compared to those obtained using the iterative water-filling algorithm and show how such a game can be imitated in a decentralized noncooperative setting. Although we restrict the analysis to a two player game, analogous concepts can be used to design decentralized algorithms for scenarios with more players. The performance of the modified decentralized game is evaluated and compared to the iterative water-filling algorithm by numerical simulations.
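The iterative water-filling dynamics described above can be sketched in a few lines. This is a minimal two-player illustration, not the authors' implementation: the channel-gain matrix, noise levels, and power budget are arbitrary assumptions, and each player simply water-fills against the interference created by the other's current allocation.

```python
import numpy as np

def waterfill(noise, power):
    """Classic water-filling: spread total `power` over subchannels with
    effective noise levels `noise`, maximizing sum of log(1 + p/noise)."""
    noise = np.asarray(noise, dtype=float)
    sorted_noise = np.sort(noise)
    n = len(noise)
    # Find the water level mu over the k lowest-noise (active) subchannels.
    for k in range(n, 0, -1):
        mu = (power + sorted_noise[:k].sum()) / k
        if mu > sorted_noise[k - 1]:
            break
    return np.maximum(mu - noise, 0.0)

def iterative_waterfilling(H, noise, power, iters=50):
    """Two-player iterative water-filling: H[i][j] is the (assumed flat)
    gain from transmitter j to receiver i; each player greedily responds
    to the other's interference until the allocations settle."""
    n_sub = len(noise)
    p = np.zeros((2, n_sub))
    for _ in range(iters):
        for i in range(2):
            j = 1 - i
            eff_noise = (noise + H[i][j] * p[j]) / H[i][i]
            p[i] = waterfill(eff_noise, power)
    return p
```

For low-interference cross gains the best-response iteration converges to a Nash equilibrium in which both players use their full power budget.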

  8. Users' perspectives on decentralized rural water services in Tanzania

    NARCIS (Netherlands)

    Masanyiwa, Z.S.; Niehof, A.; Termeer, C.J.A.M.

    2015-01-01

    This article examines the impact of decentralization reforms on improving access to domestic water supply in the rural districts of Kondoa and Kongwa, Tanzania, using a users' and a gender perspective. The article addresses the question whether and to what extent the delivery of gender-sensitive

  9. Independence and Collaboration; Why We Should Decentralize Writing Centers.

    Science.gov (United States)

    Smith, Louise Z.

    1986-01-01

    Notes the inevitable tensions that arise between centripetal writing centers and centrifugal writing across the curriculum programs. Examines the tutoring program at an eastern university as an example of a decentralized writing center that resists pressures to assume a uniform composition pedagogy and coordinates its work with many parts of the…

  10. Reaction Diffusion and Chemotaxis for Decentralized Gathering on FPGAs

    Directory of Open Access Journals (Sweden)

    Bernard Girau

    2009-01-01

    and rapid simulations of the complex dynamics of this reaction-diffusion model. Then we describe the FPGA implementation of the environment together with the agents, to study the major challenges that must be solved when designing a fast embedded implementation of the decentralized gathering model. We analyze the results according to the different goals of these hardware implementations.
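A common software reference point for the kind of reaction-diffusion dynamics ported to FPGA in this record is the Gray-Scott model; the sketch below shows one explicit-Euler update with periodic boundaries, where each cell reads only its four neighbors, the same locality that makes such models attractive for hardware. The parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def laplacian(a):
    """5-point stencil Laplacian with periodic (toroidal) boundaries,
    mirroring the neighbor exchanges a grid of hardware cells performs."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
            + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

def gray_scott_step(u, v, du=0.16, dv=0.08, f=0.035, k=0.065, dt=1.0):
    """One explicit-Euler update of the Gray-Scott reaction-diffusion
    system; parameter values are hypothetical, chosen for spot patterns."""
    uvv = u * v * v
    u_next = u + dt * (du * laplacian(u) - uvv + f * (1.0 - u))
    v_next = v + dt * (dv * laplacian(v) + uvv - (f + k) * v)
    return u_next, v_next
```

Seeding a small patch of `v` into an otherwise uniform `u = 1` grid and iterating this step produces the self-organizing patterns that drive the decentralized gathering behavior.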

  11. The Relationship Between Traffic Stability and Capacity for Decentralized Airspace

    NARCIS (Netherlands)

    Sunil, E.; Maas, J.B.; Ellerbroek, J.; Hoekstra, J.M.; Tra, M.A.P.

    2016-01-01

    The work that is presented in this paper is part of an ongoing study on the relationship between structure and capacity of decentralized airspace concepts. In this paper, the effect of traffic stability, which considers the occurrence of conflict chain reactions as a result of conflict resolution

  12. Strategic Alignment: Recruiting Students in a Highly Decentralized Environment

    Science.gov (United States)

    Levin, Richard

    2016-01-01

    All enrollment managers face some level of challenge related to decentralized decision making and operations. Policies and practices can vary considerably by academic area, creating administrative complexity, restricting the scope and speed of institutional initiatives, and limiting potential efficiencies. Central attempts to standardize or…

  13. A Logic for Auditing Accountability in Decentralized Systems

    NARCIS (Netherlands)

    Corin, R.J.; Etalle, Sandro; den Hartog, Jeremy; Lenzini, Gabriele; Staicu, I.

    We propose a language that allows agents to distribute data with usage policies in a decentralized architecture. In our framework, the compliance with usage policies is not enforced. However, agents may be audited by an authority at an arbitrary moment in time. We design a logic that allows audited

  14. Centralization and Decentralization of Schools' Physical Facilities Management in Nigeria

    Science.gov (United States)

    Ikoya, Peter O.

    2008-01-01

    Purpose: This research aims to examine the difference in the availability, adequacy and functionality of physical facilities in centralized and decentralized schools districts, with a view to making appropriate recommendations to stakeholders on the reform programmes in the Nigerian education sector. Design/methodology/approach: Principals,…

  15. Decentralized Heat Supply – Alternative to Centralized One

    Directory of Open Access Journals (Sweden)

    V. I. Nazarov

    2012-01-01

    Full Text Available The paper presents a concrete example that shows comparative characteristics of decentralized and centralized heat supply. It has been shown in the paper that selection of this or that variant of heat supply significantly depends on losses in heat supply networks.

  16. Centralized vs. De-centralized Multinationals and Taxes

    DEFF Research Database (Denmark)

    Nielsen, Søren Bo; Raimondos-Møller, Pascalis; Schjelderup, Guttorm

    2005-01-01

    The paper examines how country tax differences affect a multinational enterprise's choice to centralize or de-centralize its decision structure. Within a simple model that emphasizes the multiple conflicting roles of transfer prices in MNEs - here, as a strategic pre-commitment device and a tax...

  17. Decentralized School vs. Centralized School. Investigation No. 3.

    Science.gov (United States)

    Paseur, C. Herbert

    A report is presented of a comparative investigation of a decentralized and a centralized school facility. Comparative data are provided regarding costs of the facilities, amount of educational area provided by the facilities, and types of educational areas provided. Evaluative comments are included regarding cost savings versus educational…

  18. The influence of decentralization on effectiveness of extension ...

    African Journals Online (AJOL)

    Against the background of frequent organisational changes and restructuring, often based on impulsive decisions rather than structured feasibility studies or evaluations, this article examines the influence of decentralization on the performance of an extension organization. Based on a survey of 353 respondents from ...

  19. ATLAS construction schedule

    CERN Multimedia

    Kotamaki, M

    The goal during the last few months has been to freeze and baseline as much as possible the schedules of various ATLAS systems and activities. The main motivations for the re-baselining of the schedules have been the new LHC schedule aiming at first collisions in early 2006 and the encountered delays in civil engineering as well as in the production of some of the detectors. The process was started by first preparing a new installation schedule that takes into account all the new external constraints and the new ATLAS staging scenario. The installation schedule version 3 was approved in the March EB and it provides the Ready For Installation (RFI) milestones for each system, i.e. the date when the system should be available for the start of the installation. TCn is now interacting with the systems aiming at a more realistic and resource loaded version 4 before the end of the year. Using the new RFI milestones as driving dates a new summary schedule has been prepared, or is under preparation, for each system....

  20. A standard-enabled workflow for synthetic biology

    KAUST Repository

    Myers, Chris J.

    2017-06-15

    A synthetic biology workflow is composed of data repositories that provide information about genetic parts, sequence-level design tools to compose these parts into circuits, visualization tools to depict these designs, genetic design tools to select parts to create systems, and modeling and simulation tools to evaluate alternative design choices. Data standards enable the ready exchange of information within such a workflow, allowing repositories and tools to be connected from a diversity of sources. The present paper describes one such workflow that utilizes, among others, the Synthetic Biology Open Language (SBOL) to describe genetic designs, the Systems Biology Markup Language to model these designs, and SBOL Visual to visualize these designs. We describe how a standard-enabled workflow can be used to produce many types of design information, with multiple repositories and software tools exchanging information using a variety of data standards. Recently, the ACS Synthetic Biology journal has recommended the use of SBOL in their publications.
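The part-composition step of such a workflow can be illustrated without committing to any particular library. The sketch below is deliberately generic: the field names and JSON layout are illustrative stand-ins, not actual SBOL, and a real workflow would use an SBOL library and exchange RDF/XML documents between tools.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Part:
    """A genetic part as a repository might describe it (field names are
    illustrative, not actual SBOL terminology)."""
    name: str
    role: str        # e.g. "promoter", "CDS", "terminator"
    sequence: str

def compose_circuit(name, parts):
    """Sequence-level composition: concatenate part sequences in order and
    keep per-part provenance so a downstream tool can round-trip the design."""
    return {
        "circuit": name,
        "parts": [asdict(p) for p in parts],
        "sequence": "".join(p.sequence for p in parts),
    }

# Hypothetical three-part reporter design (sequences are placeholders).
design = compose_circuit("reporter", [
    Part("pTac", "promoter", "TTGACA"),
    Part("GFP", "CDS", "ATGGTG"),
    Part("T1", "terminator", "GCGGCC"),
])
print(json.dumps(design, indent=2))  # exchange format standing in for SBOL
```

The point is the data-standard idea itself: because the composed design retains both the full sequence and the identity of each sub-part, a visualization or simulation tool downstream can consume it without access to the original repository.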