WorldWideScience

Sample records for scheduling independent parallel

  1. Preemptive scheduling of independent jobs on identical parallel machines subject to migration delays

    NARCIS (Netherlands)

    Fishkin, A.V.; Jansen, K.; Sevastyanov, S.V.; Sitters, R.A.; Leonardi, S.

    2005-01-01

    We present hardness and approximation results for the problem of scheduling n independent jobs on m identical parallel machines subject to a migration delay d so as to minimize the makespan. We give a sharp threshold on the value of d for which the complexity of the problem changes from polynomial

  2. Preemptive scheduling of independent jobs on identical parallel machines subject to migration delays

    NARCIS (Netherlands)

    Sevastyanov, S. V.; Sitters, R. A.; Fishkin, A.V.

    2010-01-01

    We present hardness and approximation results for the problem of preemptive scheduling of n independent jobs on m identical parallel machines subject to a migration delay d with the objective to minimize the makespan. We give a sharp threshold on the value of d for which the complexity of the

  3. Duality-based algorithms for scheduling on unrelated parallel machines

    NARCIS (Netherlands)

    van de Velde, S.L.; van de Velde, S.L.

    1993-01-01

    We consider the following parallel machine scheduling problem. Each of n independent jobs has to be scheduled on one of m unrelated parallel machines. The processing of job J_j on machine M_i requires an uninterrupted period of positive length p_{ij}. The objective is to find an assignment of

  4. Scheduling Parallel Jobs Using Migration and Consolidation in the Cloud

    Directory of Open Access Journals (Sweden)

    Xiaocheng Liu

    2012-01-01

    An increasing number of high performance computing parallel applications leverage the power of the cloud for parallel processing. How to schedule these applications so as to improve quality of service is the key to hosting parallel applications in the cloud successfully. The large scale of the cloud makes parallel job scheduling more complicated, as even the simple parallel job scheduling problem is NP-complete. In this paper, we propose a parallel job scheduling algorithm named MEASY. MEASY adopts migration and consolidation to enhance EASY, the most popular scheduling algorithm. Our extensive experiments on well-known workloads show that our algorithm takes very good care of the quality of service. For two common parallel job scheduling objectives, our algorithm produces an up to 41.1% and an average of 23.1% improvement on the average response time, and an up to 82.9% and an average of 69.3% improvement on the average slowdown. Our algorithm is robust in that it tolerates inaccurate CPU usage estimation and high migration cost. Our approach involves only trivial modification of EASY and requires no additional techniques; it is practical and effective in the cloud environment.
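
    For readers unfamiliar with EASY backfilling, the baseline that MEASY extends, the following Python sketch shows the core rule: the queue head gets a reservation, and later jobs may jump ahead only if they cannot delay it. The Job model, the single-reservation policy, and all names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of plain EASY backfilling (assumed data model;
# not the paper's MEASY code, which adds migration and consolidation).
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    procs: int      # processors requested
    runtime: float  # user-estimated runtime

def easy_backfill(queue, free_procs, running):
    """Decide which waiting jobs start now (time 0).

    queue:   waiting Jobs in FCFS order (mutated in place)
    running: list of (finish_time, procs) for executing jobs
    """
    started = []
    # Launch from the head of the queue while jobs fit.
    while queue and queue[0].procs <= free_procs:
        job = queue.pop(0)
        free_procs -= job.procs
        running = running + [(job.runtime, job.procs)]
        started.append(job)
    if not queue:
        return started
    # Reservation for the blocked head job: the earliest time at which
    # enough processors will have been freed (the "shadow time").
    head, avail = queue[0], free_procs
    shadow_time, extra = float("inf"), 0
    for finish, procs in sorted(running):
        avail += procs
        if avail >= head.procs:
            shadow_time = finish
            extra = avail - head.procs  # procs still spare at that time
            break
    # Backfill: a later job may start now only if it fits and either
    # ends before the reservation or uses only the spare processors.
    for job in list(queue[1:]):
        fits_now = job.procs <= free_procs
        harmless = job.runtime <= shadow_time or job.procs <= extra
        if fits_now and harmless:
            queue.remove(job)
            free_procs -= job.procs
            if job.runtime > shadow_time:
                extra -= job.procs
            started.append(job)
    return started
```

    Conservative backfilling (see the backfilling records further down) differs only in giving every queued job, not just the head, such a reservation.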

  5. PERFORMANCE ANALYSIS BETWEEN EXPLICIT SCHEDULING AND IMPLICIT SCHEDULING OF PARALLEL ARRAY-BASED DOMAIN DECOMPOSITION USING OPENMP

    Directory of Open Access Journals (Sweden)

    MOHAMMED FAIZ ABOALMAALY

    2014-10-01

    With the continuous revolution of multicore architecture, several parallel programming platforms have been introduced to pave the way for fast and efficient development of parallel algorithms. Broadly, parallel computing takes two forms: Data-Level Parallelism (DLP) or Task-Level Parallelism (TLP). The former distributes data among the available processing elements, while the latter executes independent tasks concurrently. Most parallel programming platforms have built-in techniques to distribute the data among processors; these techniques are known as automatic distribution (scheduling). However, owing to the wide range of purposes, the variation in data types, the amount of distributed data, possible extra computational overhead, and other hardware-dependent factors, manual distribution can achieve better performance than automatic distribution. In this paper, this assumption is investigated by comparing automatic distribution with our newly proposed manual distribution of data among threads. Empirical results for matrix addition and matrix multiplication show a considerable performance gain when manual distribution is applied instead of automatic distribution.
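
    As a rough illustration of the automatic-versus-manual distinction (in Python rather than the paper's OpenMP setting), the sketch below contrasts leaving chunking to the runtime with precomputing one contiguous block of rows per worker; all names and the matrix-addition workload are assumptions.

```python
# Hypothetical sketch: "automatic" distribution lets the pool chop the
# iteration space, while "manual" distribution fixes one block per worker.
from concurrent.futures import ProcessPoolExecutor

def add_rows(args):
    a, b, lo, hi = args  # add rows lo..hi-1 of matrices a and b
    return [[x + y for x, y in zip(a[i], b[i])] for i in range(lo, hi)]

def manual_matrix_add(a, b, workers=4):
    n = len(a)
    # Manual distribution: contiguous, equally sized row blocks,
    # analogous to hand-rolled scheduling instead of a runtime default.
    bounds = [(k * n // workers, (k + 1) * n // workers)
              for k in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(add_rows, [(a, b, lo, hi) for lo, hi in bounds])
    return [row for part in parts for row in part]

if __name__ == "__main__":
    a = [[1] * 512 for _ in range(512)]
    b = [[2] * 512 for _ in range(512)]
    assert manual_matrix_add(a, b)[0][0] == 3
```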

  6. Integrated Production-Distribution Scheduling Problem with Multiple Independent Manufacturers

    Directory of Open Access Journals (Sweden)

    Jianhong Hao

    2015-01-01

    We consider the nonstandard parts supply chain with a public service platform for machinery integration in China. The platform assigns orders placed by a machinery enterprise to multiple independent manufacturers who produce nonstandard parts, and it builds a production schedule and a batch delivery schedule for each manufacturer in a coordinated manner. Each manufacturer has a single plant with parallel machines, located far away from the other manufacturers. Orders are first processed at the plants and then shipped directly from the plants to the enterprise, without intermediate inventory, so as to be finished before a given deadline. We study this integrated production-distribution scheduling problem with multiple manufacturers, maximizing a weighted sum of the manufacturers' profits under the constraints that all orders are finished before the deadline and no manufacturer's profit is negative. Based on an analysis of optimality conditions, we formulate the problem as a mixed integer programming model and use CPLEX to solve it.

  7. Parallelization and scheduling of data intensive particle physics analysis jobs on clusters of PCs

    CERN Document Server

    Ponce, S

    2004-01-01

    Summary form only given. Scheduling policies are proposed for parallelizing data intensive particle physics analysis applications on computer clusters. Particle physics analysis jobs require the analysis of tens of thousands of particle collision events, each event typically requiring 200 ms of processing time and 600 KB of data. Many jobs are launched concurrently by a large number of physicists. At first view, particle physics jobs seem easy to parallelize, since particle collision events can be processed independently of one another. However, since large amounts of data need to be accessed, the real challenge resides in making efficient use of the underlying computing resources. We propose several job parallelization and scheduling policies aimed at reducing job processing times and at increasing the sustainable load of a cluster server. Since particle collision events are usually reused by several jobs, cache-based job splitting strategies considerably increase cluster utilization and reduce job ...

  8. On program restructuring, scheduling, and communication for parallel processor systems

    Energy Technology Data Exchange (ETDEWEB)

    Polychronopoulos, Constantine D. [Univ. of Illinois, Urbana, IL (United States)

    1986-08-01

    This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed with a single goal: to maximize program speedup, or equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler, was used to transform programs into parallel form and to conduct experiments. Two new program restructuring techniques are presented: loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, these algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms, for dynamic and static scheduling respectively, are presented. Simulation results are given for a new dynamic scheduling algorithm, and its performance is compared to that of self-scheduling. Techniques for program partitioning and for minimizing interprocessor communication, for idealized program models and for real Fortran programs, are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup is discussed and experimental results are presented.

  9. Parallel genetic algorithms with migration for the hybrid flow shop scheduling problem

    Directory of Open Access Journals (Sweden)

    K. Belkadi

    2006-01-01

    This paper addresses scheduling problems in hybrid flow shop-like systems with a parallel genetic algorithm with migration (PGA_MIG). This parallel genetic algorithm model promotes genetic diversity by applying selection and reproduction mechanisms closer to nature. The spatial structure of the population is modified by dividing it into disjoint subpopulations. From time to time, individuals are exchanged between the different subpopulations (migration). The influence of parameters and dedicated strategies is studied. These parameters are the number of independent subpopulations, the interconnection topology between subpopulations, the choice/replacement strategy for the migrant individuals, and the migration frequency. A comparison between the sequential and parallel versions of the genetic algorithm (GA) is provided, covering both solution quality and execution time. The efficiency of the parallel model depends strongly on the parameters, especially the migration frequency. The parallel model also yields a significant improvement in computational time when implemented on a parallel architecture offering an adequate number of processors (as many processors as subpopulations).
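
    A minimal island-model skeleton of the kind the paper parallelizes is sketched below; the ring topology, best-replaces-worst migration policy, and toy operators are stand-ins for the strategies the paper actually compares.

```python
# Sketch of the island (migration) model behind a PGA_MIG-style GA;
# fitness, operators, and topology here are placeholders.
import random

def evolve(pop, fitness, steps=50):
    for _ in range(steps):
        a, b = random.sample(pop, 2)
        child = [random.choice(g) for g in zip(a, b)]  # uniform crossover
        i = random.randrange(len(child))
        child[i] = random.random()                     # mutation
        worst = min(range(len(pop)), key=lambda k: fitness(pop[k]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child                         # replacement
    return pop

def island_ga(fitness, islands=4, size=20, genes=8, epochs=10, migrants=2):
    pops = [[[random.random() for _ in range(genes)] for _ in range(size)]
            for _ in range(islands)]
    for _ in range(epochs):
        pops = [evolve(p, fitness) for p in pops]      # independent evolution
        # Migration over a ring topology: the best individuals replace
        # the worst ones on the next island (choice/replacement strategy).
        for i, pop in enumerate(pops):
            best = sorted(pop, key=fitness, reverse=True)[:migrants]
            nxt = pops[(i + 1) % islands]
            nxt.sort(key=fitness)
            nxt[:migrants] = [list(b) for b in best]
    return max((ind for p in pops for ind in p), key=fitness)
```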

  10. DIMACS Workshop on Interconnection Networks and Mapping, and Scheduling Parallel Computations

    CERN Document Server

    Rosenberg, Arnold L; Sotteau, Dominique; NSF Science and Technology Center in Discrete Mathematics and Theoretical Computer Science

    1995-01-01

    The interconnection network is one of the most basic components of a massively parallel computer system. Such systems consist of hundreds or thousands of processors interconnected to work cooperatively on computations. One of the central problems in parallel computing is the task of mapping a collection of processes onto the processors and routing network of a parallel machine. Once this mapping is done, it is critical to schedule computations within and communication among processors so that the inputs for a process are available where and when the process is scheduled to be computed. This book contains the refereed proceedings of a workshop attended by researchers from universities and laboratories, as well as practitioners involved in the design, implementation, and application of massively parallel systems. Focusing on interconnection networks of parallel architectures of today and of the near future, the book includes topics such as network topologies, network properties, message routing, network embeddings, network emulation, mappings, and efficient scheduling.

  11. Online Algorithms for Parallel Job Scheduling and Strip Packing

    NARCIS (Netherlands)

    Hurink, Johann L.; Paulus, J.J.

    We consider the online scheduling problem of parallel jobs on parallel machines, $P|\text{online-list}, m_j|C_{\max}$. For this problem we present a 6.6623-competitive algorithm. This improves the best known 7-competitive algorithm for this problem. The presented algorithm also applies to the problem

  12. An Integrated Approach to Locality-Conscious Processor Allocation and Scheduling of Mixed-Parallel Applications

    Energy Technology Data Exchange (ETDEWEB)

    Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.; Catalyurek, Umit V.; Kurc, Tahsin; Sadayappan, Ponnuswamy; Saltz, Joel H.

    2009-08-01

    Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application tasks with dependences. These applications exhibit both task- and data-parallelism, and combining the two (also called mixed parallelism) has been shown to be an effective execution model for them. In this paper, we present an algorithm to compute the appropriate mix of task- and data-parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner, based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the inter-task data communication volumes. A locality-conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications, as well as synthetic graphs, shows that our algorithm consistently generates schedules with lower makespan than CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules with lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than those of other scheduling approaches.

  13. A new scheduling algorithm for parallel sparse LU factorization with static pivoting

    Energy Technology Data Exchange (ETDEWEB)

    Grigori, Laura; Li, Xiaoye S.

    2002-08-20

    In this paper we present a static scheduling algorithm for parallel sparse LU factorization with static pivoting. The algorithm is divided into mapping and scheduling phases, using the symmetric pruned graphs of L' and U to represent dependencies. The scheduling algorithm is designed for driving the parallel execution of the factorization on a distributed-memory architecture. Experimental results and comparisons with SuperLU_DIST are reported after applying this algorithm to real-world application matrices on an IBM SP RS/6000 distributed memory machine.

  14. Options for Parallelizing a Planning and Scheduling Algorithm

    Science.gov (United States)

    Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin D.

    2011-01-01

    Space missions have a growing interest in putting multi-core processors onboard spacecraft. For many missions, limited processing power significantly slows operations. We investigate how continual planning and scheduling algorithms can exploit multi-core processing and outline different potential design decisions for a parallelized planning architecture. This organization of choices and challenges helps us form an initial design for parallelizing the CASPER planning system for a mesh multi-core processor. This work extends work presented at another workshop with some preliminary results.

  15. Parallel-aware, dedicated job co-scheduling within/across symmetric multiprocessing nodes

    Science.gov (United States)

    Jones, Terry R.; Watson, Pythagoras C.; Tuel, William; Brenner, Larry; Caffrey, Patrick; Fier, Jeffrey

    2010-10-05

    In a parallel computing environment comprising a network of SMP nodes each having at least one processor, a parallel-aware co-scheduling method and system for improving the performance and scalability of a dedicated parallel job having synchronizing collective operations. The method and system uses a global co-scheduler and an operating system kernel dispatcher adapted to coordinate interfering system and daemon activities on a node and across nodes to promote intra-node and inter-node overlap of said interfering system and daemon activities as well as intra-node and inter-node overlap of said synchronizing collective operations. In this manner, the impact of random short-lived interruptions, such as timer-decrement processing and periodic daemon activity, on synchronizing collective operations is minimized on large processor-count SPMD bulk-synchronous programming styles.

  16. A microeconomic scheduler for parallel computers

    Science.gov (United States)

    Stoica, Ion; Abdel-Wahab, Hussein; Pothen, Alex

    1995-01-01

    We describe a scheduler based on the microeconomic paradigm for scheduling on-line a set of parallel jobs in a multiprocessor system. In addition to the classical objectives of increasing the system throughput and reducing the response time, we consider fairness in allocating system resources among the users, and providing the user with control over the relative performances of his jobs. We associate with every user a savings account in which he receives money at a constant rate. When a user wants to run a job, he creates an expense account for that job to which he transfers money from his savings account. The job uses the funds in its expense account to obtain the system resources it needs for execution. The share of the system resources allocated to the user is directly related to the rate at which the user receives money; the rate at which the user transfers money into a job expense account controls the job's performance. We prove that starvation is not possible in our model. Simulation results show that our scheduler improves both system and user performances in comparison with two different variable partitioning policies. It is also shown to be effective in guaranteeing fairness and providing control over the performance of jobs.
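
    The account mechanics described above can be captured in a few lines; this toy sketch (invented rates and prices, not the authors' simulator) shows how income rates bound a user's long-run resource share.

```python
# Toy sketch of the savings/expense-account mechanism: users earn money
# at a constant rate and fund jobs, which buy resources with their funds.
class User:
    def __init__(self, income_rate):
        self.income_rate = income_rate   # money earned per tick
        self.savings = 0.0

class JobAccount:
    def __init__(self):
        self.funds = 0.0

def tick(users, transfers, price_per_cpu):
    """One accounting step: income accrues, users fund jobs, and each
    funded job can afford CPUs in proportion to its expense account."""
    for u in users:
        u.savings += u.income_rate                 # constant-rate income
    for user, job, amount in transfers:            # user-chosen transfers
        amount = min(amount, user.savings)         # cannot overdraw savings
        user.savings -= amount
        job.funds += amount
    # A job's resource share follows from what it can pay.
    return {job: job.funds / price_per_cpu for _, job, _ in transfers}
```

    Because income accrues at a constant rate, a user's spending, and hence the share bought for that user's jobs, is bounded by that rate, which mirrors the rate-based share and no-starvation property described above.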

  17. Impact of interference on the performance of selection based parallel multiuser scheduling

    KAUST Repository

    Nam, Sungsik

    2012-02-01

    In conventional multiuser parallel scheduling schemes, every scheduled user is interfering with every other scheduled user, which limits the capacity and performance of multiuser systems, and the level of interference becomes substantial as the number of scheduled users increases. Based on the above observations, we investigate the trade-off between the system throughput and the number of scheduled users through the exact analysis of the total average sum rate capacity and the average spectral efficiency. Our analytical results can help the system designer to carefully select the appropriate number of scheduled users to maximize the overall throughput while maintaining an acceptable quality of service under certain channel conditions. © 2012 IEEE.

  18. Parallel Machine Scheduling with Batch Delivery to Two Customers

    Directory of Open Access Journals (Sweden)

    Xueling Zhong

    2015-01-01

    In some make-to-order supply chains, the manufacturer needs to process and deliver products for customers at different locations. To coordinate production and distribution operations at the detailed scheduling level, we study a parallel machine scheduling model with batch delivery to two customers by a vehicle routing method. In this model, the supply chain consists of a processing facility with m parallel machines and two customers. A set of jobs containing n_1 jobs from customer 1 and n_2 jobs from customer 2 are first processed in the processing facility and then delivered to the customers directly, without intermediate inventory. The problem is to find a joint schedule of production and distribution such that the tradeoff between the maximum arrival time of the jobs and the total distribution cost is minimized. The distribution cost of a delivery shipment consists of a fixed charge and a variable cost proportional to the total distance of the route taken by the shipment. We provide polynomial time heuristics with worst-case performance analysis for the problem. If m = 2 and (n_1 - b)(n_2 - b) < 0, we propose a heuristic with a worst-case ratio bound of 3/2, where b is the capacity of the delivery shipment. Otherwise, the worst-case ratio bound of the heuristic we propose is 2 - 2/(m+1).

  19. Parallel-Machine Scheduling with Time-Dependent and Machine Availability Constraints

    Directory of Open Access Journals (Sweden)

    Cuixia Miao

    2015-01-01

    We consider the parallel-machine scheduling problem in which the machines have availability constraints and the processing time of each job is a simple linear increasing function of its starting time. For the makespan minimization problem, which is NP-hard in the strong sense, we discuss the Longest Deteriorating Rate algorithm and the List Scheduling algorithm; we also provide a lower bound for any optimal schedule. For the total completion time minimization problem, we establish strong NP-hardness, and we present a dynamic programming algorithm and a fully polynomial time approximation scheme for the two-machine problem. Furthermore, we extend the dynamic programming algorithm to the total weighted completion time minimization problem.
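
    To make the deterioration model concrete, here is a hedged sketch of List Scheduling and the Longest Deteriorating Rate rule under simple linear processing times p_j(s) = b_j * s, so a job started at time s finishes at s(1 + b_j); availability constraints are reduced to a per-machine ready time, which is coarser than the paper's model.

```python
# Sketch under the simple linear deterioration model p_j(s) = b_j * s:
# a job started at time s completes at s * (1 + b_j). Machine
# availability is approximated by a positive ready time per machine.
def list_schedule(rates, machine_ready):
    """Assign jobs (deterioration rates, in list order) greedily to the
    machine on which each would complete earliest; return the makespan."""
    loads = list(machine_ready)          # next free time per machine (> 0)
    for b in rates:
        i = min(range(len(loads)), key=lambda k: loads[k] * (1 + b))
        loads[i] *= (1 + b)
    return max(loads)

def ldr_schedule(rates, machine_ready):
    # Longest Deteriorating Rate: sequence larger rates first.
    return list_schedule(sorted(rates, reverse=True), machine_ready)

print(ldr_schedule([0.5, 0.2, 0.4, 0.1], [1.0, 1.0]))
```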

  20. Multi-objective parallel particle swarm optimization for day-ahead Vehicle-to-Grid scheduling

    DEFF Research Database (Denmark)

    Soares, Joao; Vale, Zita; Canizes, Bruno

    2013-01-01

    This paper presents a methodology for multi-objective day-ahead energy resource scheduling for smart grids considering intensive use of distributed generation and Vehicle-To-Grid (V2G). The main focus is the application of weighted Pareto to a multi-objective parallel particle swarm approach aiming to solve the dual-objective V2G scheduling: minimizing total operation costs and maximizing V2G income. A realistic mathematical formulation, considering the network constraints and V2G charging and discharging efficiencies, is presented, and parallel computing is applied to the Pareto weights. AC power flow

  1. Parallel-Batch Scheduling with Two Models of Deterioration to Minimize the Makespan

    Directory of Open Access Journals (Sweden)

    Cuixia Miao

    2014-01-01

    We consider the bounded parallel-batch scheduling problem with two models of deterioration, in which the processing time of a job is p_j = a_j + αt under the first model and p_j = a + α_j t under the second. The objective is to minimize the makespan. We present O(n log n)-time algorithms for the respective single-machine problems, and we propose fully polynomial time approximation schemes to solve the identical-parallel-machine and uniform-parallel-machine problems.

  2. Parallel Branch-and-Bound Methods for the Job Shop Scheduling

    DEFF Research Database (Denmark)

    Clausen, Jens; Perregaard, Michael

    1998-01-01

    Job-shop scheduling (JSS) problems are among the more difficult to solve in the class of NP-complete problems. The only successful approach has been branch-and-bound based algorithms, but such algorithms depend heavily on good bound functions. Much work has been done to identify such functions for the JSS problem, but with limited success. Even with recent methods, it is still not possible to solve problems substantially larger than 10 machines and 10 jobs. In the current study, we focus on parallel methods for solving JSS problems. We implement two different parallel branch-and-bound algorithms ...

  3. Backfilling with Fairness and Slack for Parallel Job Scheduling

    International Nuclear Information System (INIS)

    Sodan, Angela C; Wei Jin

    2010-01-01

    Parallel job scheduling typically combines a basic policy such as FCFS with backfilling, i.e. moving jobs ahead of their regular scheduling position if they do not delay the jobs ahead of them in the queue, according to the rules of the backfilling approach applied. Commonly used are conservative and EASY backfilling, which offer either worse response times but better predictability, or better response times but poor predictability. The paper proposes a relaxation of conservative backfilling that permits jobs to be shifted within certain constraints in order to backfill more jobs, reduce fragmentation, and consequently obtain better response times. At the same time, deviation from fairness is kept low and predictability remains high. The results of the experimental evaluation show that these goals are met, with response-time performance lying, as expected, between conservative and EASY backfilling.

  4. Backfilling with Fairness and Slack for Parallel Job Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Sodan, Angela C; Wei Jin, E-mail: acsodan@uwindsor.ca [University of Windsor, Computer Science, Windsor, Ontario (Canada)

    2010-11-01

    Parallel job scheduling typically combines a basic policy such as FCFS with backfilling, i.e. moving jobs ahead of their regular scheduling position if they do not delay the jobs ahead of them in the queue, according to the rules of the backfilling approach applied. Commonly used are conservative and EASY backfilling, which offer either worse response times but better predictability, or better response times but poor predictability. The paper proposes a relaxation of conservative backfilling that permits jobs to be shifted within certain constraints in order to backfill more jobs, reduce fragmentation, and consequently obtain better response times. At the same time, deviation from fairness is kept low and predictability remains high. The results of the experimental evaluation show that these goals are met, with response-time performance lying, as expected, between conservative and EASY backfilling.

  5. A Graph-Based Approach to Action Scheduling in a Parallel Database System

    NARCIS (Netherlands)

    Grefen, P.W.P.J.; Apers, Peter M.G.

    Parallel database machines are meant to obtain high performance in transaction processing, both in terms of response time and throughput. To obtain high performance, good scheduling of the execution of the various actions in transactions is crucial. This paper describes a graph-based technique for

  6. Variable Neighborhood Search for Parallel Machines Scheduling Problem with Step Deteriorating Jobs

    Directory of Open Access Journals (Sweden)

    Wenming Cheng

    2012-01-01

    In many real scheduling environments, a job processed later needs a longer time than the same job started earlier. This phenomenon, known as deterioration, appears in many industrial applications. In this paper, we study the problem of minimizing the total completion time on identical parallel machines where the processing time of a job is a step function of its starting time and a deteriorating date that is individual to each job. First, a mixed integer programming model is presented for the problem. Then, a modified weight-combination search algorithm and a variable neighborhood search are employed to yield optimal or near-optimal schedules. To evaluate the performance of the proposed algorithms, computational experiments are performed on randomly generated test instances. The computational results show that the proposed approaches obtain near-optimal solutions in reasonable computational time, even for large-sized problems.
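
    The step model and objective are easy to state in code; this evaluator (illustrative names, sequences given as per-machine job lists) is the kind of routine both the MIP and the neighborhood searches above would score candidates with.

```python
# Sketch of the step-deterioration model and a schedule evaluator
# (total completion time); the encoding is an assumption for illustration.
def step_time(a, penalty, deadline, start):
    """p_j = a_j if the job starts by its deteriorating date, else a_j + b_j."""
    return a if start <= deadline else a + penalty

def total_completion_time(machines, jobs):
    """machines: list of job-index sequences, one per machine;
    jobs[j] = (a_j, b_j, deteriorating_date_j)."""
    total = 0.0
    for seq in machines:
        t = 0.0
        for j in seq:
            a, b, d = jobs[j]
            t += step_time(a, b, d, t)   # completion time of job j
            total += t
    return total

jobs = [(3.0, 2.0, 4.0), (5.0, 1.0, 2.0), (2.0, 3.0, 6.0)]
print(total_completion_time([[0, 2], [1]], jobs))
```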

  7. An Extended Flexible Job Shop Scheduling Model for Flight Deck Scheduling with Priority, Parallel Operations, and Sequence Flexibility

    Directory of Open Access Journals (Sweden)

    Lianfei Yu

    2017-01-01

    Efficient scheduling of the supporting operations of aircraft on the flight deck is critical for an aircraft carrier, where even a few seconds' improvement may decide the outcome of a battle. In this paper, we study the supporting operations of carrier-based aircraft and investigate three simultaneous operation relationships in the supporting process: precedence constraints, parallel operations, and sequence flexibility. Furthermore, multifunctional aircraft have to take off synergistically and participate in combat cooperatively, and their takeoff order must be prioritized during the scheduling period according to certain operational regulations. To prioritize the takeoff order efficiently while minimizing the total time budget of the whole takeoff duration, we propose a novel mixed integer linear programming (MILP) formulation for the flight deck scheduling problem. Motivated by the hardness of the MILP, we design an improved differential evolution algorithm combined with typical local search strategies to improve computational efficiency. We numerically compare the performance of our algorithm with the classical genetic algorithm and the standard differential evolution algorithm, and the results show that our algorithm obtains better scheduling schemes that meet both the operational relations and the takeoff priority requirements.

  8. Multiple Independent File Parallel I/O with HDF5

    Energy Technology Data Exchange (ETDEWEB)

    Miller, M. C.

    2016-07-13

    The HDF5 library has supported the I/O requirements of HPC codes at Lawrence Livermore National Labs (LLNL) since the late 90's. In particular, HDF5 used in the Multiple Independent File (MIF) parallel I/O paradigm has supported LLNL codes' scalable I/O requirements and has recently been gainfully used at scales as large as O(10^6) parallel tasks.

  9. Short term scheduling of multiple grid-parallel PEM fuel cells for microgrid applications

    Energy Technology Data Exchange (ETDEWEB)

    El-Sharkh, M.Y.; Rahman, A.; Alam, M.S. [Dept. of Electrical and Computer Engineering, University of South Alabama, Mobile, AL 36688 (United States)

    2010-10-15

    This paper presents a short term scheduling scheme for multiple grid-parallel PEM fuel cell power plants (FCPPs) connected to supply electrical and thermal energy to a microgrid community. As in the case of regular power plants, short term scheduling of FCPPs is a cost-based optimization problem that includes the cost of operation, thermal power recovery, and the power trade with the local utility grid. Because the microgrid community can trade power with the local grid, the power balance constraint is not applicable; other constraints, such as the real power operating limits of the FCPPs and minimum up and down times, are used instead. To solve the short term scheduling problem of the FCPPs, a hybrid technique based on evolutionary programming (EP) and a hill climbing technique (HC) is used. The EP is used to estimate the optimal schedule and the output power of each FCPP. The HC technique is used to monitor the feasibility of the solution during the search process. The short term scheduling problem is used to estimate the schedule and the electrical and thermal power output of five FCPPs supplying a maximum power of 300 kW. (author)

  10. Dynamic workload balancing of parallel applications with user-level scheduling on the Grid

    CERN Document Server

    Korkhov, Vladimir V; Krzhizhanovskaya, Valeria V

    2009-01-01

    This paper suggests a hybrid resource management approach for efficient parallel distributed computing on the Grid. It operates on both the application and system levels, combining user-level job scheduling with a dynamic workload balancing algorithm that automatically adapts a parallel application to the heterogeneous resources, based on the actual resource parameters and the estimated requirements of the application. The hybrid environment and the algorithm for automated load balancing are described, the influence of the resource heterogeneity level is measured, and the speedup achieved with this technique is demonstrated for different types of applications and resources.

  11. Comparative Simulation Study of Production Scheduling in the Hybrid and the Parallel Flow

    Directory of Open Access Journals (Sweden)

    Varela Maria L.R.

    2017-06-01

    Scheduling is one of the most important decisions in production control. An approach is proposed for supporting users in solving scheduling problems by choosing the combination of the physical manufacturing system configuration and the material handling system settings. The approach considers two alternative manufacturing scheduling configurations in a two-stage product-oriented manufacturing system, exploring the hybrid flow shop (HFS) and the parallel flow shop (PFS) environments. To illustrate the application of the proposed approach, an industrial case from the automotive components industry is studied. The main aim of this research is to compare production scheduling in the hybrid and the parallel flow under the makespan minimization criterion. Thus the HFS and PFS performance is compared and analyzed, mainly in terms of the makespan, as the transportation times vary. The study shows that the HFS performs clearly better when the work stations' processing times are unbalanced, either inherently or as a consequence of adding transport times to the processing time of just one of the work stations, but it loses this advantage, becoming worse than the PFS configuration, when the work stations' processing times are balanced, either inherently or as a consequence of adding transport times to the processing times of all work stations. This means that the physical layout configuration, along with the way transport times are included in the work stations' processing times, should be carefully considered because of its influence on the performance of both the HFS and PFS configurations.

  12. Postulated licensing schedule for an independent spent-fuel-storage installation

    International Nuclear Information System (INIS)

    Ludwick, J.D.

    1982-11-01

    A review of licensing requirements, processes, and anticipated actions for independent spent fuel storage installations (ISFSIs) was conducted in order to develop an estimated schedule and sequence of events for licensing a new ISFSI. This estimate will be useful to potential ISFSI owners in planning for the licensing of their facilities. It is concluded that, although many uncertainties exist with respect to such things as legal appeals, about 29 months are estimated to elapse between license application and license issuance for an ISFSI. This estimate is in reasonable agreement with a previous time estimate for licensing an ISFSI and, taking into account the special circumstances involved, with the actual licensing schedule for the GE-Morris ISFSI. However, individual portions of the licensing schedule from each case studied sometimes vary significantly.

  13. Memory Retrieval Given Two Independent Cues: Cue Selection or Parallel Access?

    Science.gov (United States)

    Rickard, Timothy C.; Bajic, Daniel

    2004-01-01

    A basic but unresolved issue in the study of memory retrieval is whether multiple independent cues can be used concurrently (i.e., in parallel) to recall a single, common response. A number of empirical results, as well as potentially applicable theories, suggest that retrieval can proceed in parallel, though Rickard (1997) set forth a model that…

  14. Identical parallel machine scheduling with nonlinear deterioration and multiple rate modifying activities

    Directory of Open Access Journals (Sweden)

    Ömer Öztürkoğlu

    2017-07-01

    This study focuses on identical parallel machine scheduling of jobs with deteriorating processing times and rate-modifying activities (RMAs). We consider nonlinearly increasing processing times of jobs based on their position assignment. Rate-modifying activities are also considered, to recover the increase in the processing times of jobs due to deterioration. We propose heuristic algorithms that rely on ant colony optimization and simulated annealing to solve the problem with multiple RMAs in a reasonable amount of time. Finally, we show that the ant colony optimization algorithm generates close-to-optimal solutions and better results than the simulated annealing algorithm.

  15. Comparing the performance of different meta-heuristics for unweighted parallel machine scheduling

    Directory of Open Access Journals (Sweden)

    Adamu, Mumuni Osumah

    2015-08-01

    This article considers the due window scheduling problem of minimizing the number of early and tardy jobs on identical parallel machines. This problem is known to be NP-complete, so finding an optimal solution efficiently is unlikely. Three meta-heuristics and their hybrids are proposed, and extensive computational experiments are conducted. The purpose of this paper is to compare the performance of these meta-heuristics and their hybrids and to determine the best among them. Detailed comparative tests have also been conducted to analyse the different heuristics, with the simulated annealing hybrid giving the best results.
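
    For concreteness, a generic simulated-annealing skeleton of the kind compared above is sketched below, on a toy encoding where a job permutation is dispatched to the least-loaded machine and the objective counts jobs completing outside a common due window [e, d]; the paper's actual encodings and hybrids differ.

```python
# Toy simulated annealing for an early/tardy count objective; encoding,
# due window, and cooling parameters are illustrative assumptions.
import math
import random

def early_tardy(perm, p, m, e, d):
    """Dispatch the permutation to the least-loaded machine; count jobs
    whose completion time falls outside the common due window [e, d]."""
    loads, bad = [0.0] * m, 0
    for j in perm:
        i = min(range(m), key=loads.__getitem__)
        loads[i] += p[j]
        if not e <= loads[i] <= d:
            bad += 1
    return bad

def anneal(p, m, e, d, t0=5.0, cooling=0.999, iters=20000):
    cur = list(range(len(p)))
    random.shuffle(cur)
    cur_cost = early_tardy(cur, p, m, e, d)
    best, best_cost, t = cur[:], cur_cost, t0
    for _ in range(iters):
        i, j = random.sample(range(len(p)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]   # swap-two-jobs neighborhood
        cost = early_tardy(cand, p, m, e, d)
        # accept improvements always, worsenings with Boltzmann probability
        if cost <= cur_cost or random.random() < math.exp((cur_cost - cost) / t):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand[:], cost
        t *= cooling
    return best, best_cost
```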

  16. A Hybrid Genetic Algorithm to Minimize Total Tardiness for Unrelated Parallel Machine Scheduling with Precedence Constraints

    Directory of Open Access Journals (Sweden)

    Chunfeng Liu

    2013-01-01

    The paper presents a novel hybrid genetic algorithm (HGA) for a deterministic scheduling problem where multiple jobs with arbitrary precedence constraints are processed on multiple unrelated parallel machines. The objective is to minimize total tardiness, since delays of the jobs may lead to punishment costs or cancellation of orders by clients in many situations. A priority rule-based heuristic algorithm, which schedules a prior job on a prior machine according to the priority rule at each iteration, is suggested and embedded in the HGA to provide initial feasible schedules that can be improved in later stages. Computational experiments show that the proposed HGA performs well with respect to accuracy and efficiency of solutions for small-sized problems and obtains better results than the conventional genetic algorithm within the same runtime for large-sized problems.

  17. Robust Parallel Machine Scheduling Problem with Uncertainties and Sequence-Dependent Setup Time

    Directory of Open Access Journals (Sweden)

    Hongtao Hu

    2016-01-01

    A parallel machine scheduling problem in plastic production is studied in this paper. In this problem, the processing times and arrival times are uncertain but lie in their respective intervals. In addition, each job must be processed together with a mold, and jobs that belong to the same family can share a mold; mold-changing time is therefore required between two consecutive jobs that belong to different families, which is known as sequence-dependent setup time. This paper aims to identify a robust schedule under the min-max regret criterion. It is proved that, for each feasible solution, the scenario incurring maximal regret lies in a finite set of extreme scenarios. A mixed integer linear programming formulation and an exact algorithm are proposed to solve the problem. Moreover, a modified artificial bee colony algorithm is developed to solve large-scale problems. The performance of the presented algorithm is evaluated through extensive computational experiments, and the results show that the proposed algorithm surpasses the exact method in terms of objective value and computational time.

  18. Multiobjective Variable Neighborhood Search algorithm for scheduling independent jobs on computational grid

    Directory of Open Access Journals (Sweden)

    S. Selvi

    2015-07-01

    Grid computing solves high performance and high-throughput computing problems by sharing resources ranging from personal computers to supercomputers distributed around the world. As grid environments facilitate distributed computation, the scheduling of grid jobs has become an important issue. In this paper, an implementation of the Multiobjective Variable Neighborhood Search (MVNS) algorithm for scheduling independent jobs on a computational grid is investigated. The performance of the proposed algorithm has been evaluated against the Min-Min algorithm, Simulated Annealing (SA), and the Greedy Randomized Adaptive Search Procedure (GRASP) algorithm. Simulation results show that the MVNS algorithm generally performs better than the other metaheuristic methods.
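
    The single-objective core that a VNS such as MVNS builds on can be written compactly; in this sketch (makespan only, with assignment[j] giving job j's machine), shaking perturbs k jobs and a move-based local search follows, whereas the paper's multiobjective version maintains a Pareto set.

```python
# Bare-bones variable neighborhood search on a toy makespan objective;
# neighborhoods and parameters are illustrative, not the MVNS design.
import random

def makespan(assignment, p, m):
    loads = [0.0] * m
    for j, i in enumerate(assignment):
        loads[i] += p[j]
    return max(loads)

def shake(assignment, k, m):
    x = assignment[:]
    for j in random.sample(range(len(x)), k):
        x[j] = random.randrange(m)       # reassign k random jobs
    return x

def local_search(x, p, m):
    improved = True
    while improved:
        improved = False
        for j in range(len(x)):
            for i in range(m):
                y = x[:]
                y[j] = i                 # single-job move neighborhood
                if makespan(y, p, m) < makespan(x, p, m):
                    x, improved = y, True
    return x

def vns(p, m, k_max=4, rounds=50):
    x = [random.randrange(m) for _ in p]
    for _ in range(rounds):
        k = 1
        while k <= k_max:
            y = local_search(shake(x, k, m), p, m)
            if makespan(y, p, m) < makespan(x, p, m):
                x, k = y, 1              # success: restart neighborhoods
            else:
                k += 1                   # failure: widen the shake
    return x
```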

  19. Massively Parallel Dimension Independent Adaptive Metropolis

    KAUST Repository

    Chen, Yuxin

    2015-05-14

    This work considers black-box Bayesian inference over high-dimensional parameter spaces. The well-known and widely respected adaptive Metropolis (AM) algorithm is extended herein to asymptotically scale uniformly with respect to the underlying parameter dimension, by respecting the variance, for Gaussian targets. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this massively parallel dimension-independent adaptive Metropolis (MPDIAM) GPU implementation exhibits a factor of four improvement versus the CPU-based Intel MKL version alone, which is itself already a factor of three improvement versus the serial version. The scaling to multiple CPUs and GPUs exhibits a form of strong scaling in terms of the time necessary to reach a certain convergence criterion, through a combination of longer time per sample batch (weak scaling) and yet fewer necessary samples to convergence. This is illustrated by efficiently sampling from several Gaussian and non-Gaussian targets for dimension d ≥ 1000.
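
    As background, the serial adaptive Metropolis core that DIAM/MPDIAM builds on fits in a short routine; the scaling constant and the recursive covariance update below follow the usual AM recipe and are assumptions, not the paper's exact tuning.

```python
# Compact single-chain adaptive Metropolis sketch (the serial starting
# point that dimension-independent, GPU-parallel variants scale up).
import numpy as np

def adaptive_metropolis(log_target, x0, n_steps=5000, t0=200):
    d = len(x0)
    x, lp = np.asarray(x0, float), log_target(x0)
    chain = np.empty((n_steps, d))
    mean, cov = x.copy(), np.eye(d)
    sd = 2.4 ** 2 / d                      # classic AM scaling heuristic
    for t in range(n_steps):
        prop_cov = cov if t >= t0 else np.eye(d)   # adapt after warm-up
        y = np.random.multivariate_normal(x, sd * prop_cov + 1e-10 * np.eye(d))
        ly = log_target(y)
        if np.log(np.random.rand()) < ly - lp:     # Metropolis accept step
            x, lp = y, ly
        chain[t] = x
        # recursive update of the sample mean/covariance for adaptation
        w = 1.0 / (t + 2)
        delta = x - mean
        mean += w * delta
        cov += w * (np.outer(delta, delta) - cov)
    return chain

samples = adaptive_metropolis(lambda z: -0.5 * np.sum(z ** 2), np.zeros(10))
```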

  20. Cloud Computing Task Scheduling Based on Cultural Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Li Jian-Wen

    2016-01-01

    A task scheduling strategy based on a cultural genetic algorithm (CGA) is proposed in order to improve the efficiency of task scheduling in the cloud computing platform, targeting the minimization of the total time and cost of task scheduling. The improved genetic algorithm is used to construct the main population space and the knowledge space under a cultural framework; the two spaces evolve independently in parallel, forming a mechanism of mutual promotion for dispatching cloud tasks. At the same time, to counter the tendency of the genetic algorithm to fall into local optima, a non-uniform mutation operator is introduced to improve the search performance of the algorithm. The experimental results show that CGA reduces the total time and lowers the cost of scheduling, making it an effective algorithm for cloud task scheduling.

  1. Parallel Multi-Objective Genetic Algorithm for Short-Term Economic Environmental Hydrothermal Scheduling

    Directory of Open Access Journals (Sweden)

    Zhong-Kai Feng

    2017-01-01

    With the increasingly serious energy crisis and environmental pollution, the short-term economic environmental hydrothermal scheduling (SEEHTS) problem is becoming more and more important in modern electrical power systems. In order to handle the SEEHTS problem efficiently, the parallel multi-objective genetic algorithm (PMOGA) is proposed in this paper. Based on the Fork/Join parallel framework, PMOGA divides the whole population of individuals into several subpopulations which evolve in different cores simultaneously. In this way, PMOGA avoids wasting computational resources and increases population diversity. Moreover, a constraint handling technique is used to handle the complex constraints in SEEHTS, and a selection strategy based on constraint violation is employed to ensure convergence speed and solution feasibility. The results for a hydrothermal system in different cases indicate that PMOGA makes the most of system resources to significantly improve computing efficiency and solution quality. Moreover, PMOGA shows competitive performance on SEEHTS when compared with several other methods reported in the previous literature, providing a new approach for the operation of hydrothermal systems.

  2. Element-topology-independent preconditioners for parallel finite element computations

    Science.gov (United States)

    Park, K. C.; Alexander, Scott

    1992-01-01

    A family of preconditioners for the solution of finite element equations is presented, which are element-topology independent and thus applicable to element order-free parallel computations. A key feature of the present preconditioners is the repeated use of element connectivity matrices and their left and right inverses. The properties and performance of the present preconditioners are demonstrated via beam and two-dimensional finite element matrices for implicit time integration computations.

  3. Literature Review on the Hybrid Flow Shop Scheduling Problem with Unrelated Parallel Machines

    Directory of Open Access Journals (Sweden)

    Eliana Marcela Peña Tibaduiza

    2017-01-01

    Context: The hybrid flow shop problem with unrelated parallel machines has been studied less in academia than the hybrid flow shop with identical processors; for this reason, there are few reports about applications of this problem in industry. Method: A literature review of the state of the art on the flow-shop scheduling problem was conducted by collecting and analyzing academic papers from several scientific databases. To this end, a search query was constructed using keywords defining the problem and checking the inclusion of unrelated parallel machines in the definition; as a result, 50 papers were finally selected for this study. Results: A classification of the problem according to the characteristics of the production system is provided, along with the solution methods, constraints, and objective functions commonly used. Conclusions: An increasing trend is observed in studies of flow shops with multiple stages, but few are based on industrial case studies.

  4. An Automatic Instruction-Level Parallelization of Machine Code

    Directory of Open Access Journals (Sweden)

    MARINKOVIC, V.

    2018-02-01

    Prevailing multicores and novel manycores have made parallelization of embedded software, which is still largely written as sequential code, a great challenge of the modern day. In this paper, automatic code parallelization is considered, focusing on developing a parallelization tool at the binary level and on the validation of this approach. A novel instruction-level parallelization algorithm for assembly code is developed, which uses the register names after SSA conversion to find independent blocks of code and then schedules the independent blocks using METIS to achieve good load balance. Sequential consistency is verified, and validation is done by measuring the program execution time on the target architecture. Great speedup, taken as the performance measure in the validation process, and optimal load balancing are achieved for multicore RISC processors with 2 to 16 cores (e.g. MIPS, MicroBlaze, etc.). In particular, for 16 cores, the average speedup is 7.92x, while in some cases it reaches 14x. The approach to automatic parallelization provided by this paper is useful to researchers and developers in the area of parallelization as the basis for further optimizations, as the back-end of a compiler, or as a code parallelization tool for embedded systems.

  5. Adaptive Dynamic Process Scheduling on Distributed Memory Parallel Computers

    Directory of Open Access Journals (Sweden)

    Wei Shu

    1994-01-01

    One of the challenges in programming distributed memory parallel machines is deciding how to allocate work to processors. This problem is particularly important for computations with unpredictable dynamic behavior or irregular structure. We present a scheme for dynamic scheduling of medium-grained processes that is useful in this context. The adaptive contracting within neighborhood (ACWN) scheme is dynamic, distributed, load-dependent, and scalable. It deals with dynamic and unpredictable creation of processes and adapts to different systems. The scheme is described and contrasted with two other schemes that have been proposed in this context, namely randomized allocation and the gradient model. The performance of the three schemes on an Intel iPSC/2 hypercube is presented and analyzed. The experimental results show that even though the ACWN algorithm incurs somewhat larger overhead than randomized allocation, it achieves better performance in most cases due to its adaptiveness. Its feature of quickly spreading the work helps it outperform the gradient model in performance and scalability.

  6. Real-time objects development: Study and proposal for a parallel scheduling architecture

    International Nuclear Information System (INIS)

    Rioux, Laurent

    1997-01-01

    This thesis contributes to the programming and execution control of real-time object oriented applications. Real-time objects are attractive for programming real-time applications because this model combines concurrency with encapsulation, modularity, and reusability while taking the real-time constraints of the application into account. One essential quality of this approach is that parallelism and real-time constraints can be specified directly at the model level of the application. An annotation system for C++ has been defined to describe the real-time specifications in the model (or in the source code) of the application; it supplies the execution support with the information it needs for control. In this multitasking approach, control is distributed and encapsulated inside each real-time object. Three complementary levels of control have been defined: the state level (defining the capability of an object to treat an operation), the concurrency level (ensuring coherence among the object attributes), and a scheduling control (allocating processor resources to the object while taking real-time constraints into account). The proposed control architecture, named OROS, manages access to the attributes of each object individually, so it can parallelize treatments that do not access the same data. This architecture provides dynamic control of an application that can benefit from the parallelism of modern machines, both for execution and for the control itself. It uses only the simplest primitives of industrial real-time operating systems, which ensures its feasibility and portability. (author)

  7. Meta-heuristic algorithms for parallel identical machines scheduling problem with weighted late work criterion and common due date.

    Science.gov (United States)

    Xu, Zhenzhen; Zou, Yongxing; Kong, Xiangjie

    2015-01-01

    To our knowledge, this paper investigates the first application of meta-heuristic algorithms to the parallel machines scheduling problem with weighted late work criterion and a common due date. The late work criterion is a performance measure that considers the length of the late parts of particular jobs when evaluating the quality of a schedule. Since this problem is known to be NP-hard, three meta-heuristic algorithms, namely ant colony system, genetic algorithm, and simulated annealing, are designed and implemented. We also propose a novel algorithm named LDF (largest density first), which improves on LPT (longest processing time first). Computational experiments compare these meta-heuristic algorithms with LDF, LPT, and LS (list scheduling), and the experimental results show that SA performs best in most cases. However, LDF is better than SA in some conditions; moreover, the running time of LDF is much shorter than that of SA.
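
    Both LPT and, on our reading, LDF are list-scheduling rules that differ only in the job order; the sketch below makes that concrete, with "density" taken as weight per unit of processing time, which is an assumption since the paper's exact definition is not quoted here.

```python
# Sketch of LPT and an LDF-style variant as list-scheduling rules that
# differ only in how the jobs are ordered before dispatch.
def dispatch(order, p, m):
    """Dispatch jobs in the given order, each to the least-loaded
    machine; return each job's completion time."""
    loads = [0.0] * m
    completion = {}
    for j in order:
        i = min(range(m), key=loads.__getitem__)
        loads[i] += p[j]
        completion[j] = loads[i]
    return completion

def lpt(p, w, m):
    # Longest Processing Time first.
    return dispatch(sorted(range(len(p)), key=lambda j: -p[j]), p, m)

def ldf(p, w, m):
    # Largest Density First: assumed density = weight per unit time.
    return dispatch(sorted(range(len(p)), key=lambda j: -w[j] / p[j]), p, m)

p, w = [4.0, 2.0, 3.0, 1.0], [1.0, 5.0, 2.0, 4.0]
print(lpt(p, w, 2), ldf(p, w, 2))
```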

  8. Multi-objective problem of the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints

    Science.gov (United States)

    Amallynda, I.; Santosa, B.

    2017-11-01

    This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints, referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. In this generalization, we assume that there is a set of non-identical factories or production lines, each one with a set of unrelated parallel machines with different processing speeds, arranged in series with a single assembly machine. A set of different products is manufactured through an assembly program from a set of components (jobs) according to the requested demand, and each product requires several kinds of jobs of different sizes. We also consider the multi-objective problem (MOP) of simultaneously minimizing the mean flow time and the number of tardy products. The problem is known to be NP-hard and is important in practice, as these criteria reflect the customer's demand and the manufacturer's perspective. Because this is a realistic and complex problem with a wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. Various parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of the Taguchi technique. All proposed algorithms are tested in Matlab. Our computational experiments indicate that the proposed algorithms can be implemented for moderately-sized instances and give efficient solutions that are close to optimal in most cases.

  9. Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm

    Science.gov (United States)

    Povitsky, A.

    1998-01-01

    In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward step computations immediately after the completion of the forward step computations for the first portion of lines. This algorithm makes data available for other computational tasks while processors would otherwise be idle in the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, where local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty, and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty by about a factor of two relative to the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
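
    For reference, the serial Thomas algorithm that the pipelined reformulation accelerates is the classic tridiagonal solve below (standard textbook form, not the paper's parallel code).

```python
# Serial Thomas algorithm for a tridiagonal system: the per-line kernel
# whose forward and backward sweeps the pipelined variant overlaps.
def thomas(a, b, c, d):
    """Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i],
    with a[0] = 0 and c[n-1] = 0."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                  # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):         # backward substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# e.g. solve a small diagonally dominant system
print(thomas([0, 1, 1], [4, 4, 4], [1, 1, 0], [5, 6, 5]))
```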

  10. A Three-Stage Optimization Algorithm for the Stochastic Parallel Machine Scheduling Problem with Adjustable Production Rates

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2013-01-01

    We consider a parallel machine scheduling problem with random processing/setup times and adjustable production rates. The objective function to be minimized consists of two parts: the first is related to due date performance (i.e., the tardiness of the jobs), while the second is related to the setting of machine speeds. Therefore, the decision variables include both the production schedule (sequences of jobs) and the production rate of each machine. The optimization process, however, is significantly complicated by the stochastic factors in the manufacturing system. To address the difficulty, a simulation-based three-stage optimization framework is presented in this paper for finding high-quality robust solutions to the integrated scheduling problem. The first stage (crude optimization) is based on ordinal optimization theory, the second stage (finer optimization) is implemented with a metaheuristic called differential evolution, and the third stage (fine-tuning) is characterized by a perturbation-based local search. Finally, computational experiments are conducted to verify the effectiveness of the proposed approach. Sensitivity analysis and practical implications are also discussed.
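
    A minimal DE/rand/1/bin skeleton, the kind of metaheuristic used in the second stage, is sketched below; the population size, F, CR, and bound handling are generic defaults, not the paper's settings.

```python
# Generic differential evolution (DE/rand/1/bin) over box constraints.
import random

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, gens=200):
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = random.sample([k for k in range(pop_size) if k != i], 3)
            jrand = random.randrange(dim)        # force one mutated gene
            trial = [
                min(max(pop[r1][j] + F * (pop[r2][j] - pop[r3][j]),
                        bounds[j][0]), bounds[j][1])
                if (random.random() < CR or j == jrand) else pop[i][j]
                for j in range(dim)
            ]
            tc = f(trial)
            if tc <= cost[i]:                    # greedy one-to-one selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# e.g. minimize a sphere function over [-5, 5]^4
print(differential_evolution(lambda x: sum(v * v for v in x), [(-5, 5)] * 4))
```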

  11. Preemptive scheduling with rejection

    NARCIS (Netherlands)

    Hoogeveen, H.; Skutella, M.; Woeginger, Gerhard

    2003-01-01

    We consider the problem of preemptively scheduling a set of n jobs on m (identical, uniformly related, or unrelated) parallel machines. The scheduler may reject a subset of the jobs and thereby incur job-dependent penalties for each rejected job, and he must construct a schedule for the remaining jobs.

  12. Preemptive scheduling with rejection

    NARCIS (Netherlands)

    Hoogeveen, J.A.; Skutella, M.; Woeginger, G.J.; Paterson, M.

    2000-01-01

    We consider the problem of preemptively scheduling a set of n jobs on m (identical, uniformly related, or unrelated) parallel machines. The scheduler may reject a subset of the jobs and thereby incur job-dependent penalties for each rejected job, and he must construct a schedule for the remaining jobs.

  13. Device-independent parallel self-testing of two singlets

    Science.gov (United States)

    Wu, Xingyao; Bancal, Jean-Daniel; McKague, Matthew; Scarani, Valerio

    2016-06-01

    Device-independent self-testing offers the possibility of certifying the quantum state and measurements, up to local isometries, using only the statistics observed by querying uncharacterized local devices. In this paper we study parallel self-testing of two maximally entangled pairs of qubits; in particular, the local tensor product structure is not assumed but derived. We prove two criteria that achieve the desired result: a double use of the Clauser-Horne-Shimony-Holt inequality and the 3×3 magic square game. This demonstrates that the magic square game can only be perfectly won by measuring a two-singlet state. The tolerance to noise is well within reach of state-of-the-art experiments.

  14. Prophylactic treatment with a potent corticosteroid cream ameliorates radiodermatitis, independent of radiation schedule

    DEFF Research Database (Denmark)

    Ulff, Eva; Maroti, Marianne; Serup, Jörgen

    2017-01-01

    BACKGROUND AND PURPOSE: The study will test the hypothesis that preventive topical steroid treatment instituted from the start of radiotherapy can ameliorate acute radiation dermatitis. Subgroups at increased risk of dermatitis are included. MATERIAL AND METHODS: A double blinded randomized trial... schedules as well as for anatomical sites, skin type, breast size and BMI. Patients treated the irradiated area during the radiation period and two weeks following cessation of radiation. RESULTS: Patients receiving hypofraction RT developed less skin reactions than those treated with conventional RT... of acute radiation dermatitis in breast cancer patients treated with adjuvant RT, independent of RT schedule. Preventive application of a potent corticosteroid cream should be used in the routine and instituted at the start of RT.

  15. MIP Models and Hybrid Algorithms for Simultaneous Job Splitting and Scheduling on Unrelated Parallel Machines

    Science.gov (United States)

    Ozmutlu, H. Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms. PMID:24977204
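
    The random-key encoding mentioned here is a standard device in genetic algorithms for scheduling: give each job a real-valued gene whose integer part selects a machine and whose fractional part, when sorted, orders the jobs. A minimal decoding sketch under these common conventions (not necessarily the paper's exact encoding):

    ```python
    import numpy as np

    def decode_random_keys(keys, num_machines):
        """Decode random keys in [0, num_machines) into a schedule:
        the integer part assigns each job to a machine, and sorting the
        fractional parts orders the jobs on each machine."""
        machine = keys.astype(int)            # integer part -> machine index
        order = np.argsort(keys - machine)    # fractional part -> sequence
        schedule = {m: [] for m in range(num_machines)}
        for job in order:
            schedule[machine[job]].append(job)
        return schedule

    # Example: 6 jobs on 2 machines.
    rng = np.random.default_rng(1)
    keys = rng.uniform(0, 2, size=6)
    print(decode_random_keys(keys, 2))
    ```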

  16. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    Science.gov (United States)

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms.

  17. On synchronous parallel computations with independent probabilistic choice

    International Nuclear Information System (INIS)

    Reif, J.H.

    1984-01-01

    This paper introduces probabilistic choice to synchronous parallel machine models, in particular parallel RAMs. The power of probabilistic choice in parallel computations is illustrated by parallelizing some known probabilistic sequential algorithms. The authors characterize the computational complexity of time, space, and processor bounded probabilistic parallel RAMs in terms of the computational complexity of probabilistic sequential RAMs. They show that parallelism uniformly speeds up time bounded probabilistic sequential RAM computations by nearly a quadratic factor. They also show that probabilistic choice can be eliminated from parallel computations by introducing nonuniformity

  18. A FAST AND ELITIST BI-OBJECTIVE EVOLUTIONARY ALGORITHM FOR SCHEDULING INDEPENDENT TASKS ON HETEROGENEOUS SYSTEMS

    Directory of Open Access Journals (Sweden)

    G.Subashini

    2010-07-01

    Full Text Available To meet increasing computational demands, geographically distributed resources need to be logically coupled to make them work as a unified resource. In analyzing the performance of such distributed heterogeneous computing systems, scheduling a set of tasks to the available set of resources for execution is highly important. Task scheduling being an NP-complete problem, the use of metaheuristics is more appropriate for obtaining near-optimal solutions. Schedules thus obtained can be evaluated using several criteria that may conflict with one another, which requires a multi-objective problem formulation. This paper investigates the application of an elitist Nondominated Sorting Genetic Algorithm (NSGA-II) to efficiently schedule a set of independent tasks in a heterogeneous distributed computing system. The objectives considered in this paper include minimizing makespan and average flowtime simultaneously. The implementation of the NSGA-II algorithm and a Weighted-Sum Genetic Algorithm (WSGA) has been tested on benchmark instances for distributed heterogeneous systems. As NSGA-II generates a set of Pareto optimal solutions, to verify the effectiveness of NSGA-II over WSGA a fuzzy based membership value assignment method is employed to choose the best compromise solution from the obtained Pareto solution set.
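
    The fuzzy membership method used to pick the best compromise from the Pareto set is commonly defined as follows (a standard formulation; the paper's details may differ). Each objective value of Pareto solution k is mapped to a membership degree, and the solution with the largest normalized total membership is chosen:

    ```latex
    \mu_i^k =
    \begin{cases}
      1, & f_i^k \le f_i^{\min},\\[4pt]
      \dfrac{f_i^{\max} - f_i^k}{f_i^{\max} - f_i^{\min}}, & f_i^{\min} < f_i^k < f_i^{\max},\\[4pt]
      0, & f_i^k \ge f_i^{\max},
    \end{cases}
    \qquad
    \mu^k = \frac{\sum_{i=1}^{N_{\text{obj}}} \mu_i^k}
                 {\sum_{k=1}^{M} \sum_{i=1}^{N_{\text{obj}}} \mu_i^k}
    ```

    where M is the number of Pareto solutions; the best compromise is the solution maximizing the normalized membership.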

  19. Approximating Preemptive Stochastic Scheduling

    OpenAIRE

    Megow Nicole; Vredeveld Tjark

    2009-01-01

    We present constant approximative policies for preemptive stochastic scheduling. We derive policies with a guaranteed performance ratio of 2 for scheduling jobs with release dates on identical parallel machines subject to minimizing the sum of weighted completion times. Our policies as well as their analysis apply also to the recently introduced more general model of stochastic online scheduling. The performance guarantee we give matches the best result known for the corresponding determinist...

  20. CMS multicore scheduling strategy

    International Nuclear Information System (INIS)

    Yzquierdo, Antonio Pérez-Calero; Hernández, Jose; Holzman, Burt; Majewski, Krista; McCrea, Alison

    2014-01-01

    In the coming years, processor architectures based on much larger numbers of cores will most likely be the model that continues 'Moore's Law' style throughput gains. This not only results in many more of the LHC Run 1 era monolithic applications running in parallel, but the memory requirements of these processes also push worker-node architectures to the limit. One solution is parallelizing the application itself, through forking and memory sharing or through threaded frameworks. CMS is following all of these approaches and has a comprehensive strategy to schedule multicore jobs on the GRID based on the glideinWMS submission infrastructure. The main component of the scheduling strategy, a pilot-based model with dynamic partitioning of resources that allows the transition to multicore or whole-node scheduling without disallowing the use of single-core jobs, is described. This contribution also presents the experience gained with the proposed multicore scheduling schema and gives an outlook of further developments working towards the restart of the LHC in 2015.

  1. An Evaluation of Parallel Job Scheduling for ASCI Blue-Pacific

    International Nuclear Information System (INIS)

    Franke, H.; Jann, J.; Moreira, J.; Pattnaik, P.; Jette, M.

    1999-01-01

    In this paper we analyze the behavior of a gang-scheduling strategy that we are developing for the ASCI Blue-Pacific machines. Using actual job logs for one of the ASCI machines, we generate a statistical model of the current workload with hyper-Erlang distributions. We then vary the parameters of those distributions to generate various workloads, representative of different operating points of the machine. Through simulation we obtain performance parameters for three different scheduling strategies: (i) first-come first-serve, (ii) gang-scheduling, and (iii) backfilling. Our results show that backfilling can be very effective for the common operating points in the 60-70% utilization range. However, for higher utilization rates, time-sharing techniques such as gang-scheduling offer much better performance
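
    Backfilling, which the study finds effective up to roughly 70% utilization, lets later jobs jump ahead in the queue as long as they cannot delay the reservation held by the first job that does not fit. A minimal sketch of one pass of EASY-style backfilling (variable names and details are ours; production schedulers differ):

    ```python
    def easy_backfill(queue, running, free_nodes, now):
        """One pass of EASY backfilling. Mutates `queue` (FCFS list of
        (job_id, nodes, est_runtime)) and `running` (list of
        (nodes, finish_time)); returns the job_ids started in this pass."""
        started = []
        # Start jobs from the head of the queue while they fit.
        while queue and queue[0][1] <= free_nodes:
            job_id, nodes, est = queue.pop(0)
            running.append((nodes, now + est))
            free_nodes -= nodes
            started.append(job_id)
        if not queue:
            return started
        # Reservation (shadow time) for the blocked head job: the earliest
        # time at which enough nodes will be free.
        head_nodes = queue[0][1]
        avail, shadow = free_nodes, now
        for nodes, finish in sorted(running, key=lambda r: r[1]):
            avail += nodes
            shadow = finish
            if avail >= head_nodes:
                break
        extra = avail - head_nodes  # nodes left over even at the reservation
        # Backfill later jobs that cannot delay the reservation.
        for job in queue[1:]:
            job_id, nodes, est = job
            if nodes <= free_nodes and (now + est <= shadow or nodes <= extra):
                queue.remove(job)
                running.append((nodes, now + est))
                free_nodes -= nodes
                if now + est > shadow:   # this job holds nodes past the shadow time
                    extra -= nodes
                started.append(job_id)
        return started
    ```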

  2. Approximation algorithms for the parallel flow shop problem

    NARCIS (Netherlands)

    X. Zhang (Xiandong); S.L. van de Velde (Steef)

    2012-01-01

    textabstractWe consider the NP-hard problem of scheduling n jobs in m two-stage parallel flow shops so as to minimize the makespan. This problem decomposes into two subproblems: assigning the jobs to parallel flow shops; and scheduling the jobs assigned to the same flow shop by use of Johnson's

  3. A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications

    Science.gov (United States)

    Povitsky, Alex; Morris, Philip J.

    1999-01-01

    In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm, processors are idle due to the forward and backward recurrences of the Thomas algorithm. To utilize processors during this time, we propose to use them for either non-local data-independent computations, solving lines in the next spatial direction, or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The obtained parallelization speed-up of the novel algorithm is about twice that of the standard pipelined algorithm and close to that of the explicit DRP algorithm.

  4. Parallel generation of architecture on the GPU

    KAUST Repository

    Steinberger, Markus

    2014-05-01

    In this paper, we present a novel approach for the parallel evaluation of procedural shape grammars on the graphics processing unit (GPU). Unlike previous approaches that are either limited in the kind of shapes they allow, the amount of parallelism they can take advantage of, or both, our method supports state of the art procedural modeling including stochasticity and context-sensitivity. To increase parallelism, we explicitly express independence in the grammar, reduce inter-rule dependencies required for context-sensitive evaluation, and introduce intra-rule parallelism. Our rule scheduling scheme avoids unnecessary back and forth between CPU and GPU and reduces round trips to slow global memory by dynamically grouping rules in on-chip shared memory. Our GPU shape grammar implementation is multiple orders of magnitude faster than the standard in CPU-based rule evaluation, while offering equal expressive power. In comparison to the state of the art in GPU shape grammar derivation, our approach is nearly 50 times faster, while adding support for geometric context-sensitivity. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  5. Parallel Task Processing on a Multicore Platform in a PC-based Control System for Parallel Kinematics

    Directory of Open Access Journals (Sweden)

    Harald Michalik

    2009-02-01

    Full Text Available Multicore platforms are those that have one physical processor chip with multiple cores interconnected via a chip-level bus. Because they deliver greater computing power through concurrency and offer greater system density, multicore platforms provide the best qualifications to address the performance bottleneck encountered in PC-based control systems for parallel kinematic robots with heavy CPU load. Heavy-load control tasks are generated by new control approaches that include features like singularity prediction, structure control algorithms, vision data integration and similar tasks. In this paper we introduce the parallel task scheduling extension of a communication architecture specially tailored for the development of PC-based control of parallel kinematics. The scheduling is specially designed for processing on a multicore platform. It breaks down the serial task processing of the robot control cycle and extends it with parallel task processing paths in order to enhance the overall control performance.

  6. Characterizing and Mitigating Work Time Inflation in Task Parallel Programs

    Directory of Open Access Journals (Sweden)

    Stephen L. Olivier

    2013-01-01

    Full Text Available Task parallelism raises the level of abstraction in shared memory parallel programming to simplify the development of complex applications. However, task parallel applications can exhibit poor performance due to thread idleness, scheduling overheads, and work time inflation – additional time spent by threads in a multithreaded computation beyond the time required to perform the same work in a sequential computation. We identify the contributions of each factor to lost efficiency in various task parallel OpenMP applications and diagnose the causes of work time inflation in those applications. Increased data access latency can cause significant work time inflation in NUMA systems. Our locality framework for task parallel OpenMP programs mitigates this cause of work time inflation. Our extensions to the Qthreads library demonstrate that locality-aware scheduling can improve performance up to 3X compared to the Intel OpenMP task scheduler.

  7. Parallel Array Bistable Stochastic Resonance System with Independent Input and Its Signal-to-Noise Ratio Improvement

    Directory of Open Access Journals (Sweden)

    Wei Li

    2014-01-01

    with independent components and averaged output; second, we give a derivation of the output signal-to-noise ratio (SNR) for this system to show its performance. Our examples show the enhancement achieved by the system and how different parameters influence the performance of the proposed parallel array.

  8. Peer-to-peer Cooperative Scheduling Architecture for National Grid Infrastructure

    Science.gov (United States)

    Matyska, Ludek; Ruda, Miroslav; Toth, Simon

    For some ten years, the Czech National Grid Infrastructure MetaCentrum has used a single central PBSPro installation to schedule jobs across the country. This centralized approach keeps full track of all the clusters, providing support for jobs spanning several sites, an implementation of the fair-share policy, and better overall control of the grid environment. Despite steady progress in stability and resilience to intermittent very short network failures, the growing number of sites and processors makes this architecture, with its single point of failure and scalability limits, obsolete. As a result, a new scheduling architecture is proposed, which relies on higher autonomy of clusters. It is based on a peer-to-peer network of semi-independent schedulers for each site or even cluster. Each scheduler accepts jobs for the whole infrastructure, cooperating with other schedulers on the implementation of global policies like central job accounting, fair-share, or submission of jobs across several sites. The scheduling system is integrated with the Magrathea system to support scheduling of virtual clusters, including the setup of their internal network, again possibly spanning several sites. On the other hand, each scheduler is local to one of several clusters and is able to directly control and submit jobs to them even if the connection to the other scheduling peers is lost. In parallel to the change of the overall architecture, the scheduling system itself is being replaced. Instead of PBSPro, chosen originally for its declared support of large-scale distributed environments, the new scheduling architecture is based on the open-source Torque system. The implementation and support for the most desired properties in PBSPro and Torque are discussed, and the necessary modifications to Torque to support the MetaCentrum scheduling architecture are presented, too.

  9. Unit-time scheduling problems with time dependent resources

    NARCIS (Netherlands)

    Tautenhahn, T.; Woeginger, G.

    1997-01-01

    We investigate the computational complexity of scheduling problems, where the operations consume certain amounts of renewable resources which are available in time-dependent quantities. In particular, we consider unit-time open shop problems and unit-time scheduling problems with identical parallel machines.

  10. Independent tasks scheduling in cloud computing via improved estimation of distribution algorithm

    Science.gov (United States)

    Sun, Haisheng; Xu, Rui; Chen, Huaping

    2018-04-01

    To minimize makespan for scheduling independent tasks in cloud computing, an improved estimation of distribution algorithm (IEDA) is proposed to tackle the investigated problem in this paper. Considering that the problem is a multi-dimensional discrete problem, an improved population-based incremental learning (PBIL) algorithm is applied, in which the parameter for each component is independent of the other components. In order to improve the performance of PBIL, on the one hand, an integer encoding scheme is used and the probability calculation of PBIL is improved by using the task average processing time; on the other hand, an effective adaptive learning rate function related to the number of iterations is constructed to trade off the exploration and exploitation of IEDA. In addition, both enhanced Max-Min and Min-Min algorithms are introduced to form two initial individuals. In the proposed IEDA, an improved genetic algorithm (IGA) is applied to generate part of the initial population by evolving the two initial individuals, and the rest of the initial individuals are generated at random. Finally, the sampling process is divided into two parts, sampling by the probabilistic model and by IGA, respectively. The experimental results show that the proposed IEDA not only obtains better solutions but also converges faster.
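
    The PBIL core referenced here maintains, for each task, a probability distribution over machines and nudges it toward the best assignments sampled so far. A minimal sketch of that loop for makespan minimization (the encoding and the particular adaptive learning-rate function are our assumptions, not the paper's exact design):

    ```python
    import numpy as np

    def pbil_schedule(proc_time, pop_size=50, iters=200, lr0=0.1):
        """Minimal PBIL for assigning tasks to machines to minimize makespan.
        proc_time[i, j] = processing time of task i on machine j."""
        rng = np.random.default_rng(0)
        n_tasks, n_machines = proc_time.shape
        prob = np.full((n_tasks, n_machines), 1.0 / n_machines)
        best_x, best_f = None, np.inf
        for t in range(iters):
            lr = lr0 * (1 + t / iters)  # adaptive rate growing with iterations
            # Sample a population of assignments from the probability model.
            pop = np.array([[rng.choice(n_machines, p=prob[i])
                             for i in range(n_tasks)] for _ in range(pop_size)])
            makespans = np.array([
                np.bincount(x, weights=proc_time[np.arange(n_tasks), x],
                            minlength=n_machines).max() for x in pop])
            elite = pop[makespans.argmin()]
            if makespans.min() < best_f:
                best_f, best_x = makespans.min(), elite.copy()
            # Shift each task's distribution toward the elite assignment.
            for i in range(n_tasks):
                prob[i] *= (1 - lr)
                prob[i, elite[i]] += lr
        return best_x, best_f

    # Example: 20 tasks on 4 machines with random processing times.
    data = np.random.default_rng(42).uniform(1, 10, size=(20, 4))
    print(pbil_schedule(data))
    ```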

  11. An economic lot and delivery scheduling problem with the fuzzy shelf life in a flexible job shop with unrelated parallel machines

    Directory of Open Access Journals (Sweden)

    S. Dousthaghi

    2012-08-01

    Full Text Available This paper considers an economic lot and delivery scheduling problem (ELDSP) in a fuzzy environment with a fuzzy shelf life for each product. The problem is formulated in a flexible job shop with unrelated parallel machines over a finite planning horizon, and it determines lot sizing, scheduling and sequencing simultaneously. The proposed model of this paper is based on the basic period (BP) approach. A mixed-integer nonlinear programming (MINLP) model is presented and then converted into two models under the fuzzy shelf life. The main model depends on multiple basic periods, and the resulting model is difficult to solve for large-scale problems in a reasonable amount of time; thus, an efficient heuristic method is proposed to solve the problem. The performance of the proposed model is demonstrated using some numerical examples.

  12. JIST: Just-In-Time Scheduling Translation for Parallel Processors

    Directory of Open Access Journals (Sweden)

    Giovanni Agosta

    2005-01-01

    Full Text Available The application fields of bytecode virtual machines and VLIW processors overlap in the area of embedded and mobile systems, where the two technologies offer different benefits, namely high code portability, low power consumption and reduced hardware cost. Dynamic compilation makes it possible to bridge the gap between the two technologies, but special attention must be paid to software instruction scheduling, a must for the VLIW architectures. We have implemented JIST, a Virtual Machine and JIT compiler for Java Bytecode targeted to a VLIW processor. We show the impact of various optimizations on the performance of code compiled with JIST through an experimental study on a set of benchmark programs. We report significant speedups, and increments in the number of instructions issued per cycle of up to 50% with respect to the non-scheduling version of the JIT compiler. Further optimizations are discussed.

  13. Performance analysis of job scheduling policies in parallel supercomputing environments

    Energy Technology Data Exchange (ETDEWEB)

    Naik, V.K.; Squillante, M.S. [IBM T.J. Watson Research Center, Yorktown Heights, NY (United States); Setia, S.K. [George Mason Univ., Fairfax, VA (United States). Dept. of Computer Science

    1993-12-31

    In this paper the authors analyze three general classes of scheduling policies under a workload typical of large-scale scientific computing. These policies differ in the manner in which processors are partitioned among the jobs as well as the way in which jobs are prioritized for execution on the partitions. Their results indicate that existing static schemes do not perform well under varying workloads. Adaptive policies tend to make better scheduling decisions, but their ability to adjust to workload changes is limited. Dynamic partitioning policies, on the other hand, yield the best performance and can be tuned to provide desired performance differences among jobs with varying resource demands.

  14. Scheduling preemptable jobs on identical processors under varying availability of an additional continuous resource

    Directory of Open Access Journals (Sweden)

    Różycki Rafał

    2016-09-01

    Full Text Available In this work we consider a problem of scheduling preemptable, independent jobs, characterized by the fact that their processing speeds depend on the amounts of a continuous, renewable resource allocated to them at a time. Jobs are scheduled on parallel, identical machines, with the criterion of minimization of the schedule length. Since two categories of resources occur in the problem, discrete (the set of machines) and continuous, it is generally called a discrete-continuous scheduling problem. The model studied in this paper allows the total available amount of the continuous resource to vary over time, which is a practically important generalization that has not yet been considered for discrete-continuous scheduling problems. For this model we give some properties of optimal schedules, on the basis of which we propose a general methodology for solving the considered class of problems. The methodology uses a two-phase approach in which, firstly, an assignment of machines to jobs is defined and, secondly, for this assignment an optimal continuous resource allocation is found by solving an appropriate mathematical programming problem. In the approach various cases are considered, following from assumptions made on the form of the processing speed functions of jobs. For each case an iterative algorithm is designed, leading to an optimal solution in a finite number of steps.

  15. Optimal load scheduling in commercial and residential microgrids

    Science.gov (United States)

    Ganji Tanha, Mohammad Mahdi

    Residential and commercial electricity customers use more than two thirds of the total energy consumed in the United States, representing a significant resource for demand response. Price-based demand response, which is in response to changes in electricity prices, represents the adjustment of load through optimal load scheduling (OLS). In this study, an efficient model for OLS is developed for residential and commercial microgrids which include aggregated loads in single units and communal loads. Single-unit loads, which include fixed, adjustable and shiftable loads, are controllable by the unit occupants. Communal loads, which include pool pumps, elevators and central heating/cooling systems, are shared among the units. In order to optimally schedule residential and commercial loads, a community-based optimal load scheduling (CBOLS) approach is proposed in this thesis. The CBOLS schedule considers hourly market prices, occupants' comfort level, and microgrid operation constraints. The CBOLS objective in residential and commercial microgrids is the constrained minimization of the total cost of supplying the aggregator load, defined as the microgrid load minus the microgrid generation. This problem is represented by a large-scale mixed-integer optimization for supplying single-unit and communal loads. The Lagrangian relaxation methodology is used to relax the linking communal-load constraint and decompose the problem into independent single-unit subproblems which can be solved in parallel. The optimal solution is acceptable if the aggregator load limit and the duality gap are within the bounds. If either of the proposed criteria is not satisfied, the Lagrangian multiplier is updated and a new optimal load schedule is generated until both constraints are satisfied. The proposed method is applied to several case studies and the results are presented for the Galvin Center load on the 16th floor of the IIT Tower in Chicago.
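
    The decomposition described here is the classical Lagrangian relaxation pattern: dualize the linking (communal-load) constraint, solve the now-independent per-unit subproblems in parallel, and update the multiplier by a subgradient step until the duality gap is acceptable. A generic sketch of that outer loop (all function and variable names are illustrative, not from the thesis):

    ```python
    def lagrangian_loop(solve_unit, units, limit, step0=1.0, iters=100, tol=1e-3):
        """Generic subgradient loop for a single dualized coupling constraint
        sum_u load_u(x_u) <= limit. `solve_unit(u, lam)` returns the unit's
        optimal (cost, load) for multiplier lam."""
        lam = 0.0
        results = []
        for k in range(1, iters + 1):
            # With the coupling constraint dualized, the units decouple and
            # could be solved in parallel.
            results = [solve_unit(u, lam) for u in units]
            total_load = sum(load for _, load in results)
            violation = total_load - limit        # subgradient of the dual
            if abs(violation) <= tol:
                break
            lam = max(0.0, lam + (step0 / k) * violation)  # projected step
        return lam, results

    # Toy example: unit u gains value vals[u] for one unit of shared capacity.
    vals = [1, 2, 3, 4, 5, 6]
    solve = lambda u, lam: ((-vals[u] + lam, 1) if lam < vals[u] else (0.0, 0))
    lam, results = lagrangian_loop(solve, units=range(6), limit=3)
    ```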

  16. Solving no-wait two-stage flexible flow shop scheduling problem with unrelated parallel machines and rework time by the adjusted discrete Multi Objective Invasive Weed Optimization and fuzzy dominance approach

    Energy Technology Data Exchange (ETDEWEB)

    Jafarzadeh, Hassan; Moradinasab, Nazanin; Gerami, Ali

    2017-07-01

    An adjusted discrete Multi-Objective Invasive Weed Optimization (DMOIWO) algorithm, which uses a fuzzy dominance approach for ordering, has been proposed to solve the no-wait two-stage flexible flow shop scheduling problem. Design/methodology/approach: The no-wait two-stage flexible flow shop scheduling problem, considering sequence-dependent setup times and probable rework in both stations, different ready times for all jobs, rework times for both stations, and unrelated parallel machines, has been investigated in a multi-objective manner with regard to the simultaneous minimization of the maximum job completion time and the average latency functions. In this study, the parameter setting has been carried out using the Taguchi method based on a quality indicator for better performance of the algorithm. Findings: The results of this algorithm have been compared with those of conventional multi-objective algorithms to show the better performance of the proposed algorithm. The results clearly indicate the greater performance of the proposed algorithm. Originality/value: This study provides an efficient method for solving the multi-objective no-wait two-stage flexible flow shop scheduling problem by considering sequence-dependent setup times, probable rework in both stations, different ready times for all jobs, rework times for both stations and unrelated parallel machines, which are the real constraints.

  17. Solving no-wait two-stage flexible flow shop scheduling problem with unrelated parallel machines and rework time by the adjusted discrete Multi Objective Invasive Weed Optimization and fuzzy dominance approach

    International Nuclear Information System (INIS)

    Jafarzadeh, Hassan; Moradinasab, Nazanin; Gerami, Ali

    2017-01-01

    An adjusted discrete Multi-Objective Invasive Weed Optimization (DMOIWO) algorithm, which uses a fuzzy dominance approach for ordering, has been proposed to solve the no-wait two-stage flexible flow shop scheduling problem. Design/methodology/approach: The no-wait two-stage flexible flow shop scheduling problem, considering sequence-dependent setup times and probable rework in both stations, different ready times for all jobs, rework times for both stations, and unrelated parallel machines, has been investigated in a multi-objective manner with regard to the simultaneous minimization of the maximum job completion time and the average latency functions. In this study, the parameter setting has been carried out using the Taguchi method based on a quality indicator for better performance of the algorithm. Findings: The results of this algorithm have been compared with those of conventional multi-objective algorithms to show the better performance of the proposed algorithm. The results clearly indicate the greater performance of the proposed algorithm. Originality/value: This study provides an efficient method for solving the multi-objective no-wait two-stage flexible flow shop scheduling problem by considering sequence-dependent setup times, probable rework in both stations, different ready times for all jobs, rework times for both stations and unrelated parallel machines, which are the real constraints.

  18. Simple and robust generation of ultrafast laser pulse trains using polarization-independent parallel-aligned thin films

    Science.gov (United States)

    Wang, Andong; Jiang, Lan; Li, Xiaowei; Wang, Zhi; Du, Kun; Lu, Yongfeng

    2018-05-01

    Ultrafast laser pulse temporal shaping has been widely applied in various important applications such as laser materials processing, coherent control of chemical reactions, and ultrafast imaging. However, temporal pulse shaping has remained an in-lab technique due to high cost, low damage threshold, and polarization dependence. Herein we propose a novel design for an ultrafast laser pulse train generation device, which consists of multiple polarization-independent parallel-aligned thin films. Various pulse trains with controllable temporal profiles can be generated flexibly by multiple reflections within the splitting films. Compared with other pulse train generation techniques, this method has the advantages of compact structure, low cost, high damage threshold and polarization independence. These advantages endow it with high potential for broad utilization in ultrafast applications.

  19. Scheduling Additional Train Unit Services on Rail Transit Lines

    OpenAIRE

    Zhibin Jiang; Yuyan Tan; Özgür Yalçınkaya

    2014-01-01

    This paper deals with the problem of scheduling additional train unit (TU) services in a double parallel rail transit line, and a mixed integer programming (MIP) model is formulated for integration strategies of new trains connected by TUs with the objective of obtaining higher frequencies in some special sections and special time periods due to mass passenger volumes. We took timetable scheduling and TU scheduling as an integrated optimization model with two objectives: minimizing travel time...

  20. Are behaviors at one alternative in concurrent schedules independent of contingencies at the other alternative?

    Science.gov (United States)

    MacDonall, James S

    2017-09-01

    Some have reported that changing the schedule at one alternative of a concurrent schedule changed responding at the other alternative (Catania, 1969), which seems odd because no contingencies were changed there. When concurrent schedules are programmed using two schedules, one associated with each alternative, that operate continuously, changing the schedule at one alternative also changes the switch schedule at the other alternative. Thus, changes in responding at the constant alternative could be due to the change in the switch schedule. To assess this possibility, six rats were exposed to a series of conditions that alternated between pairs of interval schedules at both alternatives and a pair of interval schedules at one, constant, alternative and a pair of extinction schedules at the other alternative. Comparing run lengths, visit durations and response rates at the constant alternative across the alternating conditions did not show consistent increases and decreases when a strict criterion for changes was used. Using a less stringent definition (any change in mean values) showed changes. The stay/switch analysis suggests it may be inaccurate to apply behavioral contrast to procedures that change from concurrent variable-interval variable-interval schedules to concurrent variable-interval extinction schedules because the contingencies at neither alternative are constant. © 2017 Society for the Experimental Analysis of Behavior.

  1. The power of reordering for online minimum makespan scheduling

    OpenAIRE

    Englert, Matthias; Özmen, Deniz; Westermann, Matthias

    2014-01-01

    In the classic minimum makespan scheduling problem, we are given an input sequence of jobs with processing times. A scheduling algorithm has to assign the jobs to m parallel machines. The objective is to minimize the makespan, which is the time it takes until all jobs are processed. In this paper, we consider online scheduling algorithms without preemption. However, we do not require that each arriving job has to be assigned immediately to one of the machines. A reordering buffer with limited...
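
    For context, the baseline against which reordering buffers are measured is Graham's online list scheduling, which assigns each arriving job to the currently least-loaded machine and is (2 - 1/m)-competitive for makespan. A minimal sketch (a reordering buffer would sit in front of this loop, choosing which buffered job to release next):

    ```python
    import heapq

    def list_schedule(jobs, m):
        """Graham's online list scheduling: each job goes to the machine
        with the smallest current load. Returns (makespan, assignment)."""
        loads = [(0.0, i) for i in range(m)]   # min-heap of (load, machine)
        heapq.heapify(loads)
        assignment = []
        for p in jobs:                         # jobs arrive one by one
            load, i = heapq.heappop(loads)
            heapq.heappush(loads, (load + p, i))
            assignment.append(i)
        return max(load for load, _ in loads), assignment

    print(list_schedule([2, 3, 5, 1, 4], m=2))  # makespan 8 on 2 machines
    ```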

  2. Massively parallel evolutionary computation on GPGPUs

    CERN Document Server

    Tsutsui, Shigeyoshi

    2013-01-01

    Evolutionary algorithms (EAs) are metaheuristics that learn from natural collective behavior and are applied to solve optimization problems in domains such as scheduling, engineering, bioinformatics, and finance. Such applications demand acceptable solutions with high-speed execution using finite computational resources. Therefore, there have been many attempts to develop platforms for running parallel EAs using multicore machines, massively parallel cluster machines, or grid computing environments. Recent advances in general-purpose computing on graphics processing units (GPGPU) have opened up ...

  3. A comparative critical analysis of modern task-parallel runtimes.

    Energy Technology Data Exchange (ETDEWEB)

    Wheeler, Kyle Bruce; Stark, Dylan; Murphy, Richard C.

    2012-12-01

    The rise in node-level parallelism has increased interest in task-based parallel runtimes for a wide array of application areas. Applications have a wide variety of task spawning patterns which frequently change during the course of application execution, based on the algorithm or solver kernel in use. Task scheduling and load balancing regimes, however, are often highly optimized for specific patterns. This paper uses four basic task spawning patterns to quantify the impact of specific scheduling policy decisions on execution time. We compare the behavior of six publicly available tasking runtimes: Intel Cilk, Intel Threading Building Blocks (TBB), Intel OpenMP, GCC OpenMP, Qthreads, and High Performance ParalleX (HPX). With the exception of Qthreads, the runtimes prove to have schedulers that are highly sensitive to application structure. No runtime is able to provide the best performance in all cases, and those that do provide the best performance in some cases, unfortunately, provide extremely poor performance when application structure does not match the scheduler's assumptions.

  4. Optimal Temporal Decoupling in Task Scheduling with Preferences

    NARCIS (Netherlands)

    Endhoven, L.; Klos, T.B.; Witteveen, C.

    2011-01-01

    Multi-agent planning and scheduling concerns finding a joint plan to achieve some set of common goals with several independent agents, each aiming to find a plan or schedule for their part of the goals. To avoid conflicts in these individual plans or schedules, decoupling is used. Such a decoupling

  5. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  6. Complexity of preemptive minsum scheduling on unrelated parallel machines

    NARCIS (Netherlands)

    Sitters, R.A.

    2005-01-01

    We show that the problems of minimizing total completion time and of minimizing the number of late jobs on unrelated parallel machines, when preemption is allowed, are both NP-hard in the strong sense. The former result settles a long-standing open question and is remarkable since the non-preemptive

  7. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  8. Massively Parallel Dimension Independent Adaptive Metropolis

    KAUST Repository

    Chen, Yuxin

    2015-01-01

    parameter dimension, by respecting the variance, for Gaussian targets. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets.

  9. Language constructs for modular parallel programs

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.

    1996-03-01

    We describe programming language constructs that facilitate the application of modular design techniques in parallel programming. These constructs allow us to isolate resource management and processor scheduling decisions from the specification of individual modules, which can themselves encapsulate design decisions concerned with concurrency, communication, process mapping, and data distribution. This approach permits development of libraries of reusable parallel program components and the reuse of these components in different contexts. In particular, alternative mapping strategies can be explored without modifying other aspects of program logic. We describe how these constructs are incorporated in two practical parallel programming languages, PCN and Fortran M. Compilers have been developed for both languages, allowing experimentation in substantial applications.

  10. Sub-polyhedral scheduling using (unit-)two-variable-per-inequality polyhedra

    OpenAIRE

    Upadrasta , Ramakrishna; Cohen , Albert

    2013-01-01

    Polyhedral compilation has been successful in the design and implementation of complex loop nest optimizers and parallelizing compilers. The algorithmic complexity and scalability limitations remain one important weakness. We address it using sub-polyhedral under-approximations of the systems of constraints resulting from affine scheduling problems. We propose a sub-polyhedral scheduling technique using (Unit-)Two-Variable-Per-Inequality or (U)TVPI Polyhedra. This technique...

  11. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
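
    As an illustration of one of the patterns named here, a prefix scan can be computed in O(log n) parallel steps. The sketch below shows the Hillis-Steele inclusive scan, with NumPy vector operations standing in for each fully parallel step:

    ```python
    import numpy as np

    def inclusive_scan(a):
        """Hillis-Steele inclusive prefix sum: log2(n) sweeps, each of
        which is an elementwise addition that could run fully in parallel."""
        x = np.array(a, dtype=float)
        offset = 1
        while offset < len(x):
            shifted = np.concatenate([np.zeros(offset), x[:-offset]])
            x = x + shifted        # every element updates independently
            offset *= 2
        return x

    print(inclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # [3 4 11 11 15 16 22 25]
    ```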

  12. Target objects defined by a conjunction of colour and shape can be selected independently and in parallel.

    Science.gov (United States)

    Jenkins, Michael; Grubert, Anna; Eimer, Martin

    2017-11-01

    It is generally assumed that during search for targets defined by a feature conjunction, attention is allocated sequentially to individual objects. We tested this hypothesis by tracking the time course of attentional processing biases with the N2pc component in tasks where observers searched for two targets defined by a colour/shape conjunction. In Experiment 1, two displays presented in rapid succession (100 ms or 10 ms SOA) each contained a target and a colour-matching or shape-matching distractor on opposite sides. Target objects in both displays elicited N2pc components of similar size that overlapped in time when the SOA was 10 ms, suggesting that attention was allocated in parallel to both targets. Analogous results were found in Experiment 2, where targets and partially matching distractors were both accompanied by an object without target-matching features. Colour-matching and shape-matching distractors also elicited N2pc components, and the target N2pc was initially identical to the sum of the two distractor N2pcs, suggesting that the initial phase of attentional object selection was guided independently by feature templates for target colour and shape. Beyond 230 ms after display onset, the target N2pc became superadditive, indicating that attentional selection processes now started to be sensitive to the presence of feature conjunctions. Results show that independent attentional selection processes can be activated in parallel by two target objects in situations where these objects are defined by a feature conjunction.

  13. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    Science.gov (United States)

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient data set in 3 hours in parallel, compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines.
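
    The scheduling pattern described here (build a task dependency graph, order it topologically, run ready tasks in parallel) can be sketched compactly with Python's standard graphlib and a thread pool; the five pipeline stage names below merely stand in for PARAMO's Map-Reduce tasks:

    ```python
    from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait
    from graphlib import TopologicalSorter

    def run_pipeline(tasks, deps, workers=4):
        """Execute a task dependency graph in parallel.
        tasks: name -> zero-argument callable
        deps:  name -> set of prerequisite task names"""
        ts = TopologicalSorter(deps)
        ts.prepare()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            pending = {}
            while ts.is_active():
                for name in ts.get_ready():        # tasks whose deps are done
                    pending[pool.submit(tasks[name])] = name
                done, _ = wait(pending, return_when=FIRST_COMPLETED)
                for fut in done:
                    ts.done(pending.pop(fut))      # unlock dependent tasks

    # Hypothetical five-stage modeling pipeline mirroring the task list above.
    stages = {s: (lambda s=s: print("running", s))
              for s in ["cohort", "features", "cv", "selection", "classify"]}
    run_pipeline(stages, {"features": {"cohort"}, "cv": {"features"},
                          "selection": {"cv"}, "classify": {"selection"}})
    ```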

  14. NASA Instrument Cost/Schedule Model

    Science.gov (United States)

    Habib-Agahi, Hamid; Mrozinski, Joe; Fox, George

    2011-01-01

    NASA's Office of Independent Program and Cost Evaluation (IPCE) has established a number of initiatives to improve its cost and schedule estimating capabilities. One of these initiatives has resulted in the JPL-developed NASA Instrument Cost Model. NICM is a cost and schedule estimator that contains: a system level cost estimation tool; a subsystem level cost estimation tool; a database of cost and technical parameters of over 140 previously flown remote sensing and in-situ instruments; a schedule estimator; a set of rules to estimate cost and schedule by life cycle phases (B/C/D); and a novel tool for developing joint probability distributions for cost and schedule risk (Joint Confidence Level (JCL)). This paper describes the development and use of NICM, including the data normalization processes, data mining methods (cluster analysis, principal components analysis, regression analysis and bootstrap cross validation), the estimating equations themselves and a demonstration of the NICM tool suite.

  15. A Model for Speedup of Parallel Programs

    Science.gov (United States)

    1997-01-01

    Sanjeev K. Setia. The interaction between memory allocation and adaptive partitioning in message-passing multicomputers. In IPPS '95 Workshop on Job Scheduling Strategies for Parallel Processing, pages 89-99, 1995. [15] Sanjeev K. Setia and Satish K. Tripathi. A comparative analysis of static

  16. Parallel Volunteer Learning during Youth Programs

    Science.gov (United States)

    Lesmeister, Marilyn K.; Green, Jeremy; Derby, Amy; Bothum, Candi

    2012-01-01

    Lack of time is a hindrance for volunteers to participate in educational opportunities, yet volunteer success in an organization is tied to the orientation and education they receive. Meeting diverse educational needs of volunteers can be a challenge for program managers. Scheduling a Volunteer Learning Track for chaperones that is parallel to a…

  17. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    Energy Technology Data Exchange (ETDEWEB)

    Kostin, Mikhail [Michigan State Univ., East Lansing, MI (United States); Mokhov, Nikolai [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Niita, Koji [Research Organization for Information Science and Technology, Ibaraki-ken (Japan)

    2013-09-25

    A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90 or C. The module is largely independent of the radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes such as PHITS, FLUKA and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.

  18. Design issues in the semantics and scheduling of asynchronous tasks.

    Energy Technology Data Exchange (ETDEWEB)

    Olivier, Stephen L.

    2013-07-01

    The asynchronous task model serves as a useful vehicle for shared memory parallel programming, particularly on multicore and manycore processors. As adoption of the model among programmers has increased, support has emerged for the integration of task parallel language constructs into mainstream programming languages, e.g., C and C++. This paper examines some of the design decisions in Cilk and OpenMP concerning the semantics and scheduling of asynchronous tasks, with the aim of informing the efforts of committees considering language integration, as well as developers of new task parallel languages and libraries.

  19. A Hybrid Task Graph Scheduler for High Performance Image Processing Workflows.

    Science.gov (United States)

    Blattner, Timothy; Keyrouz, Walid; Bhattacharyya, Shuvra S; Halem, Milton; Brady, Mary

    2017-12-01

    Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that improves programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both of the HTGS-based implementations show good performance. In image stitching the HTGS implementation achieves similar performance to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3× and 1.8× speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k size matrices, respectively.

  20. ComprehensiveBench: a Benchmark for the Extensive Evaluation of Global Scheduling Algorithms

    Science.gov (United States)

    Pilla, Laércio L.; Bozzetti, Tiago C.; Castro, Márcio; Navaux, Philippe O. A.; Méhaut, Jean-François

    2015-10-01

    Parallel applications that present tasks with imbalanced loads or complex communication behavior usually do not exploit the underlying resources of parallel platforms to their full potential. In order to mitigate this issue, global scheduling algorithms are employed. As finding the optimal task distribution is an NP-Hard problem, identifying the most suitable algorithm for a specific scenario and comparing algorithms are not trivial tasks. In this context, this paper presents ComprehensiveBench, a benchmark for global scheduling algorithms that enables the variation of a vast range of parameters that affect performance. ComprehensiveBench can be used to assist in the development and evaluation of new scheduling algorithms, to help choose a specific algorithm for an arbitrary application, to emulate other applications, and to enable statistical tests. We illustrate its use in this paper with an evaluation of Charm++ periodic load balancers that stresses their characteristics.

  1. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  2. Distributed project scheduling at NASA: Requirements for manual protocols and computer-based support

    Science.gov (United States)

    Richards, Stephen F.

    1992-01-01

    The increasing complexity of space operations and the inclusion of interorganizational and international groups in the planning and control of space missions lead to requirements for greater communication, coordination, and cooperation among mission schedulers. These schedulers must jointly allocate scarce shared resources among the various operational and mission oriented activities while adhering to all constraints. This scheduling environment is complicated by such factors as the presence of varying perspectives and conflicting objectives among the schedulers, the need for different schedulers to work in parallel, and limited communication among schedulers. Smooth interaction among schedulers requires the use of protocols that govern such issues as resource sharing, authority to update the schedule, and communication of updates. This paper addresses the development and characteristics of such protocols and their use in a distributed scheduling environment that incorporates computer-aided scheduling tools. An example problem is drawn from the domain of Space Shuttle mission planning.

  3. Attentional Selection of Feature Conjunctions Is Accomplished by Parallel and Independent Selection of Single Features.

    Science.gov (United States)

    Andersen, Søren K; Müller, Matthias M; Hillyard, Steven A

    2015-07-08

    Experiments that study feature-based attention have often examined situations in which selection is based on a single feature (e.g., the color red). However, in more complex situations relevant stimuli may not be set apart from other stimuli by a single defining property but by a specific combination of features. Here, we examined sustained attentional selection of stimuli defined by conjunctions of color and orientation. Human observers attended to one out of four concurrently presented superimposed fields of randomly moving horizontal or vertical bars of red or blue color to detect brief intervals of coherent motion. Selective stimulus processing in early visual cortex was assessed by recordings of steady-state visual evoked potentials (SSVEPs) elicited by each of the flickering fields of stimuli. We directly contrasted attentional selection of single features and feature conjunctions and found that SSVEP amplitudes on conditions in which selection was based on a single feature only (color or orientation) exactly predicted the magnitude of attentional enhancement of SSVEPs when attending to a conjunction of both features. Furthermore, enhanced SSVEP amplitudes elicited by attended stimuli were accompanied by equivalent reductions of SSVEP amplitudes elicited by unattended stimuli in all cases. We conclude that attentional selection of a feature-conjunction stimulus is accomplished by the parallel and independent facilitation of its constituent feature dimensions in early visual cortex. The ability to perceive the world is limited by the brain's processing capacity. Attention affords adaptive behavior by selectively prioritizing processing of relevant stimuli based on their features (location, color, orientation, etc.). We found that attentional mechanisms for selection of different features belonging to the same object operate independently and in parallel: concurrent attentional selection of two stimulus features is simply the sum of attending to each of those

  4. Parallel Device-Independent Quantum Key Distribution

    OpenAIRE

    Jain, Rahul; Miller, Carl A.; Shi, Yaoyun

    2017-01-01

    A prominent application of quantum cryptography is the distribution of cryptographic keys with unconditional security. Recently, such security was extended by Vazirani and Vidick (Physical Review Letters, 113, 140501, 2014) to the device-independent (DI) scenario, where the users do not need to trust the integrity of the underlying quantum devices. The protocols analyzed by them and by subsequent authors all require a sequential execution of N multiplayer games, where N is the security parame...

  5. Static Schedulers for Embedded Real-Time Systems

    Science.gov (United States)

    1989-12-01

    Because of the need for efficient scheduling algorithms in large-scale real-time systems, software engineers put a lot of effort into developing... provide static schedulers for the Embedded Real Time Systems with a single processor using the Ada programming language. The independent nonpreemptable... support the Computer Aided Rapid Prototyping for Embedded Real Time Systems so that we can determine whether the system, as designed, meets the required...

  6. Next Generation CANDU: Conceptual Design for a Short Construction Schedule

    International Nuclear Information System (INIS)

    Hopwood, Jerry M.; Love, Ian J.W.; Elgohary, Medhat; Fairclough, Neville

    2002-01-01

    Atomic Energy of Canada Ltd. (AECL) has very successful experience in implementing new construction methods at the Qinshan (Phase III) twin unit CANDU 6 plant in China. This paper examines the construction method that must be implemented during the conceptual design phase of a project if short construction schedules are to be met. A project schedule of 48 months has been developed for the nth unit of NG (Next Generation) CANDU with a 42-month construction period from 1st Concrete to In-Service. An overall construction strategy has been developed involving paralleling project activities that are normally conducted in series. Many parts of the plant will be fabricated as modules and be installed using heavy lift cranes. The Reactor Building (RB), being on the critical path, has been the focus of considerable assessment, looking at alternative ways of applying the construction strategy to this building. A construction method has been chosen which will result in excess of 80% of internal work being completed as modules or as very streamlined traditional construction. This method is being further evaluated as the detailed layout proceeds. Other areas of the plant have been integrated into the schedule and new construction methods are being applied to these so that further modularization and even greater paralleling of activities will be achieved. It is concluded that the optimized construction method is a requirement, which must be implemented through all phases of design to make a 42-month construction schedule a reality. If the construction methods are appropriately chosen, the schedule reductions achieved will make nuclear more competitive. (authors)

  7. A novel approach to analyzing fMRI and SNP data via parallel independent component analysis

    Science.gov (United States)

    Liu, Jingyu; Pearlson, Godfrey; Calhoun, Vince; Windemuth, Andreas

    2007-03-01

    There is current interest in understanding genetic influences on brain function in both the healthy and the disordered brain. Parallel independent component analysis, a new method for analyzing multimodal data, is proposed in this paper and applied to functional magnetic resonance imaging (fMRI) and a single nucleotide polymorphism (SNP) array. The method aims to identify the independent components of each modality and the relationship between the two modalities. We analyzed 92 participants, including 29 schizophrenia (SZ) patients, 13 unaffected SZ relatives, and 50 healthy controls. We found a correlation of 0.79 between one fMRI component and one SNP component. The fMRI component consists of activations in cingulate gyrus, multiple frontal gyri, and superior temporal gyrus. The related SNP component is contributed to significantly by 9 SNPs located in sets of genes, including those coding for apolipoproteins A-I and C-III, malate dehydrogenase 1 and the gamma-aminobutyric acid alpha-2 receptor. A significant difference in the presence of this SNP component is found between the SZ group (SZ patients and their relatives) and the control group. In summary, we constructed a framework to identify the interactions between brain functional and genetic information; our findings provide new insight into understanding genetic influences on brain function in a common mental disorder.
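
    The basic idea can be roughed out as follows, assuming each modality is stored as a subjects-by-features matrix: decompose each modality with ICA, then correlate the subject-wise loadings across modalities to find linked component pairs. This sketch uses scikit-learn's FastICA on random toy data; it omits the constrained coupling between the two decompositions that the authors' parallel ICA performs during estimation.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        fmri = rng.standard_normal((92, 500))  # subjects x fMRI features (toy data)
        snps = rng.standard_normal((92, 300))  # subjects x SNP loci (toy data)

        # Decompose each modality independently; rows are subjects, so the
        # transformed output holds per-subject component loadings.
        load_f = FastICA(n_components=8, max_iter=1000, random_state=0).fit_transform(fmri)
        load_s = FastICA(n_components=8, max_iter=1000, random_state=0).fit_transform(snps)

        # Correlate loadings across modalities to find linked component pairs.
        corr = np.corrcoef(load_f.T, load_s.T)[:8, 8:]
        i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
        print(f"best-linked pair: fMRI comp {i} ~ SNP comp {j}, r = {corr[i, j]:.2f}")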

  8. The robust schedule - A link to improved workflow

    DEFF Research Database (Denmark)

    Lindhard, Søren; Wandahl, Søren

    2012-01-01

    …down the contractors, and force them to rigorously adhere to the initial schedule. If delayed, the work pace or manpower has to be increased to keep to the schedule. In an attempt to improve productivity, three independent site-managers have been interviewed about time-scheduling. Their experiences and opinions have been … analyzed, and weaknesses in existing time-scheduling have been found. The findings showed a negative side effect of keeping the schedule too tight. A too-tight schedule is inflexible and cannot absorb variability in production. Flexibility is necessary because the contractors are interacting and mutually dependent. … The result is a chaotic, complex and uncontrolled construction site. Furthermore, strict time limits entail the workflow being optimized under non-optimal conditions. Even though productivity seems to be increasing, productivity per man-hour is decreasing, resulting in increased cost. To increase productivity …

  9. Solution Approaches for the Parallel Identical Machine Scheduling Problem with Sequence Dependent Setups

    National Research Council Canada - National Science Library

    Anderson, Bradley

    2002-01-01

    ... delivery is an important scheduling objective in the just-in-time (JIT) environment. Items produced too early incur holding costs, while items produced too late incur costs in the form of dissatisfied customers...

  10. An optimal algorithm for preemptive on-line scheduling

    NARCIS (Netherlands)

    Chen, B.; Vliet, van A.; Woeginger, G.J.

    1995-01-01

    We investigate the problem of on-line scheduling jobs on m identical parallel machines where preemption is allowed. The goal is to minimize the makespan. We derive an approximation algorithm with worst-case guarantee m^m/(m^m − (m − 1)^m) for every m ≥ 2, which increasingly tends to e/(e − 1) ≈ 1.58 as m → ∞.
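
    The stated bound is easy to check numerically; the ratio m^m/(m^m − (m − 1)^m) increases with m toward e/(e − 1) (a quick verification, not taken from the paper itself):

        import math

        def ratio(m):
            return m**m / (m**m - (m - 1)**m)

        for m in (2, 3, 5, 10, 100):
            print(m, round(ratio(m), 4))   # 1.3333, 1.4211, ... rising with m
        print("e/(e-1) =", round(math.e / (math.e - 1), 4))  # limit ~1.582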

  11. Decentralization and mechanism design for online machine scheduling

    NARCIS (Netherlands)

    Arge, Lars; Heydenreich, Birgit; Müller, Rudolf; Freivalds, Rusins; Uetz, Marc Jochen

    We study the online version of the classical parallel machine scheduling problem to minimize the total weighted completion time from a new perspective: We assume that the data of each job, namely its release date $r_j$, its processing time $p_j$ and its weight $w_j$ is only known to the job itself,

  12. A proposal simulated annealing algorithm for proportional parallel flow shops with separated setup times

    Directory of Open Access Journals (Sweden)

    Helio Yochihiro Fuchigami

    2014-08-01

    This article addresses the problem of minimizing makespan on two parallel flow shops with proportional processing and setup times. The setup times are separated and sequence-independent. The parallel flow shop scheduling problem is a specific case of the well-known hybrid flow shop, characterized by a multistage production system with more than one machine working in parallel at each stage. This situation is very common in various kinds of companies, such as the chemical, electronics, automotive, pharmaceutical and food industries. This work proposes six simulated annealing algorithms, their perturbation schemes and an algorithm for initial sequence generation. This study can be classified as "applied research" regarding its nature, "exploratory" as to its objectives and "experimental" as to its procedures, besides its "quantitative" approach. The proposed algorithms were effective regarding solution quality and computationally efficient. Results of an analysis of variance (ANOVA) revealed no significant difference between the schemes in terms of makespan. The PS4 scheme, which moves a subsequence of jobs, is suggested, as it provides the best percentage of success. It was also found that there is a significant difference between the results of the algorithms for each value of the proportionality factor of the processing and setup times of the flow shops.
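
    For readers unfamiliar with the metaheuristic, here is a generic simulated annealing skeleton for a permutation scheduling problem. It is a neutral sketch, not one of the six algorithms from the article: the makespan function is a simplified stand-in, and the neighborhood move relocates a short subsequence of jobs in the spirit of the PS4 scheme.

        import math
        import random

        def makespan(seq, ptimes, machines=2):
            """Toy stand-in objective: greedy assignment of the sequence
            to the least-loaded of two identical lines."""
            loads = [0.0] * machines
            for job in seq:
                loads[loads.index(min(loads))] += ptimes[job]
            return max(loads)

        def anneal(ptimes, iters=20000, t0=10.0, alpha=0.9995):
            seq = list(range(len(ptimes)))
            random.shuffle(seq)
            best, best_cost, temp = seq[:], makespan(seq, ptimes), t0
            for _ in range(iters):
                cand = seq[:]
                i, j = sorted(random.sample(range(len(cand)), 2))
                block = cand[i:i + 2]          # move a short subsequence of jobs
                del cand[i:i + 2]
                cand[j:j] = block
                delta = makespan(cand, ptimes) - makespan(seq, ptimes)
                if delta < 0 or random.random() < math.exp(-delta / temp):
                    seq = cand
                if makespan(seq, ptimes) < best_cost:
                    best, best_cost = seq[:], makespan(seq, ptimes)
                temp *= alpha                  # geometric cooling schedule
            return best, best_cost

        print(anneal([random.uniform(1, 10) for _ in range(20)]))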

  13. Parallel simulated annealing algorithms for cell placement on hypercube multiprocessors

    Science.gov (United States)

    Banerjee, Prithviraj; Jones, Mark Howard; Sargent, Jeff S.

    1990-01-01

    Two parallel algorithms for standard cell placement using simulated annealing are developed to run on distributed-memory message-passing hypercube multiprocessors. The cells can be mapped in a two-dimensional area of a chip onto processors in an n-dimensional hypercube in two ways, such that both small and large cell exchange and displacement moves can be applied. The computation of the cost function in parallel among all the processors in the hypercube is described, along with a distributed data structure that needs to be stored in the hypercube to support the parallel cost evaluation. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. A dynamic parallel annealing schedule estimates the errors due to interacting parallel moves and adapts the rate of synchronization automatically. Two novel approaches in controlling error in parallel algorithms are described: heuristic cell coloring and adaptive sequence control.

  14. Logical inference techniques for loop parallelization

    DEFF Research Database (Denmark)

    Oancea, Cosmin Eugen; Rauchwerger, Lawrence

    2012-01-01

    … the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = ∅, where S is a set expression representing array indexes. … a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from the PERFECT-CLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full-program speedups than the Intel and IBM Fortran compilers.

  15. Gain-Scheduled Control of a Fossil-Fired Power Plant Boiler

    DEFF Research Database (Denmark)

    Hangstrup, M.; Stoustrup, Jakob; Andersen, Palle

    1999-01-01

    In this paper the objective is to optimize the control of a coal-fired 250 MW power plant boiler. The conventional control system is supplemented with a multivariable optimizing controller operating in parallel with the conventional control system. Due to the strong dependence of the gains and dynamics upon the load, it is beneficial to consider a gain-scheduling control approach. Optimization using complex mu synthesis results in unstable LTI controllers in some operating points of the boiler. A recent gain-scheduling approach allowing for unstable fixed LTI controllers is applied, since gain-scheduling which interpolates between unstable controllers is not allowed using traditional schemes. The results show that a considerable optimization of the conventional controlled system is obtainable. Also, the gain-scheduled optimizing controller is seen to have a superior performance compared to the fixed LTI controllers.

  16. Solving the Selective Multi-Category Parallel-Servicing Problem

    DEFF Research Database (Denmark)

    Range, Troels Martin; Lusby, Richard Martin; Larsen, Jesper

    In this paper we present a new scheduling problem and describe a shortest path based heuristic as well as a dynamic programming based exact optimization algorithm to solve it. The Selective Multi-Category Parallel-Servicing Problem (SMCPSP) arises when a set of jobs has to be scheduled on a server (machine) with limited capacity. Each job requests service in a prespecified time window and belongs to a certain category. Jobs may be serviced partially, incurring a penalty; however, only jobs of the same category can be processed simultaneously. One must identify the best subset of jobs to process...

  17. Solving the selective multi-category parallel-servicing problem

    DEFF Research Database (Denmark)

    Range, Troels Martin; Lusby, Richard Martin; Larsen, Jesper

    2015-01-01

    In this paper, we present a new scheduling problem and describe a shortest path-based heuristic as well as a dynamic programming-based exact optimization algorithm to solve it. The selective multi-category parallel-servicing problem arises when a set of jobs has to be scheduled on a server (machine) with limited capacity. Each job requests service in a prespecified time window and belongs to a certain category. Jobs may be serviced partially, incurring a penalty; however, only jobs of the same category can be processed simultaneously. One must identify the best subset of jobs to process in each time...

  18. A dataflow analysis tool for parallel processing of algorithms

    Science.gov (United States)

    Jones, Robert L., III

    1993-01-01

    A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on a set of identical parallel processors. Typical applications include signal processing and control law problems. Graph analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool is shown to facilitate the application of the design process to a given problem.
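
    Two of the bounds such graph analysis produces are easy to compute directly. For a dataflow DAG with node compute times, the critical path lower-bounds the schedule length on any number of processors, while total work divided by processor count bounds throughput. A generic sketch (not the tool described above):

        from functools import lru_cache

        # Toy dataflow graph: node -> (compute time, successors).
        graph = {
            "a": (2.0, ["b", "c"]),
            "b": (3.0, ["d"]),
            "c": (1.0, ["d"]),
            "d": (2.0, []),
        }

        @lru_cache(maxsize=None)
        def longest_path(node):
            t, succs = graph[node]
            return t + max((longest_path(s) for s in succs), default=0.0)

        critical_path = max(longest_path(n) for n in graph)   # 7.0 here
        total_work = sum(t for t, _ in graph.values())        # 8.0 here
        for p in (1, 2, 4):
            print(f"{p} processors: makespan >= {max(critical_path, total_work / p)}")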

  19. Scheduling with target start times

    NARCIS (Netherlands)

    Hoogeveen, J.A.; Velde, van de S.L.; Klein Haneveld, W.K.; Vrieze, O.J.; Kallenberg, L.C.M.

    1997-01-01

    We address the single-machine problem of scheduling n independent jobs subject to target start times. Target start times are essentially release times that may be violated at a certain cost. The goal is to minimize an objective function that is composed of total completion time and maximum

  20. A parallel row-based algorithm for standard cell placement with integrated error control

    Science.gov (United States)

    Sargent, Jeff S.; Banerjee, Prith

    1989-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to control error in parallel cell-placement algorithms: (1) Heuristic Cell-Coloring; (2) Adaptive Sequence Length Control.

  1. Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine

    Science.gov (United States)

    Lee, C. S. G.; Lin, C. T.

    1989-01-01

    The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme for mapping these algorithms to a reconfigurable parallel architecture is presented. Based on the characteristics, including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirement, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well-suited to be implemented on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual-network SIMD machine with internal direct feedback is introduced, and a systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.

  2. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy … flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target …

  3. Comparing and Optimising Parallel Haskell Implementations for Multicore Machines

    DEFF Research Database (Denmark)

    Berthold, Jost; Marlow, Simon; Hammond, Kevin

    2009-01-01

    In this paper, we investigate the differences and tradeoffs imposed by two parallel Haskell dialects running on multicore machines. GpH and Eden are both constructed using the highly-optimising sequential GHC compiler, and share thread scheduling, and other elements, from a common code base. The ...

  4. Logical inference techniques for loop parallelization

    KAUST Repository

    Oancea, Cosmin E.; Rauchwerger, Lawrence

    2012-01-01

    This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = Ø, where S is a set expression representing array indexes. Using a language instead of an array-abstraction representation for S results in a smaller number of conservative approximations but exhibits a potentially-high runtime cost. To alleviate this cost we introduce a language translation F from the USR set-expression language to an equally rich language of predicates (F(S) ⇒ S = Ø). Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates (F(S)) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from PERFECTCLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers. Copyright © 2012 ACM.

  6. OS and Runtime Support for Efficiently Managing Cores in Parallel Applications

    OpenAIRE

    Klues, Kevin Alan

    2015-01-01

    Parallel applications can benefit from the ability to explicitly control their thread scheduling policies in user-space. However, modern operating systems lack the interfaces necessary to make this type of “user-level” scheduling efficient. The key component missing is the ability for applications to gain direct access to cores and keep control of those cores even when making I/O operations that traditionally block in the kernel. A number of former systems provided limited support for these c...

  7. The FORCE: A highly portable parallel programming language

    Science.gov (United States)

    Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger

    1989-01-01

    Here, it is explained why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low level machine dependencies and to build machine-independent high level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared memory multiprocessor executing them.

  8. The FORCE - A highly portable parallel programming language

    Science.gov (United States)

    Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger

    1989-01-01

    This paper explains why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.

  9. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    Science.gov (United States)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  10. Responsive versus scheduled feeding in preterm infants (Review)

    OpenAIRE

    Watson, Julie; McGuire, William

    2015-01-01

    Scheduled feeding of prescribed enteral volumes remains standard practice for preterm infants. However, feeding preterm infants in response to their feeding and satiation cues (responsive, cue-based, or infant-led feeding) rather than at scheduled intervals might enhance parent experience and satisfaction, help in the establishment of independent oral feeding, increase nutrient intake and growth rates, and allow earlier hospital discharge. Objectives: To assess the effect of feeding pr...

  11. Optimal task mapping in safety-critical real-time parallel systems; Placement optimal de taches pour les systemes paralleles temps-reel critiques

    Energy Technology Data Exchange (ETDEWEB)

    Aussagues, Ch

    1998-12-11

    This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance systems for command and control that can be found in the nuclear domain or, more generally, in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution is mainly in the following three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new operator of synchronized product of state-machine task-graphs; the validation of the approach by its implementation and evaluation. The work addresses particularly the main problem of optimal task mapping on a parallel architecture, such that the temporal constraints are globally guaranteed, i.e. the timeliness property is valid. The results also incorporate optimality criteria for the sizing and correct dimensioning of a parallel system, for instance in the number of processing elements. These criteria are connected with operational constraints of the application domain. Our approach is based on the off-line analysis of the feasibility of the deadline-driven dynamic scheduling that is used to schedule tasks inside one processor. This leads us to define the synchronized product, from which a system of linear constraints is automatically generated that allows calculating the maximum load of a group of tasks and then verifying their timeliness constraints. The communications, their timeliness verification and their incorporation into the mapping problem are the second main contribution of this thesis. Finally, the global solving technique dealing with both task and communication aspects has been implemented and evaluated in the framework of the OASIS project in the LETI research center at the CEA/Saclay. (author) 96 refs.

  12. Tiling as a Durable Abstraction for Parallelism and Data Locality

    Energy Technology Data Exchange (ETDEWEB)

    Unat, Didem [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chan, Cy P. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Zhang, Weiqun [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shalf, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-11-18

    Tiling is a useful loop transformation for expressing parallelism and data locality. Automated tiling transformations that preserve data-locality are increasingly important due to hardware trends towards massive parallelism and the increasing costs of data movement relative to the cost of computing. We propose TiDA as a durable tiling abstraction that centralizes parameterized tiling information within array data types with minimal changes to the source code. The data layout information can be used by the compiler and runtime to automatically manage parallelism, optimize data locality, and schedule tasks intelligently. In this study, we present the design features and early interface of TiDA along with some preliminary results.
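
    Loop tiling itself is easy to demonstrate. The sketch below (a generic illustration, not the TiDA API) restructures a doubly nested traversal into tiles so that each tile's working set stays cache-resident and tiles become independent units of work a runtime could schedule in parallel:

        import numpy as np

        def tiled_apply(a, tile=64):
            """Visit a 2-D array tile by tile; each tile touches a contiguous
            block of memory, improving locality, and tiles are independent
            units of work."""
            n, m = a.shape
            for i0 in range(0, n, tile):
                for j0 in range(0, m, tile):
                    block = a[i0:i0 + tile, j0:j0 + tile]
                    block *= 2.0     # stand-in for the real stencil/kernel

        a = np.ones((256, 256))
        tiled_apply(a)
        print(a[0, 0])   # 2.0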

  13. Development and evaluation of a scheduling algorithm for parallel hardware tests at CERN

    CERN Document Server

    Galetzka, Michael

    This thesis aims at describing the problem of scheduling, evaluating different scheduling algorithms and comparing them with each other as well as with the current prototype solution. The implementation of the final solution will be delineated, as will the design considerations that led to it. The CERN Large Hadron Collider (LHC) has to deal with unprecedented stored energy, both in its particle beams and its superconducting magnet circuits. This energy could result in major equipment damage and downtime if it is not properly extracted from the machine. Before commissioning the machine with the particle beam, several thousands of tests have to be executed, analyzed and tracked to assess the proper functioning of the equipment and protection systems. These tests access the accelerator's equipment in order to verify the correct behavior of all systems, such as magnets, power converters and interlock controllers. A test could, for example, ramp the magnet to a certain energy level and then provoke an emergency...

  14. Power Minimization for Parallel Real-Time Systems with Malleable Jobs and Homogeneous Frequencies

    OpenAIRE

    Paolillo, Antonio; Goossens, Joël; Hettiarachchi, Pradeep M.; Fisher, Nathan

    2014-01-01

    In this work, we investigate the potential benefit of parallelization for both meeting real-time constraints and minimizing power consumption. We consider malleable Gang scheduling of implicit-deadline sporadic tasks upon multiprocessors. By extending schedulability criteria for malleable jobs to DVFS-enabled multiprocessor platforms, we are able to derive an offline polynomial-time optimal processor/frequency-selection algorithm. Simulations of our algorithm on randomly generated task system...

  15. ADVANCED SCHEDULER FOR COOPERATIVE EXECUTION OF THREADS ON MULTI-CORE SYSTEM

    Directory of Open Access Journals (Sweden)

    O. N. Karasik

    2017-01-01

    Three architectures of the cooperative thread scheduler in a multithreaded application executed on a multi-core system are considered. Architecture A0 is based on the synchronization and scheduling facilities provided by the operating system. Architecture A1 introduces a new synchronization primitive and a single queue of blocked threads in the scheduler, which reduces the interaction between the threads and the operating system and significantly speeds up the processes of blocking and unblocking threads. Architecture A2 replaces the single queue of blocked threads with dedicated queues, one for each synchronization primitive, extends the number of internal states of the primitive, reduces the interdependence of the scheduling threads, and further speeds up blocking and unblocking. All scheduler architectures are implemented on Windows operating systems and based on User Mode Scheduling. Important experimental results are obtained for multithreaded applications that implement two blocked parallel algorithms for solving linear algebraic equation systems by Gaussian elimination. The algorithms differ in the way data are distributed among threads and in their thread synchronization models. The number of threads varied from 32 to 7936. Architecture A1 shows an acceleration of up to 8.65% and architecture A2 an acceleration of up to 11.98% compared to architecture A0 for the blocked parallel algorithms computing the triangular form and performing the back substitution. On the back-substitution stage of the algorithms, architecture A1 gives an acceleration of up to 125%, and architecture A2 an acceleration of up to 413%, compared to architecture A0. The experiments clearly show that the proposed architectures A1 and A2 outperform A0 depending on the number of thread blocking and unblocking operations that happen during execution.

  16. Efficient multitasking: parallel versus serial processing of multiple tasks.

    Science.gov (United States)

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling.

  17. A review of scheduling problem and resolution methods in flexible flow shop

    Directory of Open Access Journals (Sweden)

    Tian-Soon Lee

    2019-01-01

    The flexible flow shop (FFS) is defined as a multi-stage flow shop with multiple parallel machines. The FFS scheduling problem is a complex combinatorial problem that has been intensively studied in many real-world industries. This review paper gives a comprehensive review of the FFS scheduling problem and guides the reader through the different environmental assumptions, system constraints and objective functions relevant to future research work. The published papers are classified into two categories. First are the FFS system characteristics and constraints, including the problem differences and limitations defined by different studies. Second, the scheduling performance measures are elaborated and categorized into time-related, job-related and multi-objective criteria. In addition, the resolution approaches that have been used to solve FFS scheduling problems are discussed. This paper gives a comprehensive guide for the reader with respect to future research work on the FFS scheduling problem.

  18. Scheduling the scheduling task : a time management perspective on scheduling

    NARCIS (Netherlands)

    Larco Martinelli, J.A.; Wiers, V.C.S.; Fransoo, J.C.

    2013-01-01

    Time is the most critical resource at the disposal of schedulers. Hence, an adequate management of time from the schedulers may impact positively on the scheduler’s productivity and responsiveness to uncertain scheduling environments. This paper presents a field study of how schedulers make use of

  19. Quality assurance and independent dosimetry for an intraoperative x-ray device

    International Nuclear Information System (INIS)

    Eaton, D. J.

    2012-01-01

    Purpose: Quality assurance is an essential component of accurate and safe radiotherapy delivery, and should include measurements which are independent of manufacturer-provided calibration. However, the physical and dosimetric properties of the INTRABEAM compact mobile 50 kV x-ray source are different from conventional kilovoltage therapy units and few reports describe methods for independent checks, frequencies, or tolerances for quality assurance tests. Methods: Based on the available evidence and local experience, methods are described for determination of the key dosimetric parameters: beam quality, output, isotropy, and depth doses. Internal system checks are also described, along with measurements of long-term stability. Results: A small volume parallel plate ionization chamber in a liquid water tank is the gold standard for measurements with this unit, but solid water-equivalent materials, thermoluminescent dosimeters and radiochromic film can all be used as practical alternatives with an accuracy of 5%–10%. The main cause of measurement uncertainty is positioning of the detector in the steep dose gradient, but energy dependence should also be considered. Conclusions: A quality assurance schedule with suggested tolerances is proposed, which includes both internal tests, before each treatment and on a monthly basis, and independent tests every year or after servicing or recalibration.

  20. Integrated network design and scheduling problems :

    Energy Technology Data Exchange (ETDEWEB)

    Nurre, Sarah G.; Carlson, Jeffrey J.

    2014-01-01

    We consider the class of integrated network design and scheduling (INDS) problems. These problems focus on selecting and scheduling operations that will change the characteristics of a network, while being specifically concerned with the performance of the network over time. Motivating applications of INDS problems include infrastructure restoration after extreme events and building humanitarian distribution supply chains. While similar models have been proposed, no one has performed an extensive review of INDS problems in terms of their complexity, network and scheduling characteristics, information, and solution methods. We examine INDS problems under a parallel identical machine scheduling environment where the performance of the network is evaluated by solving classic network optimization problems. We classify all considered INDS problems as NP-Hard and propose a novel heuristic dispatching rule algorithm that selects and schedules sets of arcs based on their interactions in the network. We present computational analysis based on realistic data sets representing the infrastructures of coastal New Hanover County, North Carolina, lower Manhattan, New York, and a realistic artificial community, CLARC County. These tests demonstrate the importance of a dispatching rule to arrive at near-optimal solutions during real-time decision-making activities. We extend INDS problems to incorporate release dates, which represent the earliest an operation can be performed, and flexible release dates through the introduction of specialized machine(s) that can perform work to move the release date earlier in time. An online optimization setting is explored where the release date of a component is not known.

  1. Machine scheduling to minimize weighted completion times the use of the α-point

    CERN Document Server

    Gusmeroli, Nicoló

    2018-01-01

    This work reviews the most important results regarding the use of the α-point in Scheduling Theory. It provides a number of different LP-relaxations for scheduling problems and seeks to explain their polyhedral consequences. It also explains the concept of the α-point and how the conversion algorithm works, pointing out the relations to the sum of the weighted completion times. Lastly, the book explores the latest techniques used for many scheduling problems with different constraints, such as release dates, precedences, and parallel machines. This reference book is intended for advanced undergraduate and postgraduate students who are interested in scheduling theory. It is also inspiring for researchers wanting to learn about sophisticated techniques and open problems of the field.
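
    As a flavor of the technique, the sketch below (illustrative, with made-up data) performs α-point scheduling for one machine with release dates: it simulates a preemptive shortest-remaining-processing-time schedule, records each job's α-point, i.e. the first time at which an α fraction of its processing has completed, and then runs the jobs nonpreemptively in α-point order.

        def srpt_alpha_points(jobs, alpha=0.5):
            """jobs: list of (release, ptime). Simulate preemptive SRPT and
            return each job's alpha-point: the first time at which
            alpha * ptime units of the job have been processed."""
            n = len(jobs)
            remaining = [p for _, p in jobs]
            done = [0.0] * n
            apoint = [None] * n
            t = 0.0
            while any(r > 1e-9 for r in remaining):
                avail = [j for j in range(n)
                         if jobs[j][0] <= t + 1e-9 and remaining[j] > 1e-9]
                if not avail:                   # idle until the next release
                    t = min(jobs[j][0] for j in range(n) if remaining[j] > 1e-9)
                    continue
                j = min(avail, key=lambda k: remaining[k])
                next_rel = min((r for r, _ in jobs if r > t + 1e-9),
                               default=float("inf"))
                run = min(remaining[j], next_rel - t)  # run until done or preempted
                need = alpha * jobs[j][1] - done[j]
                if apoint[j] is None and 0 < need <= run:
                    apoint[j] = t + need
                done[j] += run
                remaining[j] -= run
                t += run
            return apoint

        jobs = [(0, 5), (1, 2), (2, 4)]    # (release date, processing time)
        ap = srpt_alpha_points(jobs)
        # Nonpreemptive schedule: run jobs in alpha-point order, respecting releases.
        t = 0.0
        for j in sorted(range(len(jobs)), key=lambda k: ap[k]):
            t = max(t, jobs[j][0]) + jobs[j][1]
            print(f"job {j} completes at {t}")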

  2. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  3. Two NP-hardness results for preemptive minsum scheduling of unrelated parallel machines

    NARCIS (Netherlands)

    Sitters, R.A.; Aardal, K.; Gerards, B.

    2001-01-01

    We show that the problems of minimizing total completion time and of minimizing the number of late jobs on unrelated parallel machines, when preemption is allowed, are both NP-hard in the strong sense. The former result settles a long-standing open question.

  4. Influences on cocaine tolerance assessed under a multiple conjunctive schedule of reinforcement.

    Science.gov (United States)

    Yoon, Jin Ho; Branch, Marc N

    2009-11-01

    Under multiple schedules of reinforcement, previous research has generally observed tolerance to the rate-decreasing effects of cocaine that has been dependent on schedule-parameter size in the context of fixed-ratio (FR) schedules, but not under the context of fixed-interval (FI) schedules of reinforcement. The current experiment examined the effects of cocaine on key-pecking responses of White Carneau pigeons maintained under a three-component multiple conjunctive FI (10 s, 30 s, & 120 s) FR (5 responses) schedule of food presentation. Dose-effect curves representing the effects of presession cocaine on responding were assessed in the context of (1) acute administration of cocaine (2) chronic administration of cocaine and (3) daily administration of saline. Chronic administration of cocaine generally resulted in tolerance to the response-rate decreasing effects of cocaine, and that tolerance was generally independent of relative FI value, as measured by changes in ED50 values. Daily administration of saline decreased ED50 values to those observed when cocaine was administered acutely. The results show that adding a FR requirement to FI schedules is not sufficient to produce schedule-parameter-specific tolerance. Tolerance to cocaine was generally independent of FI-parameter under the present conjunctive schedules, indicating that a ratio requirement, per se, is not sufficient for tolerance to be dependent on FI parameter.

  5. A qualitative single case study of parallel processes

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard

    2007-01-01

    Parallel process in psychotherapy and supervision is a phenomenon manifest in relationships and interactions, that originates in one setting and is reflected in another. This article presents an explorative single case study of parallel processes based on qualitative analyses of two successive...... randomly chosen psychotherapy sessions with a schizophrenic patient and the supervision session given in between. The author's analysis is verified by an independent examiner's analysis. Parallel processes are identified and described. Reflections on the dynamics of parallel processes and supervisory...

  6. PARTICLE SWARM OPTIMIZATION OF TASK SCHEDULING IN CLOUD COMPUTING

    OpenAIRE

    Payal Jaglan*, Chander Diwakar

    2016-01-01

    Resource provisioning and pricing modeling in cloud computing make it an inevitable technology on both the developer and consumer ends. Easy accessibility of software and freedom of hardware configuration increase its demand in the IT industry, as does its ability to provide a user-friendly environment, software independence, quality, a pricing index and easy accessibility of infrastructure via the Internet. Task scheduling plays an important role in cloud computing systems. Task scheduling in cloud computing mea...
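
    A minimal sketch of how particle swarm optimization can drive task scheduling (a generic illustration with invented parameters, not the algorithm from this article): each particle encodes a candidate task-to-VM mapping via continuous positions rounded to machine indices, and the swarm minimizes makespan.

        import random

        def makespan(mapping, ptimes, m):
            loads = [0.0] * m
            for task, vm in enumerate(mapping):
                loads[vm] += ptimes[task]
            return max(loads)

        def pso_schedule(ptimes, m, particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            n = len(ptimes)
            decode = lambda x: [min(m - 1, max(0, round(v))) for v in x]
            pos = [[random.uniform(0, m - 1) for _ in range(n)] for _ in range(particles)]
            vel = [[0.0] * n for _ in range(particles)]
            pbest = [p[:] for p in pos]
            pcost = [makespan(decode(p), ptimes, m) for p in pos]
            g = pcost.index(min(pcost))
            gbest, gcost = pbest[g][:], pcost[g]
            for _ in range(iters):
                for i in range(particles):
                    for d in range(n):
                        r1, r2 = random.random(), random.random()
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                                     + c2 * r2 * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
                    cost = makespan(decode(pos[i]), ptimes, m)
                    if cost < pcost[i]:          # update personal/global bests
                        pbest[i], pcost[i] = pos[i][:], cost
                        if cost < gcost:
                            gbest, gcost = pos[i][:], cost
            return decode(gbest), gcost

        print(pso_schedule([random.uniform(1, 9) for _ in range(15)], m=4))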

  7. Efficient bounding schemes for the two-center hybrid flow shop scheduling problem with removal times.

    Science.gov (United States)

    Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly

    2014-01-01

    We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The job removal time is the required duration to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-Hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases. The first one is a constructive phase in which an initial feasible solution is provided, while the second phase is an improvement one. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures.

  8. Manufacturing scheduling systems an integrated view on models, methods and tools

    CERN Document Server

    Framinan, Jose M; Ruiz García, Rubén

    2014-01-01

    The book is devoted to the problem of manufacturing scheduling, which is the efficient allocation of jobs (orders) over machines (resources) in a manufacturing facility. It offers a comprehensive and integrated perspective on the different aspects required to design and implement systems to efficiently and effectively support manufacturing scheduling decisions. Obtaining economic and reliable schedules constitutes the core of excellence in customer service and efficiency in manufacturing operations. Therefore, scheduling forms an area of vital importance for competition in manufacturing companies. However, only a fraction of scheduling research has been translated into practice, due to several reasons. First, the inherent complexity of scheduling has led to an excessively fragmented field in which different sub problems and issues are treated in an independent manner as goals themselves, therefore lacking a unifying view of the scheduling problem. Furthermore, mathematical brilliance and elegance has sometime...

  9. 'Re-zoning' proximal development in a parallel e-learning course ...

    African Journals Online (AJOL)

    'Re-zoning' proximal development in a parallel e-learning course. ... This twinning course was introduced to expand learning opportunities in what we ... face-to-face curriculum with less scheduled teaching time than previously.

  10. Single-machine scheduling of proportionally deteriorating jobs by two agents

    OpenAIRE

    S Gawiejnowicz; W-C Lee; C-L Lin; C-C Wu

    2011-01-01

    We consider a problem of scheduling a set of independent jobs by two agents on a single machine. Every agent has its own subset of jobs to be scheduled and uses its own optimality criterion. The processing time of each job proportionally deteriorates with respect to the starting time of the job. The problem is to find a schedule that minimizes the total tardiness of the first agent, provided that no tardy job is allowed for the second agent. We prove basic properties of the problem and give a...

  11. Status and schedule of J-PARC 50 GeV synchrotron

    International Nuclear Information System (INIS)

    Oogoe, Takao; Yoshioka, Masakazu; Kobayashi, Hitoshi; Takeuchi, Yasunori; Shirakata, Masashi; Shirakabe, Yoshihisa; Kuniyasu, Yuu; Oki, Hiroshi; Takiyama, Youichi

    2005-01-01

    The Japan Proton Accelerator Research Complex (J-PARC) is a research complex based on three high-intensity proton accelerators: a linac, a 3 GeV synchrotron (RCS), and a 50 GeV synchrotron (MR). The construction of the MR started in 2002, and its beam commissioning is scheduled for January 2008. The accelerator tunnel of the J-PARC 50 GeV synchrotron is still under construction and will be completed at the end of 2006. Installation of accelerator components is scheduled to start in July 2005, in parallel with civil and utility construction. This document describes how the accelerator components are installed in the tunnel and the civil engineering of the tunnel. (author)

  12. Schedules of electric shock presentation in the behavioral control of imprinted ducklings.

    Science.gov (United States)

    Barrett, J E

    1972-09-01

    The behavioral effects of various schedules of electric shock presentation were investigated during and after the imprinting of Peking ducklings to moving stimuli. The behavior of following a moving imprinted stimulus was differentially controlled by a multiple schedule of punishment and avoidance that respectively suppressed and maintained following behavior. Pole-pecking, reinforced by presentations of the imprinted stimulus, was suppressed by response-produced shock (punishment); various schedules of response-independent shock and delayed punishment had an overall minimal effect. The delivery of response-independent shock in the presence of one of two stimuli, both during and after imprinting, resulted in a marked reduction in choice of the stimulus paired with shock. The experiments provide no support for a differentiation of imprinting from learning on the basis of the behavioral effects of aversive stimuli. Instead, as is the case with other organisms, the schedule under which shock is delivered to imprinted ducklings appears to be an important determinant of the temporal patterning of subsequent behavior.

  13. Signaling added response-independent reinforcement to assess Pavlovian processes in resistance to change and relapse.

    Science.gov (United States)

    Podlesnik, Christopher A; Fleet, James D

    2014-09-01

    Behavioral momentum theory asserts Pavlovian stimulus-reinforcer relations govern the persistence of operant behavior. Specifically, resistance to conditions of disruption (e.g., extinction, satiation) reflects the relation between discriminative stimuli and the prevailing reinforcement conditions. The present study assessed whether Pavlovian stimulus-reinforcer relations govern resistance to disruption in pigeons by arranging both response-dependent and -independent food reinforcers in two components of a multiple schedule. In one component, discrete-stimulus changes preceded response-independent reinforcers, paralleling methods that reduce Pavlovian conditioned responding to contextual stimuli. Compared to the control component with no added stimuli preceding response-independent reinforcement, response rates increased as discrete-stimulus duration increased (0, 5, 10, and 15 s) across conditions. Although resistance to extinction decreased as stimulus duration increased in the component with the added discrete stimulus, further tests revealed no effect of discrete stimuli, including other disrupters (presession food, intercomponent food, modified extinction) and reinstatement designed to control for generalization decrement. These findings call into question a straightforward conception that the stimulus-reinforcer relations governing resistance to disruption reflect the same processes as Pavlovian conditioning, as asserted by behavioral momentum theory. © Society for the Experimental Analysis of Behavior.

  14. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with the equational programming language (EPL). Our approach is based on program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  15. Preemptive scheduling in a two-stage multiprocessor flow shop is NP-hard

    NARCIS (Netherlands)

    Hoogeveen, J.A.; Lenstra, J.K.; Veltman, B.

    1996-01-01

    In 1954, Johnson gave an efficient algorithm for minimizing makespan in a two-machine flow shop; there is no advantage to preemption in this case. McNaughton's wrap-around rule of 1959 finds a shortest preemptive schedule on identical parallel machines in linear time. A similarly efficient algorithm
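
    For readers unfamiliar with McNaughton's wrap-around rule cited above, a minimal Python sketch of it (ours, not from the record) follows: the optimal preemptive makespan on m identical machines is C = max(max_j p_j, (1/m) * sum_j p_j), and jobs are packed onto machines one after another, splitting a job across two machines whenever the current machine fills up to C.

        from math import isclose

        def mcnaughton(p, m):
            """McNaughton's wrap-around rule: optimal preemptive schedule of jobs
            with processing times p on m identical parallel machines. Returns
            (makespan, schedule), where schedule[i] lists (job, start, end)."""
            C = max(max(p), sum(p) / m)          # optimal preemptive makespan
            schedule = [[] for _ in range(m)]
            machine, t = 0, 0.0
            for job, remaining in enumerate(p):
                while remaining > 1e-12:
                    run = min(remaining, C - t)  # fill current machine up to C
                    schedule[machine].append((job, t, t + run))
                    remaining -= run
                    t += run
                    if isclose(t, C):            # machine is full: wrap around
                        machine, t = machine + 1, 0.0
            return C, schedule

        # 5 jobs on 2 machines; makespan = max(4, 14/2) = 7
        print(mcnaughton([4, 3, 2, 3, 2], 2))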

  16. Optimal task mapping in safety-critical real-time parallel systems

    International Nuclear Information System (INIS)

    Aussagues, Ch.

    1998-01-01

    This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance command-and-control systems, found in the nuclear domain and more generally in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution consists mainly of three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new operator, the synchronized product of state-machine task graphs; and the validation of the approach through its implementation and evaluation. The work particularly addresses the main problem of optimally mapping tasks onto a parallel architecture such that the temporal constraints are globally guaranteed, i.e. the timeliness property holds. The results also incorporate optimality criteria for correctly sizing a parallel system, for instance in the number of processing elements; these criteria are connected with operational constraints of the application domain. Our approach is based on the off-line feasibility analysis of the deadline-driven dynamic scheduling used to schedule tasks within one processor. From the synchronized product, a system of linear constraints is automatically generated that allows the maximum load of a group of tasks to be calculated and their timeliness constraints to be verified. The treatment of communications, their timeliness verification, and their incorporation into the mapping problem is the second main contribution of this thesis. Finally, the global solving technique dealing with both task and communication aspects has been implemented and evaluated in the framework of the OASIS project at the LETI research center at CEA/Saclay. (author)
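
    The off-line feasibility analysis of deadline-driven scheduling within one processor can be illustrated, in much simplified form, by the classical Liu-Layland utilization test for EDF combined with a first-fit task mapping. This is a hedged sketch of the general idea only, not the thesis's synchronized-product method:

        def edf_feasible(tasks):
            """Liu-Layland test: independent periodic tasks (execution time C,
            period T, deadline == period) are EDF-schedulable on one processor
            iff total utilization does not exceed 1."""
            return sum(c / t for c, t in tasks) <= 1.0

        def first_fit_mapping(tasks, m):
            """Greedy first-fit mapping of tasks onto m processors so that each
            processor's task group stays EDF-feasible; None if it fails."""
            groups = [[] for _ in range(m)]
            for task in sorted(tasks, key=lambda ct: ct[0] / ct[1], reverse=True):
                for g in groups:
                    if edf_feasible(g + [task]):
                        g.append(task)
                        break
                else:
                    return None  # no processor can host this task feasibly
            return groups

        # Three (C, T) tasks mapped onto 2 processors
        print(first_fit_mapping([(2, 5), (3, 10), (4, 8)], 2))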

  17. NUMA-Aware Thread Scheduling for Big Data Transfers over Terabits Network Infrastructure

    Directory of Open Access Journals (Sweden)

    Taeuk Kim

    2018-01-01

    Full Text Available The ever-growing trend of big data has led scientists to share and transfer simulation and analytical data across geodistributed research and computing facilities. However, the existing data transfer frameworks used for data sharing lack the capability to adopt the attributes of the underlying parallel file systems (PFS). LADS (Layout-Aware Data Scheduling) is an end-to-end data transfer tool optimized for terabit networks using layout-aware data scheduling via the PFS. However, it does not consider the NUMA (Nonuniform Memory Access) architecture. In this paper, we propose NUMA-aware thread and resource scheduling for optimized data transfer in terabit networks. First, we propose distributed RMA buffers to reduce memory controller contention in CPU sockets, and we then schedule the threads based on the CPU socket and the NUMA nodes inside each socket to reduce memory access latency. We design and implement the proposed resource and thread scheduling in the existing LADS framework. Experimental results showed improvements of 21.7% to 44% from the memory-level optimizations in the LADS framework, compared to the baseline without any optimization.
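
    A minimal sketch of the kind of NUMA-aware placement described, assuming a Linux host (os.sched_setaffinity is Linux-only); the NUMA_NODES map below is hypothetical and would normally be read from /sys/devices/system/node/node*/cpulist:

        import os
        from multiprocessing import Process

        # Hypothetical topology: which CPUs belong to which NUMA node.
        NUMA_NODES = {0: {0, 1, 2, 3}, 1: {4, 5, 6, 7}}

        def transfer_worker(node, worker_id):
            """Pin this worker to one NUMA node's CPUs so that its RMA buffer
            is allocated from local memory (Linux first-touch policy)."""
            os.sched_setaffinity(0, NUMA_NODES[node])
            buf = bytearray(64 * 1024 * 1024)  # touched => node-local pages
            # ... fill buf from the PFS and push it over the network ...
            print(f"worker {worker_id} on node {node}: {os.sched_getaffinity(0)}")

        if __name__ == "__main__":
            workers = [Process(target=transfer_worker, args=(n % 2, n))
                       for n in range(4)]
            for w in workers:
                w.start()
            for w in workers:
                w.join()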

  18. Multiresource allocation and scheduling for periodic soft real-time applications

    Science.gov (United States)

    Gopalan, Kartik; Chiueh, Tzi-cker

    2001-12-01

    Real-time applications that utilize multiple system resources, such as CPU, disks, and network links, require coordinated scheduling of these resources in order to meet their end-to-end performance requirements. Most state-of-the-art operating systems support independent resource allocation and deadline-driven scheduling but lack coordination among multiple heterogeneous resources. This paper describes the design and implementation of an Integrated Real-time Resource Scheduler (IRS) that performs coordinated allocation and scheduling of multiple heterogeneous resources on the same machine for periodic soft real-time applications. The principal feature of IRS is a heuristic multi-resource allocation algorithm that reserves multiple resources for real-time applications in a manner that can maximize the number of applications admitted into the system in the long run. At run-time, a global scheduler dispatches the tasks of the soft real-time application to individual resource schedulers according to the precedence constraints between tasks. The individual resource schedulers, which could be any deadline-based schedulers, can make scheduling decisions locally and yet collectively satisfy a real-time application's performance requirements. The tightness of overall timing guarantees is ultimately determined by the properties of individual resource schedulers. However, IRS maximizes overall system resource utilization efficiency by coordinating deadline assignment across multiple tasks in a soft real-time application.

  19. Scheduling reinforcement about once a day.

    Science.gov (United States)

    Eckerman, D A

    1999-04-01

    A pigeon earned its daily food by pecking a key according to reinforcement schedules that produced food about once per day. Fixed-interval (FI), fixed-time (FT), and various complex schedules were arranged to demonstrate the degree to which a scalloped pattern of responding remained. Pausing continued until about an hour before the reinforcer could be earned for FIs of 12, 24, and 48 h. Pausing was not as long for FIs of 18, 19, and 23 h. Pausing of about 24 h was seen for FI 36 h. FT 24 h produced continued responding but at a diminished frequency. The pattern of responding was strongly controlled by the schedule of reinforcement and seemed relatively independent of the cycle of human activity in the surrounding laboratory. Effects of added ratio contingencies and of signaling the availability of reinforcement in FT were also examined. Signaled FTs of 5 min to 3 h produced more responding during the signal (autoshaping) than did FTs of 19 or 24 h.

  20. A genetic algorithm-based job scheduling model for big data analytics.

    Science.gov (United States)

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes high energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
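
    As a rough illustration of the genetic-algorithm approach (not the authors' implementation, which additionally couples in the performance-estimation module), a chromosome can encode a job-to-machine assignment and fitness can be the resulting makespan:

        import random

        def makespan(assign, jobs, m):
            """Finish time of the busiest machine under a job->machine map."""
            load = [0.0] * m
            for job_time, machine in zip(jobs, assign):
                load[machine] += job_time
            return max(load)

        def ga_schedule(jobs, m, pop=40, gens=200, mut=0.1):
            """Toy GA: chromosome[j] = machine assigned to job j."""
            popn = [[random.randrange(m) for _ in jobs] for _ in range(pop)]
            for _ in range(gens):
                popn.sort(key=lambda c: makespan(c, jobs, m))
                survivors = popn[: pop // 2]               # elitist selection
                children = []
                while len(survivors) + len(children) < pop:
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, len(jobs))   # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < mut:              # point mutation
                        child[random.randrange(len(jobs))] = random.randrange(m)
                    children.append(child)
                popn = survivors + children
            best = min(popn, key=lambda c: makespan(c, jobs, m))
            return best, makespan(best, jobs, m)

        jobs = [random.uniform(1, 10) for _ in range(20)]
        print(ga_schedule(jobs, m=4))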

  1. Parallel implementation of the PHOENIX generalized stellar atmosphere program. II. Wavelength parallelization

    International Nuclear Information System (INIS)

    Baron, E.; Hauschildt, Peter H.

    1998-01-01

    We describe an important addition to the parallel implementation of our generalized nonlocal thermodynamic equilibrium (NLTE) stellar atmosphere and radiative transfer computer program PHOENIX. In a previous paper in this series we described data and task parallel algorithms we have developed for radiative transfer, spectral line opacity, and NLTE opacity and rate calculations. These algorithms divided the work spatially or by spectral lines, that is, distributing the radial zones, individual spectral lines, or characteristic rays among different processors and employ, in addition, task parallelism for logically independent functions (such as atomic and molecular line opacities). For finite, monotonic velocity fields, the radiative transfer equation is an initial value problem in wavelength, and hence each wavelength point depends upon the previous one. However, for sophisticated NLTE models of both static and moving atmospheres needed to accurately describe, e.g., novae and supernovae, the number of wavelength points is very large (200,000 - 300,000) and hence parallelization over wavelength can lead both to considerable speedup in calculation time and the ability to make use of the aggregate memory available on massively parallel supercomputers. Here, we describe an implementation of a pipelined design for the wavelength parallelization of PHOENIX, where the necessary data from the processor working on a previous wavelength point is sent to the processor working on the succeeding wavelength point as soon as it is known. Our implementation uses a MIMD design based on a relatively small number of standard message passing interface (MPI) library calls and is fully portable between serial and parallel computers. copyright 1998 The American Astronomical Society
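
    A toy sketch of such a wavelength pipeline using mpi4py (PHOENIX itself is Fortran with MPI; solve_point and the forwarded state are placeholders): each rank handles every size-th wavelength point and sends its result downstream as soon as it is known. Run with at least two ranks, e.g. mpiexec -n 4 python pipeline.py.

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        N_WAVELENGTHS = 16        # stand-in for the ~10^5 points in PHOENIX

        def solve_point(k, upstream_state):
            """Placeholder for the radiative-transfer solve at point k."""
            return upstream_state + 1   # state carried to the next point

        for k in range(rank, N_WAVELENGTHS, size):
            if k == 0:
                state = 0   # initial condition of the initial value problem
            else:           # block until the previous wavelength is done
                state = comm.recv(source=(rank - 1) % size, tag=k - 1)
            state = solve_point(k, state)
            if k + 1 < N_WAVELENGTHS:   # forward as soon as it is known
                comm.send(state, dest=(rank + 1) % size, tag=k)
            print(f"rank {rank} finished wavelength point {k}")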

  2. Distributed Hybrid Scheduling in Multi-Cloud Networks using Conflict Graphs

    KAUST Repository

    Douik, Ahmed

    2017-09-07

    Recent studies on cloud-radio access networks assume either signal-level or scheduling-level coordination. This paper considers a hybrid coordinated scheme as a means to benefit from both policies. Consider the downlink of a multi-cloud radio access network, where each cloud is connected to several base-stations (BSs) via high capacity links, and, therefore, allows for joint signal processing within the cloud transmission. Across the multiple clouds, however, only scheduling-level coordination is permitted, as low levels of backhaul communication are feasible. The frame structure of every BS is composed of various time/frequency blocks, called power-zones (PZs), which are maintained at a fixed power level. The paper addresses the problem of maximizing a network-wide utility by associating users to clouds and scheduling them to the PZs, under the practical constraints that each user is scheduled to a single cloud at most, but possibly to many BSs within the cloud, and can be served by one or more distinct PZs within the BSs’ frame. The paper solves the problem using graph theory techniques by constructing the conflict graph. The considered scheduling problem is, then, shown to be equivalent to a maximum-weight independent set problem in the constructed graph, which can be solved using efficient techniques. The paper then proposes solving the problem using both optimal and heuristic algorithms that can be implemented in a distributed fashion across the network. The proposed distributed algorithms rely on the well-chosen structure of the constructed conflict graph utilized to solve the maximum-weight independent set problem. Simulation results suggest that the proposed optimal and heuristic hybrid scheduling strategies provide appreciable gain as compared to the scheduling-level coordinated networks, with a negligible degradation to signal-level coordination.
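
    To make the reduction concrete, here is a simple greedy heuristic for maximum-weight independent set on a conflict graph (a stand-in for the paper's distributed algorithms; the vertices, weights, and conflict edges below are hypothetical user-to-cloud/PZ associations):

        def greedy_mwis(weights, conflicts):
            """Repeatedly pick the vertex with the best weight/(degree+1)
            ratio among those still allowed, then discard its neighbours."""
            alive = set(weights)
            chosen = []
            while alive:
                v = max(alive, key=lambda u: weights[u]
                        / (len(conflicts[u] & alive) + 1))
                chosen.append(v)
                alive -= conflicts[v] | {v}   # drop v and its conflicts
            return chosen

        # Each vertex is one candidate association; edges mark conflicts
        weights = {0: 5.0, 1: 4.0, 2: 3.0, 3: 6.0}
        conflicts = {0: {1, 2}, 1: {0, 3}, 2: {0}, 3: {1}}
        print(greedy_mwis(weights, conflicts))   # -> [3, 0]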

  3. A Simple Method for Dynamic Scheduling in a Heterogeneous Computing System

    OpenAIRE

    Žumer, Viljem; Brest, Janez

    2002-01-01

    A simple method for dynamic scheduling on a heterogeneous computing system is proposed in this paper. It was implemented to minimize the parallel program execution time. The proposed method decomposes the program workload into computationally homogeneous subtasks, which may be of different sizes depending on the current load of each machine in the heterogeneous computing system.
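
    A minimal self-scheduling sketch in the same spirit, with equal-sized subtasks handed out on demand so that faster machines automatically take more work (the paper instead sizes each subtask according to the current load of the machine receiving it):

        from multiprocessing import Pool

        def subtask(chunk):
            """One computationally homogeneous subtask (here: sum of squares)."""
            return sum(x * x for x in chunk)

        def dynamic_schedule(data, workers=4, chunk=1000):
            """Cut the workload into subtasks and hand them out on demand."""
            chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
            with Pool(workers) as pool:
                # imap_unordered gives a new chunk to whichever worker is free
                return sum(pool.imap_unordered(subtask, chunks))

        if __name__ == "__main__":
            print(dynamic_schedule(list(range(100_000))))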

  4. Provably optimal parallel transport sweeps on regular grids

    International Nuclear Information System (INIS)

    Adams, M. P.; Adams, M. L.; Hawkins, W. D.; Smith, T.; Rauchwerger, L.; Amato, N. M.; Bailey, T. S.; Falgout, R. D.

    2013-01-01

    We have found provably optimal algorithms for full-domain discrete-ordinate transport sweeps on regular grids in 3D Cartesian geometry. We describe these algorithms and sketch a proof that they always execute the full eight-octant sweep in the minimum possible number of stages for a given P_x × P_y × P_z partitioning. Computational results demonstrate that our optimal scheduling algorithms execute sweeps in the minimum possible stage count. Observed parallel efficiencies agree well with our performance model. An older version of our PDT transport code achieves almost 80% parallel efficiency on 131,072 cores, on a weak-scaling problem with only one energy group, 80 directions, and 4096 cells/core. A newer version is less efficient at present (we are still improving its implementation) but achieves almost 60% parallel efficiency on 393,216 cores. These results conclusively demonstrate that sweeps can perform with high efficiency on core counts approaching 10^6. (authors)

  5. Provably optimal parallel transport sweeps on regular grids

    Energy Technology Data Exchange (ETDEWEB)

    Adams, M. P.; Adams, M. L.; Hawkins, W. D. [Dept. of Nuclear Engineering, Texas A and M University, 3133 TAMU, College Station, TX 77843-3133 (United States); Smith, T.; Rauchwerger, L.; Amato, N. M. [Dept. of Computer Science and Engineering, Texas A and M University, 3133 TAMU, College Station, TX 77843-3133 (United States); Bailey, T. S.; Falgout, R. D. [Lawrence Livermore National Laboratory (United States)

    2013-07-01

    We have found provably optimal algorithms for full-domain discrete-ordinate transport sweeps on regular grids in 3D Cartesian geometry. We describe these algorithms and sketch a proof that they always execute the full eight-octant sweep in the minimum possible number of stages for a given P_x × P_y × P_z partitioning. Computational results demonstrate that our optimal scheduling algorithms execute sweeps in the minimum possible stage count. Observed parallel efficiencies agree well with our performance model. An older version of our PDT transport code achieves almost 80% parallel efficiency on 131,072 cores, on a weak-scaling problem with only one energy group, 80 directions, and 4096 cells/core. A newer version is less efficient at present (we are still improving its implementation) but achieves almost 60% parallel efficiency on 393,216 cores. These results conclusively demonstrate that sweeps can perform with high efficiency on core counts approaching 10^6. (authors)

  6. 'Forest governmentality': A genealogy of subject-making of forest-dependent 'scheduled tribes' in India

    NARCIS (Netherlands)

    Bose, P.; Arts, B.J.M.; Dijk, van H.

    2012-01-01

    This paper analyses the historical trajectories of both British colonial rule and independent India to categorise scheduled tribes and to appropriate and legalise forests in tribal areas. It builds upon Foucault's notion of governmentality to argue that the history of the scheduled tribes’

  7. Information Flow Scheduling in Concurrent Multi-Product Development Based on DSM

    Science.gov (United States)

    Sun, Qing-Chao; Huang, Wei-Qiang; Jiang, Ying-Jie; Sun, Wei

    2017-09-01

    Multi-product collaborative development is widely adopted in manufacturing enterprises, but present multi-project planning models do not take the technical/data interactions of multiple products into account. To decrease the influence of technical/data interactions on project progress, information flow scheduling models based on an extended DSM are presented. Firstly, information dependencies are divided into four types: series, parallel, coupling, and similar. Secondly, the different types of dependencies are expressed as DSM units, and the extended DSM model, described as a block matrix, is brought forward. Furthermore, an information flow scheduling method is proposed that involves four types of operations: the partitioning and clustering algorithms are modified from the DSM to ensure the progress of high-priority projects, while merging and converting are computations specific to the extended DSM. Finally, the information flow scheduling of the development of two machine tools is analyzed as an example; different project priorities correspond to different task sequences and total coordination costs. The proposed methodology provides detailed guidance for information flow scheduling in multi-product development, with particular attention to technical/data interactions.

  8. Concatenating algorithms for parallel numerical simulations coupling radiation hydrodynamics with neutron transport

    International Nuclear Information System (INIS)

    Mo Zeyao

    2004-11-01

    Multiphysics parallel numerical simulations are usually essential for studying complex physical phenomena in which several physics are tightly coupled. How to concatenate those coupled physics is very important for fully scalable parallel simulation. Meanwhile, three objectives should be balanced: efficient data transfer among simulations, efficient parallel execution, and simultaneous development of the simulation codes. Two concatenating algorithms for multiphysics parallel numerical simulations coupling radiation hydrodynamics with neutron transport on unstructured grids are presented. The first algorithm, Fully Loosely Concatenation (FLC), focuses on the independence of code development and on independent execution with optimal per-code performance. The second algorithm, Two Level Tightly Concatenation (TLTC), focuses on optimal tradeoffs among the three objectives above. Theoretical analyses of communication complexity and parallel numerical experiments on hundreds of processors on two parallel machines have shown that both algorithms are efficient and can be generalized to other multiphysics parallel numerical simulations. In particular, algorithm TLTC is linearly scalable and has achieved optimal parallel performance. (authors)

  9. Parental nonstandard work schedules during infancy and children's BMI trajectories

    Directory of Open Access Journals (Sweden)

    Afshin Zilanawala

    2017-09-01

    Full Text Available Background: Empirical evidence has demonstrated adverse associations between parental nonstandard work schedules (i.e., evenings, nights, or weekends) and child developmental outcomes. However, there are mixed findings concerning the relationship between parental nonstandard employment and children's body mass index (BMI), and few studies have incorporated information on paternal work schedules. Objective: This paper investigated BMI trajectories from early to middle childhood (ages 3-11) by parental work schedules at 9 months of age, using nationally representative cohort data from the United Kingdom. This study is the first to examine the link between nonstandard work schedules and children's BMI in the United Kingdom. Methods: We used data from the Millennium Cohort Study (2001‒2013, n = 13,021) to estimate trajectories in BMI, using data from ages 3, 5, 7, and 11 years. Joint parental work schedules and a range of biological, socioeconomic, and psychosocial covariates were assessed in the initial interviews at 9 months. Results: Compared to children in two-parent families where parents worked standard shifts, we found steeper BMI growth trajectories for children in two-parent families where both parents worked nonstandard shifts and children in single-parent families whose mothers worked a standard shift. Fathers' shift work, compared to standard shifts, was independently associated with significant increases in BMI. Conclusions: Future public health initiatives focused on reducing the risk of rapid BMI gain in childhood can potentially consider the disruptions to family processes resulting from working nonstandard hours. Contribution: Children in families in which both parents work nonstandard schedules had steeper BMI growth trajectories across the first decade of life. Fathers' nonstandard shifts were independently associated with increases in BMI.

  10. MULTICRITERIA HYBRID FLOW SHOP SCHEDULING PROBLEM: LITERATURE REVIEW, ANALYSIS, AND FUTURE RESEARCH

    Directory of Open Access Journals (Sweden)

    Marcia de Fatima Morais

    2014-12-01

    Full Text Available This research focuses on the Hybrid Flow Shop production scheduling problem, one of the most difficult scheduling problems to solve. The literature points to several studies that address the Hybrid Flow Shop scheduling problem with monocriterion objective functions. However, many real-world problems involve several objective functions that often compete and conflict, leading researchers to direct their efforts toward methods that take this variant into consideration. The goal of this study is to review and analyze the methods proposed in the literature to solve the Hybrid Flow Shop production scheduling problem with multicriteria objective functions. The analysis covers papers published over the years and examines the parallel machine types, the approach used to develop solution methods, the type of method developed, the objective functions, the performance criteria adopted, and the additional constraints considered. The review and analysis of 46 papers revealed opportunities for future research on this topic, including the following: (i) use uniform and dedicated parallel machines, (ii) use exact and metaheuristic approaches, (iii) develop lower and upper bounds, dominance relations, and different search strategies to improve the computational time of exact methods, (iv) develop other types of metaheuristics, (v) work with anticipatory setups, and (vi) add constraints faced by the production systems themselves.

  11. Long-term generation scheduling of Xiluodu and Xiangjiaba cascade hydro plants considering monthly streamflow forecasting error

    International Nuclear Information System (INIS)

    Xie, Mengfei; Zhou, Jianzhong; Li, Chunlong; Zhu, Shuang

    2015-01-01

    Highlights: • Monthly streamflow forecasting error is considered. • An improved parallel progressive optimality algorithm is proposed. • A forecasting dispatching chart is constructed together with a set of rules. • Applications to the Xiluodu and Xiangjiaba cascade hydro plants. - Abstract: Reliable streamflow forecasts are very significant for reservoir operation and hydropower generation. Monthly streamflow forecasts, however, are unreliable and hard to utilize directly, although they have a certain reference value for long-term hydro generation scheduling. Current research mainly focuses on deterministic scheduling, and few studies consider the uncertainties. This paper therefore accounts for the error in monthly streamflow forecasts and proposes a new long-term hydro generation scheduling method, called the forecasting dispatching chart, for the Xiluodu and Xiangjiaba cascade hydro plants. First, in order to capture the uncertainty of inflows, Monte Carlo simulation is employed to generate streamflow data according to the forecast values and error distribution curves. The large amount of data obtained by Monte Carlo simulation is then used as input to the long-term hydro generation scheduling model. Because of the large amount of streamflow data, the computation speed of conventional algorithms cannot meet the demand, so an improved parallel progressive optimality algorithm is proposed to solve the long-term hydro generation scheduling problem, yielding a series of solutions. These solutions constitute an interval set, unlike the unique solution of traditional deterministic long-term hydro generation scheduling. Finally, the confidence intervals of the solutions are calculated and the forecasting dispatching chart is proposed as a new method for long-term hydro generation scheduling, with a corresponding set of rules. The chart is tested for practical operations and achieves
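
    The scenario-generation step might look like the following hedged sketch (the error model here is an assumed zero-truncated normal; the paper derives its error distribution curves from historical forecasts):

        import random

        def simulate_inflows(forecast, error_sd, n_runs=1000):
            """Wrap each monthly forecast in its error distribution and draw
            Monte Carlo inflow scenarios for the scheduling model."""
            return [[max(0.0, random.gauss(f, sd))
                     for f, sd in zip(forecast, error_sd)]
                    for _ in range(n_runs)]

        forecast = [1200, 1500, 2100, 3400]     # hypothetical monthly means
        error_sd = [0.2 * f for f in forecast]  # assumed 20% forecast error
        scenarios = simulate_inflows(forecast, error_sd)
        # Each scenario is run through the generation-scheduling optimizer;
        # the spread of the resulting schedules yields confidence intervals.
        print(len(scenarios), scenarios[0])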

  12. Parallel trajectory similarity joins in spatial networks

    KAUST Repository

    Shang, Shuo

    2018-04-04

    The matching of similar pairs of objects, called similarity join, is fundamental functionality in data management. We consider two cases of trajectory similarity joins (TS-Joins), including a threshold-based join (Tb-TS-Join) and a top-k TS-Join (k-TS-Join), where the objects are trajectories of vehicles moving in road networks. Given two sets of trajectories and a threshold θ, the Tb-TS-Join returns all pairs of trajectories from the two sets with similarity above θ. In contrast, the k-TS-Join does not take a threshold as a parameter, and it returns the top-k most similar trajectory pairs from the two sets. The TS-Joins target diverse applications such as trajectory near-duplicate detection, data cleaning, ridesharing recommendation, and traffic congestion prediction. With these applications in mind, we provide purposeful definitions of similarity. To enable efficient processing of the TS-Joins on large sets of trajectories, we develop search space pruning techniques and enable use of the parallel processing capabilities of modern processors. Specifically, we present a two-phase divide-and-conquer search framework that lays the foundation for the algorithms for the Tb-TS-Join and the k-TS-Join that rely on different pruning techniques to achieve efficiency. For each trajectory, the algorithms first find similar trajectories. Then they merge the results to obtain the final result. The algorithms for the two joins exploit different upper and lower bounds on the spatiotemporal trajectory similarity and different heuristic scheduling strategies for search space pruning. Their per-trajectory searches are independent of each other and can be performed in parallel, and the mergings have constant cost. An empirical study with real data offers insight in the performance of the algorithms and demonstrates that they are capable of outperforming well-designed baseline algorithms by an order of magnitude.
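
    A bare-bones sketch of the per-trajectory parallelism behind the threshold-based join, with the pruning bounds omitted and a placeholder similarity measure standing in for the paper's network-based one:

        from concurrent.futures import ProcessPoolExecutor
        from functools import partial

        def similarity(t1, t2):
            """Stand-in similarity (negative squared distance)."""
            return -sum((a - b) ** 2 for a, b in zip(t1, t2))

        def matches_for(idx, set_p, set_q, theta):
            """Independent per-trajectory search for trajectory idx of P."""
            t = set_p[idx]
            return [(idx, j) for j, u in enumerate(set_q)
                    if similarity(t, u) >= theta]

        def tb_ts_join(set_p, set_q, theta, workers=4):
            """Threshold join: parallel per-trajectory searches, cheap merge."""
            work = partial(matches_for, set_p=set_p, set_q=set_q, theta=theta)
            with ProcessPoolExecutor(max_workers=workers) as ex:
                parts = ex.map(work, range(len(set_p)))
            return [pair for part in parts for pair in part]

        if __name__ == "__main__":
            P = [[0, 1, 2], [5, 5, 5]]
            Q = [[0, 1, 3], [9, 9, 9]]
            print(tb_ts_join(P, Q, theta=-2.0))   # -> [(0, 0)]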

  13. Parallel trajectory similarity joins in spatial networks

    KAUST Repository

    Shang, Shuo; Chen, Lisi; Wei, Zhewei; Jensen, Christian S.; Zheng, Kai; Kalnis, Panos

    2018-01-01

    The matching of similar pairs of objects, called similarity join, is fundamental functionality in data management. We consider two cases of trajectory similarity joins (TS-Joins), including a threshold-based join (Tb-TS-Join) and a top-k TS-Join (k-TS-Join), where the objects are trajectories of vehicles moving in road networks. Given two sets of trajectories and a threshold θ, the Tb-TS-Join returns all pairs of trajectories from the two sets with similarity above θ. In contrast, the k-TS-Join does not take a threshold as a parameter, and it returns the top-k most similar trajectory pairs from the two sets. The TS-Joins target diverse applications such as trajectory near-duplicate detection, data cleaning, ridesharing recommendation, and traffic congestion prediction. With these applications in mind, we provide purposeful definitions of similarity. To enable efficient processing of the TS-Joins on large sets of trajectories, we develop search space pruning techniques and enable use of the parallel processing capabilities of modern processors. Specifically, we present a two-phase divide-and-conquer search framework that lays the foundation for the algorithms for the Tb-TS-Join and the k-TS-Join that rely on different pruning techniques to achieve efficiency. For each trajectory, the algorithms first find similar trajectories. Then they merge the results to obtain the final result. The algorithms for the two joins exploit different upper and lower bounds on the spatiotemporal trajectory similarity and different heuristic scheduling strategies for search space pruning. Their per-trajectory searches are independent of each other and can be performed in parallel, and the mergings have constant cost. An empirical study with real data offers insight in the performance of the algorithms and demonstrates that they are capable of outperforming well-designed baseline algorithms by an order of magnitude.

  14. Rein: Taming Tail Latency in Key-Value Stores via Multiget Scheduling

    KAUST Repository

    Reda, Waleed

    2017-04-17

    We tackle the problem of reducing tail latencies in distributed key-value stores, such as the popular Cassandra database. We focus on workloads of multiget requests, which batch together access to several data elements and parallelize read operations across the data store machines. We first analyze a production trace of a real system and quantify the skew due to multiget sizes, key popularity, and other factors. We then proceed to identify opportunities for reduction of tail latencies by recognizing the composition of aggregate requests and by carefully scheduling bottleneck operations that can otherwise create excessive queues. We design and implement a system called Rein, which reduces latency via inter-multiget scheduling using low overhead techniques. We extensively evaluate Rein via experiments in Amazon Web Services (AWS) and simulations. Our scheduling algorithms reduce the median, 95th, and 99th percentile latencies by factors of 1.5, 1.5, and 1.9, respectively.

  15. Adaptive scheduling with postexamining user selection under nonidentical fading

    KAUST Repository

    Gaaloul, Fakhreddine

    2012-11-01

    This paper investigates an adaptive scheduling algorithm for multiuser environments with statistically independent but nonidentically distributed (i.n.d.) channel conditions. The algorithm aims to reduce feedback load by sequentially and arbitrarily examining the user channels. It also provides improved performance by realizing postexamining best user selection. The first part of the paper presents new formulations for the statistics of the signal-to-noise ratio (SNR) of the scheduled user under i.n.d. channel conditions. The second part capitalizes on the findings in the first part and presents various performance and processing complexity measures for adaptive discrete-time transmission. The results are then extended to investigate the effect of outdated channel estimates on the statistics of the scheduled user SNR, as well as some performance measures. Numerical results are provided to clarify the usefulness of the scheduling algorithm under perfect or outdated channel estimates. © 1967-2012 IEEE.

  16. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with on the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.

  17. Constraint-based job shop scheduling with ILOG SCHEDULER

    NARCIS (Netherlands)

    Nuijten, W.P.M.; Le Pape, C.

    1998-01-01

    We introduce constraint-based scheduling and discuss its main principles. An approximation algorithm based on tree search is developed for the job shop scheduling problem using ILOG SCHEDULER. A new way of calculating lower bounds on the makespan of the job shop scheduling problem is presented and

  18. Static Scheduling of Periodic Hardware Tasks with Precedence and Deadline Constraints on Reconfigurable Hardware Devices

    Directory of Open Access Journals (Sweden)

    Ikbel Belaid

    2011-01-01

    Full Text Available Task graph scheduling for reconfigurable hardware devices can be defined as finding a schedule for a set of periodic tasks with precedence, dependence, and deadline constraints, as well as their optimal allocations on the available heterogeneous hardware resources. This paper proposes a new methodology comprising three main stages. Using these three stages, together with dynamic partial reconfiguration and mixed integer programming, pipelined scheduling and efficient placement are achieved, enabling parallel computing of the task graph on reconfigurable devices while optimizing placement/scheduling quality. Experiments on an application of heterogeneous hardware tasks demonstrate an improvement in resource utilization of 12.45% of the available reconfigurable resources, corresponding to a resource gain of 17.3% compared to a static design. The configuration overhead is reduced to 2% of the total running time. Due to pipelined scheduling, the task graph span is reduced by 4% compared to sequential execution of the graph.

  19. Parallel DSMC Solution of Three-Dimensional Flow Over a Finite Flat Plate

    Science.gov (United States)

    Nance, Robert P.; Wilmoth, Richard G.; Moon, Bongki; Hassan, H. A.; Saltz, Joel

    1994-01-01

    This paper describes a parallel implementation of the direct simulation Monte Carlo (DSMC) method. Runtime library support is used for scheduling and execution of communication between nodes, and domain decomposition is performed dynamically to maintain a good load balance. Performance tests are conducted using the code to evaluate various remapping and remapping-interval policies, and it is shown that a one-dimensional chain-partitioning method works best for the problems considered. The parallel code is then used to simulate the Mach 20 nitrogen flow over a finite-thickness flat plate. It is shown that the parallel algorithm produces results which compare well with experimental data. Moreover, it yields significantly faster execution times than the scalar code, as well as very good load-balance characteristics.

  20. Compiler and Runtime Support for Programming in Adaptive Parallel Environments

    Science.gov (United States)

    1998-10-15

    no other job is waiting for resources, and use a smaller number of processors when other jobs need resources. Setia et al. [15, 20] have shown that such...[15] Vijay K. Naik, Sanjeev Setia, and Mark Squillante. Performance analysis of job scheduling policies in parallel supercomputing environments. In...on networks of heterogeneous workstations. Technical Report CSE-94-012, Oregon Graduate Institute of Science and Technology, 1994. [20] Sanjeev Setia

  1. Test generation for digital circuits using parallel processing

    Science.gov (United States)

    Hartmann, Carlos R.; Ali, Akhtar-Uz-Zaman M.

    1990-12-01

    The problem of test generation for digital logic circuits is an NP-Hard problem. Recently, the availability of low cost, high performance parallel machines has spurred interest in developing fast parallel algorithms for computer-aided design and test. This report describes a method of applying a 15-valued logic system for digital logic circuit test vector generation in a parallel programming environment. A concept called fault site testing allows for test generation, in parallel, that targets more than one fault at a given location. The multi-valued logic system allows results obtained by distinct processors and/or processes to be merged by means of simple set intersections. A machine-independent description is given for the proposed algorithm.

  2. Parallel evolutionary computation in bioinformatics applications.

    Science.gov (United States)

    Pinho, Jorge; Sobral, João Luis; Rocha, Miguel

    2013-05-01

    A large number of optimization problems within the field of Bioinformatics require methods able to handle their inherent complexity (e.g. NP-hard problems) and also demand increased computational efforts. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java-based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of their efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of the parallelism-related modules allows the user to easily configure the environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  3. Multi-core processing and scheduling performance in CMS

    International Nuclear Information System (INIS)

    Hernández, J M; Evans, D; Foulkes, S

    2012-01-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever-increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry, and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs control over a larger quantum of resource, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues, compared to the standard single-core processing workflows.

  4. Caffeine Modulates Vesicle Release and Recovery at Cerebellar Parallel Fibre Terminals, Independently of Calcium and Cyclic AMP Signalling

    Science.gov (United States)

    Dobson, Katharine L.; Jackson, Claire; Balakrishnan, Saju; Bellamy, Tomas C.

    2015-01-01

    Background Cerebellar parallel fibres release glutamate at both the synaptic active zone and at extrasynaptic sites—a process known as ectopic release. These sites exhibit different short-term and long-term plasticity, the basis of which is incompletely understood but depends on the efficiency of vesicle release and recycling. To investigate whether release of calcium from internal stores contributes to these differences in plasticity, we tested the effects of the ryanodine receptor agonist caffeine on both synaptic and ectopic transmission. Methods Whole cell patch clamp recordings from Purkinje neurons and Bergmann glia were carried out in transverse cerebellar slices from juvenile (P16-20) Wistar rats. Key Results Caffeine caused complex changes in transmission at both synaptic and ectopic sites. The amplitude of postsynaptic currents in Purkinje neurons and extrasynaptic currents in Bergmann glia were increased 2-fold and 4-fold respectively, but paired pulse ratio was substantially reduced, reversing the short-term facilitation observed under control conditions. Caffeine treatment also caused synaptic sites to depress during 1 Hz stimulation, consistent with inhibition of the usual mechanisms for replenishing vesicles at the active zone. Unexpectedly, pharmacological intervention at known targets for caffeine—intracellular calcium release, and cAMP signalling—had no impact on these effects. Conclusions We conclude that caffeine increases release probability and inhibits vesicle recovery at parallel fibre synapses, independently of known pharmacological targets. This complex effect would lead to potentiation of transmission at fibres firing at low frequencies, but depression of transmission at high frequency connections. PMID:25933382

  5. GLOA: A New Job Scheduling Algorithm for Grid Computing

    Directory of Open Access Journals (Sweden)

    Zahra Pooranian

    2013-03-01

    Full Text Available The purpose of grid computing is to produce a virtual supercomputer by using free resources available through widespread networks such as the Internet. This resource distribution, changes in resource availability, and an unreliable communication infrastructure pose a major challenge for efficient resource allocation. Because of the geographical spread of resources and their distributed management, grid scheduling is considered to be an NP-complete problem. It has been shown that evolutionary algorithms offer good performance for grid scheduling. This article uses a new evolutionary (distributed) algorithm inspired by the effect of leaders in social groups, the group leaders' optimization algorithm (GLOA), to solve the problem of scheduling independent tasks in a grid computing system. Simulation results comparing GLOA with several other evolutionary algorithms show that GLOA produces shorter makespans.
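
    For context, a common greedy baseline for this problem class, min-min assignment of independent tasks to heterogeneous resources, can be sketched as follows; metaheuristics such as GLOA search the same space of assignments to shorten the makespan further:

        def min_min(task_times, machine_speeds):
            """Repeatedly commit the task whose earliest possible completion
            time is smallest; returns the assignment and the makespan."""
            ready = [0.0] * len(machine_speeds)
            unassigned = set(range(len(task_times)))
            plan = {}
            while unassigned:
                finish, t, m = min((ready[m] + task_times[t] / s, t, m)
                                   for t in unassigned
                                   for m, s in enumerate(machine_speeds))
                plan[t] = m
                ready[m] = finish
                unassigned.remove(t)
            return plan, max(ready)

        # 4 tasks on 2 machines with speeds 1x and 2x
        print(min_min([4, 8, 2, 6], machine_speeds=[1.0, 2.0]))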

  6. Parallel generation of architecture on the GPU

    KAUST Repository

    Steinberger, Markus; Kenzel, Michael; Kainz, Bernhard K.; Müller, Jörg; Wonka, Peter; Schmalstieg, Dieter

    2014-01-01

    they can take advantage of, or both, our method supports state of the art procedural modeling including stochasticity and context-sensitivity. To increase parallelism, we explicitly express independence in the grammar, reduce inter-rule dependencies

  7. Refinery scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Magalhaes, Marcus V.; Fraga, Eder T. [PETROBRAS, Rio de Janeiro, RJ (Brazil); Shah, Nilay [Imperial College, London (United Kingdom)

    2004-07-01

    This work addresses the refinery scheduling problem using mathematical programming techniques. The solution adopted was to decompose the entire refinery model into a crude oil scheduling and a product scheduling problem. The envelope for the crude oil scheduling problem is composed of a terminal, a pipeline and the crude area of a refinery, including the crude distillation units. The solution method adopted includes a decomposition technique based on the topology of the system. The envelope for the product scheduling comprises all tanks, process units and products found in a refinery. Once crude oil scheduling decisions are available, the product scheduling is solved using a rolling horizon algorithm. All models were tested with real data from PETROBRAS' REFAP refinery, located in Canoas, Southern Brazil. (author)

  8. Segment Fixed Priority Scheduling for Self Suspending Real Time Tasks

    Science.gov (United States)

    2016-08-11

    a compute-intensive system such as a self-driving car that we have recently developed [28]. Such systems run computation-demanding algorithms...Applications. In RTSS, 2012. [12] J. Kim et al. Parallel Scheduling for Cyber-Physical Systems: Analysis and Case Study on a Self-Driving Car. In ICCPS...leveraging GPU can be modeled using a multi-segment self-suspending real-time task model. For example, a planning algorithm for autonomous driving can

  9. Selection and scheduling of jobs with time-dependent duration

    OpenAIRE

    DM Seegmuller; SE Visagie; HC de Kock; WJ Pienaar

    2007-01-01

    In this paper two mathematical programming models, both with multiple objective functions, are proposed to solve four related categories of job scheduling problems. All four of these categories have the property that the duration of the jobs is dependent on the time of implementation and in some cases the preceding job. Furthermore, some jobs (restricted to subsets of the total pool of jobs) can, to different extents, run in parallel. In addition, not all the jobs need necessarily be implemen...

  10. Parallel processing of Monte Carlo code MCNP for particle transport problem

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Kawasaki, Takuji

    1996-06-01

    It is possible to vectorize or parallelize Monte Carlo (MC) codes for photon and neutron transport problems by making use of the independence of the calculation for each particle. The applicability of existing MC codes to parallel processing is discussed. As for parallel computers, we used both a vector-parallel processor and a scalar-parallel processor in the performance evaluation. We performed (i) vector-parallel processing of the MCNP code on the Monte Carlo machine Monte-4 with four vector processors and (ii) parallel processing on a Paragon XP/S with 256 processors. In this report we describe the methodology and results of parallel processing on these two types of parallel and distributed-memory computers. In addition, we evaluate the parallel programming environments of the parallel computers used in the present work, as part of the work developing the STA (Seamless Thinking Aid) Basic Software. (author)
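
    The per-particle independence that enables this parallelization can be illustrated with a toy history-parallel Monte Carlo (a one-dimensional absorb-or-leak slab, not real MCNP physics):

        import random
        from multiprocessing import Pool

        def track_history(seed):
            """Follow one particle history; each history is independent of
            all others, which is what makes MC transport parallelize."""
            rng = random.Random(seed)
            x = 0.0
            while True:
                x += rng.expovariate(1.0)  # distance to the next collision
                if x >= 5.0:               # slab is 5 mean free paths thick
                    return 1               # escaped before the next collision
                if rng.random() < 0.5:     # absorbed; else forward-scatter
                    return 0

        if __name__ == "__main__":
            n = 100_000
            with Pool() as pool:           # one history per task, any core count
                leaked = sum(pool.map(track_history, range(n), chunksize=1000))
            print(f"transmission probability ~ {leaked / n:.4f}")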

  11. Protocol-transparent resource sharing in hierarchically scheduled real-time systems

    NARCIS (Netherlands)

    Heuvel, van den M.M.H.P.; Bril, R.J.; Lukkien, J.J.

    2010-01-01

    Hierarchical scheduling frameworks (HSFs) provide means for composing complex real-time systems from well-defined, independently analyzed subsystems. To support resource sharing within two-level HSFs, three synchronization protocols based on the stack resource policy (SRP) have recently been

  12. A Parallel Workload Model and its Implications for Processor Allocation

    Science.gov (United States)

    1996-11-01

    with SEV or AVG, both of which can tolerate c = 0.4–0.6 before their performance deteriorates significantly. On the other hand, Setia [10] has...Sanjeev K. Setia. The interaction between memory allocation and adaptive partitioning in message-passing multicomputers. In IPPS '95 Workshop on Job...Scheduling Strategies for Parallel Processing, pages 89–99, 1995. [11] Sanjeev K. Setia and Satish K. Tripathi. An analysis of several processor

  13. Decoupling algorithms from schedules for easy optimization of image processing pipelines

    OpenAIRE

    Adams, Andrew; Paris, Sylvain; Levoy, Marc; Ragan-Kelley, Jonathan Millar; Amarasinghe, Saman P.; Durand, Fredo

    2012-01-01

    Using existing programming tools, writing high-performance image processing code requires sacrificing readability, portability, and modularity. We argue that this is a consequence of conflating what computations define the algorithm, with decisions about storage and the order of computation. We refer to these latter two concerns as the schedule, including choices of tiling, fusion, recomputation vs. storage, vectorization, and parallelism. We propose a representation for feed-forward imagi...
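
    The decoupling can be illustrated with a hedged NumPy sketch (not Halide itself): the same two-stage blur evaluated under two schedules, breadth-first with full intermediate storage versus fused per tile with a recomputed halo, produces identical results with different storage and recomputation tradeoffs:

        import numpy as np

        def blur(v):
            """The *algorithm*: a circular 3-tap box filter (one stage)."""
            return (np.roll(v, 1) + v + np.roll(v, -1)) / 3.0

        def two_stage_breadth_first(x):
            """Schedule 1: evaluate stage 1 everywhere and store it, then
            evaluate stage 2 (maximum storage, zero recomputation)."""
            return blur(blur(x))

        def two_stage_fused_tiles(x, tile=256):
            """Schedule 2: fuse both stages per tile, recomputing a small
            halo of stage 1 at each tile edge (less storage, some rework)."""
            n, out = len(x), np.empty_like(x)
            for lo in range(0, n, tile):
                hi = min(lo + tile, n)
                idx = np.arange(lo - 2, hi + 2) % n   # tile plus halo
                s1 = blur(x[idx])
                s2 = blur(s1)
                out[lo:hi] = s2[2:2 + (hi - lo)]      # interior is valid
            return out

        x = np.random.rand(1024)
        print(np.allclose(two_stage_breadth_first(x),
                          two_stage_fused_tiles(x)))   # True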

  14. TME (Task Mapping Editor): tool for executing distributed parallel computing. TME user's manual

    International Nuclear Information System (INIS)

    Takemiya, Hiroshi; Yamagishi, Nobuhiro; Imamura, Toshiyuki

    2000-03-01

    At the Center for Promotion of Computational Science and Engineering, a software environment PPExe has been developed to support scientific computing on a parallel computer cluster (distributed parallel scientific computing). TME (Task Mapping Editor) is one of the components of PPExe and provides a visual programming environment for distributed parallel scientific computing. Users can specify data dependence among tasks (programs) visually as a data flow diagram and map these tasks onto computers interactively through the GUI of TME. The specified tasks are processed by other components of PPExe such as the Meta-scheduler, RIM (Resource Information Monitor), and EMS (Execution Management System) according to the execution order of these tasks determined by TME. In this report, we describe the usage of TME. (author)

  15. Monolithic Parallel Tandem Organic Photovoltaic Cell with Transparent Carbon Nanotube Interlayer

    Science.gov (United States)

    Tanaka, S.; Mielczarek, K.; Ovalle-Robles, R.; Wang, B.; Hsu, D.; Zakhidov, A. A.

    2009-01-01

    We demonstrate an organic photovoltaic cell with a monolithic tandem structure in parallel connection. Transparent multiwalled carbon nanotube sheets are used as an interlayer anode electrode for this parallel tandem. The characteristics of front and back cells are measured independently. The short circuit current density of the parallel tandem cell is larger than the currents of each individual cell. The wavelength dependence of photocurrent for the parallel tandem cell shows the superposition spectrum of the two spectral sensitivities of the front and back cells. The monolithic three-electrode photovoltaic cell indeed operates as a parallel tandem with improved efficiency.

  16. On the adequacy of message-passing parallel supercomputers for solving neutron transport problems

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1990-01-01

    A coarse-grained, static-scheduling parallelization of the standard iterative scheme used for solving the discrete-ordinates approximation of the neutron transport equation is described. The parallel algorithm is based on a decomposition of the angular domain along the discrete ordinates, thus naturally producing a set of completely uncoupled systems of equations in each iteration. Implementation of the parallel code on Intel's iPSC/2 hypercube, and solutions to test problems are presented as evidence of the high speedup and efficiency of the parallel code. The performance of the parallel code on the iPSC/2 is analyzed, and a model for the CPU time as a function of the problem size (order of angular quadrature) and the number of participating processors is developed and validated against measured CPU times. The performance model is used to speculate on the potential of massively parallel computers for significantly speeding up real-life transport calculations at acceptable efficiencies. We conclude that parallel computers with a few hundred processors are capable of producing large speedups at very high efficiencies in very large three-dimensional problems. 10 refs., 8 figs

  17. Parallel computing of physical maps--a comparative study in SIMD and MIMD parallelism.

    Science.gov (United States)

    Bhandarkar, S M; Chirravuri, S; Arnold, J

    1996-01-01

    Ordering clones from a genomic library into physical maps of whole chromosomes presents a central computational problem in genetics. Chromosome reconstruction via clone ordering is usually isomorphic to the NP-complete Optimal Linear Arrangement problem. Parallel SIMD and MIMD algorithms for simulated annealing based on Markov chain distribution are proposed and applied to the problem of chromosome reconstruction via clone ordering. Perturbation methods and problem-specific annealing heuristics are proposed and described. The SIMD algorithms are implemented on a 2048 processor MasPar MP-2 system which is an SIMD 2-D toroidal mesh architecture whereas the MIMD algorithms are implemented on an 8 processor Intel iPSC/860 which is an MIMD hypercube architecture. A comparative analysis of the various SIMD and MIMD algorithms is presented in which the convergence, speedup, and scalability characteristics of the various algorithms are analyzed and discussed. On a fine-grained, massively parallel SIMD architecture with a low synchronization overhead such as the MasPar MP-2, a parallel simulated annealing algorithm based on multiple periodically interacting searches performs the best. For a coarse-grained MIMD architecture with high synchronization overhead such as the Intel iPSC/860, a parallel simulated annealing algorithm based on multiple independent searches yields the best results. In either case, distribution of clonal data across multiple processors is shown to exacerbate the tendency of the parallel simulated annealing algorithm to get trapped in a local optimum.
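
    A compact sketch of the multiple-independent-searches strategy that performed best on the MIMD machine (a toy tour-cost objective stands in for the clone-ordering objective):

        import math
        import random
        from multiprocessing import Pool

        def cost(order, dist):
            """Linear-arrangement-style cost of visiting items in order."""
            return sum(dist[a][b] for a, b in zip(order, order[1:]))

        def anneal(args):
            """One independent SA chain; no synchronization with the other
            chains until the final reduction picks the best result."""
            seed, dist, steps = args
            rng = random.Random(seed)
            order = list(range(len(dist)))
            rng.shuffle(order)
            cur, temp = cost(order, dist), 1.0
            for _ in range(steps):
                i, j = rng.sample(range(len(order)), 2)
                order[i], order[j] = order[j], order[i]     # propose a swap
                new = cost(order, dist)
                if new <= cur or rng.random() < math.exp((cur - new) / temp):
                    cur = new                               # accept the move
                else:
                    order[i], order[j] = order[j], order[i] # undo the swap
                temp *= 0.999
            return cur, order

        if __name__ == "__main__":
            n = 20
            dist = [[abs(a - b) for b in range(n)] for a in range(n)]
            with Pool() as pool:   # several independent chains in parallel
                results = pool.map(anneal, [(s, dist, 20000) for s in range(8)])
            print(min(results)[0])  # best cost over all chains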

  18. A Study on the Enhanced Best Performance Algorithm for the Just-in-Time Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Sivashan Chetty

    2015-01-01

    Full Text Available The Just-In-Time (JIT scheduling problem is an important subject of study. It essentially constitutes the problem of scheduling critical business resources in an attempt to optimize given business objectives. This problem is NP-Hard in nature, hence requiring efficient solution techniques. To solve the JIT scheduling problem presented in this study, a new local search metaheuristic algorithm, namely, the enhanced Best Performance Algorithm (eBPA, is introduced. This is part of the initial study of the algorithm for scheduling problems. The current problem setting is the allocation of a large number of jobs required to be scheduled on multiple and identical machines which run in parallel. The due date of a job is characterized by a window frame of time, rather than a specific point in time. The performance of the eBPA is compared against Tabu Search (TS and Simulated Annealing (SA. SA and TS are well-known local search metaheuristic algorithms. The results show the potential of the eBPA as a metaheuristic algorithm.

  19. Information criteria for quantifying loss of reversibility in parallelized KMC

    Energy Technology Data Exchange (ETDEWEB)

    Gourgoulias, Konstantinos, E-mail: gourgoul@math.umass.edu; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Rey-Bellet, Luc, E-mail: luc@math.umass.edu

    2017-01-01

    Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.

  20. Parallel processor for fast event analysis

    International Nuclear Information System (INIS)

    Hensley, D.C.

    1983-01-01

    Current maximum data rates from the Spin Spectrometer of approx. 5000 events/s (up to 1.3 MBytes/s) and minimum analysis requiring at least 3000 operations/event require a CPU cycle time near 70 ns. In order to achieve an effective cycle time of 70 ns, a parallel processing device is proposed in which up to 4 independent processors will be implemented in parallel. The individual processors are designed around the Am2910 microsequencer, the Am29116 μP, and the Am29517 multiplier. Satellite histogramming in a mass memory system will be managed by a commercial 16-bit μP system.

  1. Hybrid glowworm swarm optimization for task scheduling in the cloud environment

    Science.gov (United States)

    Zhou, Jing; Dong, Shoubin

    2018-06-01

    In recent years many heuristic algorithms have been proposed to solve task scheduling problems in the cloud environment owing to their optimization capability. This article proposes a hybrid glowworm swarm optimization (HGSO) algorithm based on glowworm swarm optimization (GSO), which combines a technique of evolutionary computation, a quantum-behaviour strategy based on the neighbourhood principle, offspring production, and random walk to achieve more efficient scheduling at reasonable scheduling cost. The proposed HGSO reduces redundant computation and the dependence on the initialization of GSO, accelerates convergence, and escapes more easily from local optima. The conducted experiments and statistical analysis showed that in most cases the proposed HGSO algorithm outperformed previous heuristic algorithms in dealing with independent tasks.
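
    For reference, the canonical GSO update rules that HGSO builds on (as given in the standard GSO literature; HGSO's exact quantum-behaviour and offspring operators are not reproduced here) are the luciferin update and the probabilistic movement rule

    \[
      \ell_i(t) = (1-\rho)\,\ell_i(t-1) + \gamma\,J\big(x_i(t)\big),
      \qquad
      p_{ij}(t) = \frac{\ell_j(t) - \ell_i(t)}{\sum_{k \in N_i(t)} \big(\ell_k(t) - \ell_i(t)\big)},
    \]

    where \rho is the luciferin decay constant, \gamma the enhancement constant, J the scheduling-cost fitness, and N_i(t) the neighbourhood of glowworm i at iteration t.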

  2. An FMS Dynamic Production Scheduling Algorithm Considering Cutting Tool Failure and Cutting Tool Life

    International Nuclear Information System (INIS)

    Setiawan, A; Wangsaputra, R; Halim, A H; Martawirya, Y Y

    2016-01-01

    This paper deals with Flexible Manufacturing System (FMS) production rescheduling due to unavailability of cutting tools, caused either by cutting tool failure or by reaching the tool life limit. The FMS consists of parallel identical machines integrated with an automatic material handling system and runs fully automatically. Each machine has the same cutting tool configuration, consisting of geometrically different cutting tool types in each tool magazine. A job usually takes two stages. Each stage has sequential operations allocated to machines with the cutting tool life taken into account. In practice, a cutting tool can fail before its expected life is reached. The objective of this paper is to develop a dynamic scheduling algorithm for the case in which a cutting tool breaks during unmanned operation and rescheduling is needed. The algorithm consists of four steps: the first step generates the initial schedule, the second determines the cutting tool failure time, the third determines the system status at the failure time, and the fourth reschedules the unfinished jobs. The approaches used to solve the problem are complete-reactive scheduling and robust-proactive scheduling. The new schedules differ from the initial schedule in the starting and completion times of the operations. (paper)
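
    A minimal sketch of the four-step reactive structure just described (hypothetical data model, not the authors' algorithm): completed operations are frozen, the system status at the failure instant is computed, and unfinished operations are re-dispatched.

      # Sketch of reactive rescheduling after a tool failure (hypothetical
      # data model). An operation is (job, machine, start, end); operations
      # unfinished at t_fail are re-dispatched on their machines, with the
      # failed machine unavailable until t_repair.
      def reschedule(schedule, t_fail, failed_machine, t_repair):
          done = [op for op in schedule if op[3] <= t_fail]        # freeze finished work
          pending = sorted((op for op in schedule if op[3] > t_fail),
                           key=lambda op: op[2])
          free = {op[1]: t_fail for op in schedule}                # machine status at t_fail
          free[failed_machine] = max(free[failed_machine], t_repair)
          new = []
          for job, mach, start, end in pending:                    # rebuild the schedule
              dur = end - max(start, t_fail)     # remaining duration of the operation
              begin = free[mach]
              free[mach] = begin + dur
              new.append((job, mach, begin, begin + dur))
          return done + new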

  3. Individual differences in strategic flight management and scheduling

    Science.gov (United States)

    Wickens, Christopher D.; Raby, Mireille

    1991-01-01

    A group of 30 instrument-rated pilots flew simulator approaches to three airports under low-, medium-, and high-workload conditions. An analysis is conducted of the differences in discrete-task scheduling between the 10 highest and 10 lowest performing pilots in the sample; this categorization was based on the mean of various flight-profile measures. The two groups were found to differ from each other only in the times at which specific events were conducted and in the optimality of scheduling for certain high-priority tasks. These results are assessed in view of the relative independence of task-management skills from aircraft-control skills.

  4. Scheduling Two-Sided Transformations Using Tile Algorithms on Multicore Architectures

    Directory of Open Access Journals (Sweden)

    Hatem Ltaief

    2010-01-01

    Full Text Available The objective of this paper is to describe, in the context of multicore architectures, three different scheduler implementations for two-sided linear algebra transformations, in particular the Hessenberg and bidiagonal reductions, which are the first steps for the standard eigenvalue problem and the singular value decomposition, respectively. State-of-the-art dense linear algebra software, such as the LAPACK and ScaLAPACK libraries, suffers performance losses on multicore processors due to an inability to fully exploit thread-level parallelism. At the same time, the fine-grain dataflow model is gaining popularity as a paradigm for programming multicore architectures. Buttari et al. (Parallel Comput. Syst. Appl. 35 (2009), 38–53) introduced the concept of tile algorithms, in which parallelism is no longer hidden inside Basic Linear Algebra Subprograms but is brought to the fore to yield much better performance. Along with efficient scheduling mechanisms for data-driven execution, these tile two-sided reductions achieve high-performance computing, reaching up to 75% of the DGEMM peak on a 12000×12000 matrix with 16 Intel Tigerton 2.4 GHz processors. The main drawback of the tile-algorithm approach for two-sided transformations is that the full reduction cannot be obtained in one stage; other methods have to be considered to further reduce the band matrices to the required forms.

  5. Coordination between Generation and Transmission Maintenance Scheduling by Means of Multi-agent Technique

    Science.gov (United States)

    Nagata, Takeshi; Tao, Yasuhiro; Utatani, Masahiro; Sasaki, Hiroshi; Fujita, Hideki

    This paper proposes a multi-agent approach to maintenance scheduling in restructured power systems. The restructuring of the electric power industry has resulted in market-based approaches for unbundling the multitude of services provided by self-interested entities such as power generating companies (GENCOs), transmission providers (TRANSCOs), and distribution companies (DISCOs). The Independent System Operator (ISO) is responsible for the security of system operation. The schedules submitted to the ISO by GENCOs and TRANSCOs should satisfy security and reliability constraints. The proposed method consists of several GENCO Agents (GAGs), TRANSCO Agents (TAGs), and an ISO Agent (IAG). The IAG's role in maintenance scheduling is limited to ensuring that the submitted schedules do not cause transmission congestion or endanger the system reliability. From the simulation results, it can be seen that the proposed multi-agent approach can coordinate generation and transmission maintenance schedules.

  6. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    Science.gov (United States)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  7. Optimal parallel algorithms for problems modeled by a family of intervals

    Science.gov (United States)

    Olariu, Stephan; Schwing, James L.; Zhang, Jingyuan

    1992-01-01

    A family of intervals on the real line provides a natural model for a vast number of scheduling and VLSI problems. Recently, a number of parallel algorithms to solve a variety of practical problems on such a family of intervals have been proposed in the literature. Computational tools are developed, and it is shown how they can be used for the purpose of devising cost-optimal parallel algorithms for a number of interval-related problems including finding a largest subset of pairwise nonoverlapping intervals, a minimum dominating subset of intervals, along with algorithms to compute the shortest path between a pair of intervals and, based on the shortest path, a parallel algorithm to find the center of the family of intervals. More precisely, with an arbitrary family of n intervals as input, all algorithms run in O(log n) time using O(n) processors in the EREW-PRAM model of computation.
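
    For reference, the first problem in this list (a largest subset of pairwise nonoverlapping intervals) is solved sequentially by the classic earliest-finish-time greedy rule, sketched below; the paper's contribution is the O(log n)-time, O(n)-processor EREW-PRAM version, which this sequential sketch does not reproduce.

      # Classic greedy for a maximum set of pairwise nonoverlapping
      # intervals: sort by right endpoint and repeatedly take the first
      # interval starting no earlier than the last chosen one ends
      # (intervals sharing an endpoint are treated as nonoverlapping).
      def max_nonoverlapping(intervals):
          chosen, last_end = [], float("-inf")
          for left, right in sorted(intervals, key=lambda iv: iv[1]):
              if left >= last_end:
                  chosen.append((left, right))
                  last_end = right
          return chosen

      print(max_nonoverlapping([(1, 4), (2, 3), (3, 6), (5, 7)]))  # [(2, 3), (3, 6)]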

  8. Scheduling a maintenance activity under skills constraints to minimize total weighted tardiness and late tasks

    Directory of Open Access Journals (Sweden)

    Djalal Hedjazi

    2015-04-01

    Full Text Available Skill management is a key factor in improving the effectiveness of industrial companies, notably their maintenance services. The problem considered in this paper concerns the scheduling of maintenance tasks under resource (maintenance team) constraints. This problem is generally known as unrelated parallel machine scheduling. We consider the problem with the two objectives of minimizing the total weighted tardiness (TWT) and the number of tardy tasks. Our interest focuses particularly on solving this problem under skill constraints, in which each resource has a skill level. We propose a new efficient heuristic to obtain an approximate solution for this NP-hard problem and demonstrate its effectiveness through computational experiments. The heuristic is designed for a static maintenance scheduling problem (with unequal release dates, processing times, and resource skills) while minimizing the aforementioned objective functions.
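
    As an illustration of the kind of list-scheduling heuristic involved (a generic sketch, not the authors' heuristic), the following assigns each task to the earliest-available team whose skill level qualifies, picking tasks in order of a weighted due-date priority:

      # Generic list-scheduling sketch for weighted-tardiness maintenance
      # scheduling with skill constraints (illustrative only). Each task
      # carries release, duration, due, weight, and required skill level;
      # a team may run a task only if its skill level is sufficient
      # (at least one eligible team is assumed to exist per task).
      def schedule(tasks, team_skill):
          free = {t: 0 for t in team_skill}              # team -> next free time
          order = sorted(tasks, key=lambda k: k["due"] / k["weight"])
          plan, twt, n_tardy = [], 0, 0
          for task in order:
              eligible = [t for t in team_skill if team_skill[t] >= task["skill"]]
              team = min(eligible, key=lambda t: free[t])
              start = max(free[team], task["release"])
              finish = start + task["duration"]
              free[team] = finish
              tardiness = max(0, finish - task["due"])
              twt += task["weight"] * tardiness          # total weighted tardiness
              n_tardy += tardiness > 0                   # count of tardy tasks
              plan.append((task["id"], team, start, finish))
          return plan, twt, n_tardy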

  9. Parallelization of ITOUGH2 using PVM

    International Nuclear Information System (INIS)

    Finsterle, Stefan

    1998-01-01

    ITOUGH2 inversions are computationally intensive because the forward problem must be solved many times to evaluate the objective function for different parameter combinations or to numerically calculate sensitivity coefficients. Most of these forward runs are independent of each other and can therefore be performed in parallel. Message passing based on the Parallel Virtual Machine (PVM) system has been implemented into ITOUGH2 to enable parallel processing of ITOUGH2 jobs on a heterogeneous network of Unix workstations. This report describes the PVM system and its implementation into ITOUGH2. Instructions are given for installing PVM, compiling ITOUGH2-PVM for use on a workstation cluster, preparing an ITOUGH2 input file under PVM, and executing an ITOUGH2-PVM application. Examples are discussed, demonstrating the use of ITOUGH2-PVM.
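
    The parallel structure exploited here is a plain master-worker farm: the master distributes independent forward runs and gathers the resulting objective-function values. The sketch below illustrates the pattern with Python's multiprocessing rather than PVM/Fortran; forward_run and the parameter sets are hypothetical placeholders.

      # Master-worker pattern for independent forward runs, illustrated
      # with multiprocessing instead of PVM. forward_run() is a
      # hypothetical stand-in for one forward simulation.
      from multiprocessing import Pool

      def forward_run(params):
          # Placeholder: objective-function value for one parameter
          # combination (e.g., a perturbed set used to build a
          # finite-difference sensitivity column).
          return sum(p * p for p in params)

      if __name__ == "__main__":
          parameter_sets = [(1.0, 2.0), (1.1, 2.0), (1.0, 2.1)]  # base + perturbations
          with Pool() as pool:              # one worker per available core
              objectives = pool.map(forward_run, parameter_sets)
          print(objectives)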

  10. Iteration schemes for parallelizing models of superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)

    1996-12-31

    The time-dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-T_c superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.

  11. Investigation of implementing a synchronization protocol under multiprocessors hierarchical scheduling

    NARCIS (Netherlands)

    Nemati, F.; Behnam, M.; Bril, R.J.

    2009-01-01

    In the multi-core and multiprocessor domain, there has been considerable work done on scheduling techniques assuming that real-time tasks are independent. In practice a typical real-time system usually share logical resources among tasks. However, synchronization in the multiprocessor area has not

  12. Sequential and parallel image restoration: neural network implementations.

    Science.gov (United States)

    Figueiredo, M T; Leitao, J N

    1994-01-01

    Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high-dimensional convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem onto the energy of a predefined network, taking advantage of its energy-minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
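
    For the degradation model above (linear blur H plus additive white Gaussian noise), MAP estimation with a quadratic regularizer reduces to minimizing ||y - Hx||^2 + \lambda ||Dx||^2. A minimal NumPy sketch of one convergent iterative minimizer (plain gradient descent, standing in for the modified-Hopfield updating schedules) is:

      # Gradient descent on the MAP/regularization objective
      #   J(x) = ||y - H x||^2 + lam * ||D x||^2,
      # a simple stand-in for the sequential/parallel Hopfield-style updates.
      import numpy as np

      def restore(y, H, D, lam=0.1, step=1e-3, iters=500):
          x = np.zeros(H.shape[1])
          for _ in range(iters):
              grad = 2 * H.T @ (H @ x - y) + 2 * lam * (D.T @ (D @ x))
              x -= step * grad        # converges for a sufficiently small step
          return x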

  13. Single machine scheduling with time-dependent linear deterioration and rate-modifying maintenance

    OpenAIRE

    Rustogi, Kabir; Strusevich, Vitaly A.

    2015-01-01

    We study single machine scheduling problems with linear time-dependent deterioration effects and maintenance activities. Maintenance periods (MPs) are included into the schedule, so that the machine, that gets worse during the processing, can be restored to a better state. We deal with a job-independent version of the deterioration effects, that is, all jobs share a common deterioration rate. However, we introduce a novel extension to such models and allow the deterioration rates to change af...

  14. Sharing Data for Production Scheduling Using the ISA-95 Standard

    Energy Technology Data Exchange (ETDEWEB)

    Harjunkoski, Iiro, E-mail: iiro.harjunkoski@de.abb.com; Bauer, Reinhard [ABB Corporate Research, Industrial Software and Applications, Ladenburg (Germany)

    2014-10-21

    In the development and deployment of production scheduling solutions, one major challenge is to establish efficient information sharing with industrial production management systems. Information comprising production orders to be scheduled, processing plant structure, product recipes, available equipment, and other resources are necessary for producing a realistic short-term production plan. Currently, a widely accepted standard for information sharing is missing. This often leads to the implementation of costly custom-tailored interfaces, or in the worst case the scheduling solution will be abandoned. Additionally, it becomes difficult to easily compare different methods on various problem instances, which complicates the re-use of existing scheduling solutions. In order to overcome these hurdles, a platform-independent and holistic approach is needed. Nevertheless, it is difficult for any new solution to gain wide acceptance within industry as new standards are often refused by companies already using a different established interface. From an acceptance point of view, the ISA-95 standard could act as a neutral data-exchange platform. In this paper, we assess if this already widespread standard is simple, yet powerful enough to act as the desired holistic data exchange for scheduling solutions.

  15. Sharing Data for Production Scheduling Using the ISA-95 Standard

    International Nuclear Information System (INIS)

    Harjunkoski, Iiro; Bauer, Reinhard

    2014-01-01

    In the development and deployment of production scheduling solutions, one major challenge is to establish efficient information sharing with industrial production management systems. Information comprising production orders to be scheduled, processing plant structure, product recipes, available equipment, and other resources are necessary for producing a realistic short-term production plan. Currently, a widely accepted standard for information sharing is missing. This often leads to the implementation of costly custom-tailored interfaces, or in the worst case the scheduling solution will be abandoned. Additionally, it becomes difficult to easily compare different methods on various problem instances, which complicates the re-use of existing scheduling solutions. In order to overcome these hurdles, a platform-independent and holistic approach is needed. Nevertheless, it is difficult for any new solution to gain wide acceptance within industry as new standards are often refused by companies already using a different established interface. From an acceptance point of view, the ISA-95 standard could act as a neutral data-exchange platform. In this paper, we assess if this already widespread standard is simple, yet powerful enough to act as the desired holistic data exchange for scheduling solutions.

  16. Sharing data for production scheduling using the ISA-95 standard

    Directory of Open Access Journals (Sweden)

    Iiro Harjunkoski

    2014-10-01

    Full Text Available In the development and deployment of production scheduling solutions one major challenge is to establish efficient information sharing with industrial production management systems. Information comprising production orders to be scheduled, processing plant structure, product recipes, available equipment and other resources are necessary for producing a realistic short-term production plan. Currently, a widely-accepted standard for information sharing is missing. This often leads to the implementation of costly custom-tailored interfaces, or in the worst case the scheduling solution will be abandoned. Additionally, it becomes difficult to easily compare different methods on various problem instances, which complicates the re-use of existing scheduling solutions. In order to overcome these hurdles, a platform-independent and holistic approach is needed. Nevertheless, it is difficult for any new solution to gain wide acceptance within industry as new standards are often refused by companies already using a different established interface. From an acceptance point of view, the ISA-95 standard could act as a neutral data-exchange platform. In this paper, we assess if this already widespread standard is simple, yet powerful enough to act as the desired holistic data-exchange for scheduling solutions.

  17. REPAIR SHOP JOB SCHEDULING WITH PARALLEL OPERATORS AND MULTIPLE CONSTRAINTS USING SIMULATED ANNEALING

    Directory of Open Access Journals (Sweden)

    N. Shivasankaran

    2013-04-01

    Full Text Available Scheduling problems are generally treated as NP-complete combinatorial optimization problems that are multi-objective and multi-constraint in nature. Repair-shop job sequencing and operator allocation is one such NP-complete problem. For such problems, an efficient technique is required that explores a wide range of the solution space. This paper applies simulated annealing, a metaheuristic, to solve the complex car sequencing and operator allocation problem in a car repair shop. The algorithm is tested with several constraint settings, and the solution quality exceeds the results reported in the literature with high convergence speed and accuracy. This algorithm can be considered quite effective where other heuristic routines fail.

  18. Selection and scheduling of jobs with time-dependent duration

    Directory of Open Access Journals (Sweden)

    DM Seegmuller

    2007-06-01

    Full Text Available In this paper two mathematical programming models, both with multiple objective functions, are proposed to solve four related categories of job scheduling problems. All four of these categories have the property that the duration of a job depends on its time of implementation and, in some cases, on the preceding job. Furthermore, some jobs (restricted to subsets of the total pool of jobs) can, to different extents, run in parallel. In addition, not all the jobs need necessarily be implemented during the given time period.

  19. An Improved Hierarchical Genetic Algorithm for Sheet Cutting Scheduling with Process Constraints

    OpenAIRE

    Yunqing Rao; Dezhong Qi; Jinling Li

    2013-01-01

    For the first time, an improved hierarchical genetic algorithm for sheet cutting problem which involves n cutting patterns for m non-identical parallel machines with process constraints has been proposed in the integrated cutting stock model. The objective of the cutting scheduling problem is minimizing the weighted completed time. A mathematical model for this problem is presented, an improved hierarchical genetic algorithm (ant colony—hierarchical genetic algorithm) is developed for better ...

  20. Instructions, multiple schedules, and extinction: Distinguishing rule-governed from schedule-controlled behavior.

    Science.gov (United States)

    Hayes, S C; Brownstein, A J; Haas, J R; Greenway, D E

    1986-09-01

    Schedule sensitivity has usually been examined either through a multiple schedule or through changes in schedules after steady-state responding has been established. This study compared the effects of these two procedures when various instructions were given. Fifty-five college students responded in two 32-min sessions under a multiple fixed-ratio 18/differential-reinforcement-of-low-rate 6-s schedule, followed by one session of extinction. Some subjects received no instructions regarding the appropriate rates of responding, whereas others received instructions to respond slowly, rapidly, or both. Relative to the schedule in operation, the instructions were minimal, partially inaccurate, or accurate. When there was little schedule sensitivity in the multiple schedule, there was little in extinction. When apparently schedule-sensitive responding occurred in the multiple schedule, however, sensitivity in extinction occurred only if differential responding in the multiple schedule could not be due to rules supplied by the experimenter. This evidence shows that rule-governed behavior that occurs in the form of schedule-sensitive behavior may not in fact become schedule-sensitive even though it makes contact with the scheduled reinforcers.

  1. Extending an open-source real-time operating system with hierarchical scheduling

    NARCIS (Netherlands)

    Holenderski, M.J.; Cools, W.A.; Bril, R.J.; Lukkien, J.J.

    2010-01-01

    Hierarchical scheduling frameworks (HSFs) have been devised to support the integration of independently developed and analyzed subsystems. This paper presents an efficient, modular and extendible design for enhancing a real-time operating system with periodic tasks, two-level fixed-priority HSF

  2. Real-time scheduling of software tasks

    International Nuclear Information System (INIS)

    Hoff, L.T.

    1995-01-01

    When designing real-time systems, it is often desirable to schedule execution of software tasks based on the occurrence of events. The events may be clock ticks, interrupts from a hardware device, or software signals from other software tasks. If the nature of the events is well understood, this scheduling is normally a static part of the system design. If the nature of the events is not completely understood, or is expected to change over time, it may be necessary to provide a mechanism for adjusting the scheduling of the software tasks. RHIC front-end computers (FECs) provide such a mechanism. The goals in designing this mechanism were to be as independent as possible of the underlying operating system, to allow for future expansion of the mechanism to handle new types of events, and to allow easy configuration. Some considerations which steered the design were the programming paradigm (object-oriented vs. procedural), the programming language, and whether events are merely interesting moments in time or intrinsically have data associated with them. The design also needed to address performance and robustness tradeoffs involving shared task contexts, task priorities, and the use of interrupt service routine (ISR) contexts vs. task contexts. This paper explores these considerations and tradeoffs.

  3. DLTAP: A Network-efficient Scheduling Method for Distributed Deep Learning Workload in Containerized Cluster Environment

    Directory of Open Access Journals (Sweden)

    Qiao Wei

    2017-01-01

    Full Text Available Deep neural networks (DNNs) have recently yielded strong results on a range of applications. Training these DNNs using a cluster of commodity machines is a promising approach since training is time-consuming and compute-intensive. Furthermore, putting DNN tasks into containers of clusters would enable broader and easier deployment of DNN-based algorithms. Toward this end, this paper addresses the problem of scheduling DNN tasks in a containerized cluster environment. Efficiently scheduling data-parallel computation jobs like DNNs over containerized clusters is critical for job performance, system throughput, and resource utilization, and it becomes even more challenging with complex workloads. We propose a scheduling method called Deep Learning Task Allocation Priority (DLTAP) which makes scheduling decisions in a distributed manner; each decision takes the aggregation degree of parameter-server tasks and worker tasks into account, in particular to reduce cross-node network transmission traffic and, correspondingly, decrease the DNN training time. We evaluate the DLTAP scheduling method using a state-of-the-art distributed DNN training framework on 3 benchmarks. The results show that the proposed method reduces cross-node network traffic by an average of 12% and decreases DNN training time, even on a cluster of low-end servers.
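
    A toy sketch of the priority idea just described (hypothetical scoring; the actual DLTAP decision rule is not reproduced here): when placing a worker task, nodes already hosting the same job's parameter-server task score higher, so co-location cuts cross-node traffic between parameter servers and workers.

      # Toy placement scorer in the spirit of DLTAP (hypothetical formula):
      # prefer nodes already hosting this job's parameter-server (PS)
      # tasks, breaking ties by free capacity.
      def best_node(nodes, job):
          def score(node):
              colocated = sum(1 for t in node["tasks"]
                              if t["job"] == job and t["role"] == "ps")
              return (colocated, node["free_cpu"])   # aggregation degree first
          return max(nodes, key=score)

      nodes = [
          {"name": "n1", "free_cpu": 8, "tasks": [{"job": "j1", "role": "ps"}]},
          {"name": "n2", "free_cpu": 16, "tasks": []},
      ]
      print(best_node(nodes, "j1")["name"])   # n1: already hosts j1's PS task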

  4. An efficient parallel algorithm for matrix-vector multiplication

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, B.; Leland, R.; Plimpton, S.

    1993-03-01

    The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high-performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/√p + log p), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
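
    The kernel itself is simple to state; the paper's contribution is the communication scheme achieving the O(n/√p + log p) cost on a hypercube. As a baseline, a one-dimensional row-block decomposition of y = Ax (a sketch, not the paper's algorithm) looks like this:

      # Baseline 1-D row-block parallel matrix-vector product y = A x:
      # each of p workers owns a contiguous block of rows and computes its
      # slice of y independently. (The paper's algorithm uses a smarter
      # decomposition to reach O(n/sqrt(p) + log p) communication.)
      import numpy as np
      from multiprocessing import Pool

      def block_matvec(args):
          A_block, x = args
          return A_block @ x

      if __name__ == "__main__":
          n, p = 1024, 4
          A, x = np.random.rand(n, n), np.random.rand(n)
          blocks = np.array_split(A, p, axis=0)       # contiguous row blocks
          with Pool(p) as pool:
              y = np.concatenate(pool.map(block_matvec, [(b, x) for b in blocks]))
          assert np.allclose(y, A @ x)                # matches the serial product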

  5. The FORCE: A portable parallel programming language supporting computational structural mechanics

    Science.gov (United States)

    Jordan, Harry F.; Benten, Muhammad S.; Brehm, Juergen; Ramanan, Aruna

    1989-01-01

    This project supports the conversion of codes in Computational Structural Mechanics (CSM) to a parallel form which will efficiently exploit the computational power available from multiprocessors. The work is a part of a comprehensive, FORTRAN-based system to form a basis for a parallel version of the NICE/SPAR combination which will form the CSM Testbed. The software is macro-based and rests on the Force methodology developed by the principal investigator in connection with an early scientific multiprocessor. Machine independence is an important characteristic of the system, so that retargeting it to the Flex/32, or any other multiprocessor on which NICE/SPAR might be implemented, is well supported. The principal investigator has experience in producing parallel software for both full and sparse systems of linear equations using the Force macros. Other researchers have used the Force in finite element programs. It has been possible to rapidly develop software which performs at maximum efficiency on a multiprocessor. The inherent machine independence of the system also means that the parallelization will not be limited to a specific multiprocessor.

  6. Schedules of Controlled Substances: Temporary Placement of 4-Fluoroisobutyryl Fentanyl into Schedule I. Temporary scheduling order.

    Science.gov (United States)

    2017-05-03

    The Administrator of the Drug Enforcement Administration is issuing this temporary scheduling order to schedule the synthetic opioid, N-(4-fluorophenyl)-N-(1-phenethylpiperidin-4-yl)isobutyramide (4-fluoroisobutyryl fentanyl or para-fluoroisobutyryl fentanyl), and its isomers, esters, ethers, salts and salts of isomers, esters, and ethers, into schedule I pursuant to the temporary scheduling provisions of the Controlled Substances Act. This action is based on a finding by the Administrator that the placement of 4-fluoroisobutyryl fentanyl into schedule I of the Controlled Substances Act is necessary to avoid an imminent hazard to the public safety. As a result of this order, the regulatory controls and administrative, civil, and criminal sanctions applicable to schedule I controlled substances will be imposed on persons who handle (manufacture, distribute, reverse distribute, import, export, engage in research, conduct instructional activities or chemical analysis, or possess), or propose to handle, 4-fluoroisobutyryl fentanyl.

  7. Parallel Algorithms for Graph Optimization using Tree Decompositions

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, Blair D [ORNL; Weerapurage, Dinesh P [ORNL; Groer, Christopher S [ORNL

    2012-06-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree-decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.
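
    On a tree (tree-width 1, the simplest case of the bounded-tree-width setting above), the dynamic programming table shrinks to two values per vertex; a sequential sketch follows, while the paper parallelizes the analogous tables over a general tree decomposition.

      # Maximum weighted independent set on a tree: keep two DP values per
      # vertex (best weight with v excluded / included), combined bottom-up.
      # This is the tree-width-1 special case of the tables that the
      # parallel algorithms compute on general tree decompositions.
      def mwis_tree(adj, weight, root=0):
          order, seen, stack = [], {root}, [root]
          while stack:                              # iterative DFS preorder
              v = stack.pop()
              order.append(v)
              for u in adj[v]:
                  if u not in seen:
                      seen.add(u)
                      stack.append(u)
          parent = {root: None}
          for v in order:                           # preorder: parents come first
              for u in adj[v]:
                  if u not in parent:
                      parent[u] = v
          exc, inc = {}, {}
          for v in reversed(order):                 # children before parents
              exc[v], inc[v] = 0, weight[v]
              for u in adj[v]:
                  if parent.get(u) == v:
                      exc[v] += max(exc[u], inc[u])
                      inc[v] += exc[u]
          return max(exc[root], inc[root])

      adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
      print(mwis_tree(adj, {0: 1, 1: 5, 2: 2, 3: 4}))  # 7: take vertices 1 and 2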

  8. CMS readiness for multi-core workload scheduling

    Science.gov (United States)

    Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.

    2017-10-01

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.

  9. CMS Readiness for Multi-Core Workload Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Calero Yzquierdo, A. [Madrid, CIEMAT; Balcas, J. [Caltech; Hernandez, J. [Madrid, CIEMAT; Aftab Khan, F. [NCP, Islamabad; Letts, J. [UC, San Diego; Mason, D. [Fermilab; Verguilov, V. [CLMI, Sofia

    2017-11-22

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.

  10. Scheduling techniques in the Request Oriented Scheduling Engine (ROSE)

    Science.gov (United States)

    Zoch, David R.

    1991-01-01

    Scheduling techniques in ROSE are presented in the form of viewgraphs. The following subject areas are covered: agenda; ROSE summary and history; NCC-ROSE task goals; accomplishments; ROSE timeline manager; scheduling concerns; current and ROSE approaches; initial scheduling; BFSSE overview and example; and summary.

  11. Feed-forward volume rendering algorithm for moderately parallel MIMD machines

    Science.gov (United States)

    Yagel, Roni

    1993-01-01

    Algorithms for direct volume rendering on parallel and vector processors are investigated. Volumes are transformed efficiently on parallel processors by dividing the data into slices and beams of voxels. Equal sized sets of slices along one axis are distributed to processors. Parallelism is achieved at two levels. Because each slice can be transformed independently of others, processors transform their assigned slices with no communication, thus providing maximum possible parallelism at the first level. Within each slice, consecutive beams are incrementally transformed using coherency in the transformation computation. Also, coherency across slices can be exploited to further enhance performance. This coherency yields the second level of parallelism through the use of the vector processing or pipelining. Other ongoing efforts include investigations into image reconstruction techniques, load balancing strategies, and improving performance.

  12. Deep evolutionary comparison of gene expression identifies parallel recruitment of trans-factors in two independent origins of C4 photosynthesis.

    Science.gov (United States)

    Aubry, Sylvain; Kelly, Steven; Kümpers, Britta M C; Smith-Unna, Richard D; Hibberd, Julian M

    2014-06-01

    With at least 60 independent origins spanning monocotyledons and dicotyledons, the C4 photosynthetic pathway represents one of the most remarkable examples of convergent evolution. The recurrent evolution of this highly complex trait involving alterations to leaf anatomy, cell biology and biochemistry allows an increase in productivity by ∼ 50% in tropical and subtropical areas. The extent to which separate lineages of C4 plants use the same genetic networks to maintain C4 photosynthesis is unknown. We developed a new informatics framework to enable deep evolutionary comparison of gene expression in species lacking reference genomes. We exploited this to compare gene expression in species representing two independent C4 lineages (Cleome gynandra and Zea mays) whose last common ancestor diverged ∼ 140 million years ago. We define a cohort of 3,335 genes that represent conserved components of leaf and photosynthetic development in these species. Furthermore, we show that genes encoding proteins of the C4 cycle are recruited into networks defined by photosynthesis-related genes. Despite the wide evolutionary separation and independent origins of the C4 phenotype, we report that these species use homologous transcription factors to both induce C4 photosynthesis and to maintain the cell specific gene expression required for the pathway to operate. We define a core molecular signature associated with leaf and photosynthetic maturation that is likely shared by angiosperm species derived from the last common ancestor of the monocotyledons and dicotyledons. We show that deep evolutionary comparisons of gene expression can reveal novel insight into the molecular convergence of highly complex phenotypes and that parallel evolution of trans-factors underpins the repeated appearance of C4 photosynthesis. Thus, exploitation of extant natural variation associated with complex traits can be used to identify regulators. Moreover, the transcription factors that are shared by

  13. Deep evolutionary comparison of gene expression identifies parallel recruitment of trans-factors in two independent origins of C4 photosynthesis.

    Directory of Open Access Journals (Sweden)

    Sylvain Aubry

    2014-06-01

    Full Text Available With at least 60 independent origins spanning monocotyledons and dicotyledons, the C4 photosynthetic pathway represents one of the most remarkable examples of convergent evolution. The recurrent evolution of this highly complex trait involving alterations to leaf anatomy, cell biology and biochemistry allows an increase in productivity by ∼ 50% in tropical and subtropical areas. The extent to which separate lineages of C4 plants use the same genetic networks to maintain C4 photosynthesis is unknown. We developed a new informatics framework to enable deep evolutionary comparison of gene expression in species lacking reference genomes. We exploited this to compare gene expression in species representing two independent C4 lineages (Cleome gynandra and Zea mays whose last common ancestor diverged ∼ 140 million years ago. We define a cohort of 3,335 genes that represent conserved components of leaf and photosynthetic development in these species. Furthermore, we show that genes encoding proteins of the C4 cycle are recruited into networks defined by photosynthesis-related genes. Despite the wide evolutionary separation and independent origins of the C4 phenotype, we report that these species use homologous transcription factors to both induce C4 photosynthesis and to maintain the cell specific gene expression required for the pathway to operate. We define a core molecular signature associated with leaf and photosynthetic maturation that is likely shared by angiosperm species derived from the last common ancestor of the monocotyledons and dicotyledons. We show that deep evolutionary comparisons of gene expression can reveal novel insight into the molecular convergence of highly complex phenotypes and that parallel evolution of trans-factors underpins the repeated appearance of C4 photosynthesis. Thus, exploitation of extant natural variation associated with complex traits can be used to identify regulators. Moreover, the transcription factors

  14. Leveraging Independent Management and Chief Engineer Hierarchy: Vertically and Horizontally-Derived Technical Authority Value

    Science.gov (United States)

    Barley, Bryan; Newhouse, Marilyn

    2012-01-01

    In the development of complex spacecraft missions, project management authority is usually extended hierarchically from NASA's highest agency levels down to the implementing institution's project team level, through both the center and the program. In parallel with management authority, NASA utilizes a complementary, but independent, hierarchy of technical authority (TA) that extends from the agency level to the project, again, through both the center and the program. The chief engineers (CEs) who serve in this technical authority capacity oversee and report on the technical status and ensure sound engineering practices, controls, and management of the projects and programs. At the lowest level, implementing institutions assign project CEs to technically engage projects, lead development teams, and ensure sound technical principles, processes, and issue resolution. At the middle level, programs and centers independently use CEs to ensure the technical success of their projects and programs. At the agency level, NASA's mission directorate CEs maintain technical cognizance over every program and project in their directorate and advise directorate management on the technical, cost, schedule, and programmatic health of each. As part of this vertically-extended CE team, a program level CE manages a continually varying balance between penetration depth and breadth across his or her assigned missions. Teamwork issues and information integration become critical for management at all levels to ensure value-added use of both the synergy available between CEs at the various agency levels, and the independence of the technical authority at each organization.

  15. Parallel patterns determination in solving cyclic flow shop problem with setups

    Directory of Open Access Journals (Sweden)

    Bożejko Wojciech

    2017-06-01

    Full Text Available The subject of this work is a new idea of blocks for the cyclic flow shop problem with setup times, using multiple patterns of different sizes determined for each machine, constituting an optimal schedule of cities for the traveling salesman problem (TSP). We propose to take advantage of the Intel Xeon Phi parallel computing environment during the determination of the so-called 'blocks' based on patterns, in effect significantly improving the quality of the obtained results.

  16. Multi-Level Round-Robin Multicast Scheduling with Look-Ahead Mechanism

    DEFF Research Database (Denmark)

    Yu, Hao; Ruepp, Sarah Renée; Berger, Michael Stübert

    2011-01-01

    constructs the Traffic Matrix before each cell transmission based on the fan-out vectors of the cells in the queues. A scheduling pointer independently moves on each column of the Traffic Matrix in a round-robin fashion and returns the decision to the Decision Matrix. The sync procedure is carried out...

  17. Using Activity Schedules to Increase On-Task Behavior in Children at Risk for Attention-Deficit/Hyperactivity Disorder

    Science.gov (United States)

    Cirelli, Christe A.; Sidener, Tina M.; Reeve, Kenneth F.; Reeve, Sharon A.

    2016-01-01

    The effects of activity schedules on on-task and on-schedule behavior were assessed with two boys at risk for attention-deficit/hyperactivity disorder (ADHD) and referred by their public school teachers as having difficulty during independent work time. On-task behavior increased for both participants after two training sessions. Teachers, peers,…

  18. Channel access delay and buffer distribution of two-user opportunistic scheduling schemes in wireless networks

    KAUST Repository

    Hossain, Md Jahangir

    2010-07-01

    In our earlier works, we proposed rate-adaptive hierarchical modulation-assisted two-best user opportunistic scheduling (TBS) and hybrid two-user scheduling (HTS) schemes. The proposed schemes are innovative in the sense that they opportunistically include a second user in the transmission using hierarchical modulations. As such, the frequency of information access of the users increases without any degradation of the system spectral efficiency (SSE) compared to the classical opportunistic scheduling scheme. In this paper, we analyze the channel access delay of an incoming packet at the base station (BS) buffer when our proposed TBS and HTS schemes are employed at the BS. Specifically, using a queuing analytic model we derive the channel access delay as well as the buffer distribution of the packets that wait at the BS buffer for down-link (DL) transmission. We compare the performance of the TBS and HTS schemes with that of the classical single-user opportunistic schemes, namely, absolute carrier-to-noise ratio (CNR)-based single-user scheduling (ASS) and normalized CNR-based single-user scheduling (NSS). For an independent and identically distributed (i.i.d.) fading environment, our proposed scheme can improve the packet's access delay performance compared to the ASS. Selected numerical results in an independent but non-identically distributed (i.n.d.) fading environment show that our proposed HTS achieves overall good channel access delay performance. © 2010 IEEE.

  19. Contrast and autoshaping in multiple schedules varying reinforcer rate and duration.

    Science.gov (United States)

    Hamilton, B E; Silberberg, A

    1978-07-01

    Thirteen master pigeons were exposed to multiple schedules in which reinforcement frequency (Experiment I) or duration (Experiment II) was varied. In Phases 1 and 3 of Experiment I, the values of the first and second components' random-interval schedules were 33 and 99 seconds, respectively. In Phase 2, these values were 99 seconds for both components. In Experiment II, a random-interval 33-second schedule was associated with each component. During Phases 1 and 3, the first and second components had hopper durations of 7.5 and 2.5 seconds respectively. During Phase 2, both components' hopper durations were 2.5 seconds. In each experiment, positive contrast obtained for about half the master subjects. The rest showed a rate increase in both components (positive induction). Each master subject's key colors and reinforcers were synchronously presented on a response-independent basis to a yoked control. Richer component key-pecking occurred during each experiment's Phases 1 and 3 among half these subjects. However, none responded during the contrast condition (unchanged component of each experiment's Phase 2). From this it is inferred that autoshaping did not contribute to the contrast and induction findings among master birds. Little evidence of local contrast (highest rate at beginning of richer component) was found in any subject. These data show that (a) contrast can occur independently from autoshaping, (b) contrast assays during equal-valued components may produce induction, (c) local contrast in multiple schedules often does not occur, and (d) differential hopper durations can produce autoshaping and contrast.

  20. 2007 Wholesale Power Rate Schedules : 2007 General Rate Schedule Provisions.

    Energy Technology Data Exchange (ETDEWEB)

    United States. Bonneville Power Administration.

    2006-11-01

    This schedule is available for the contract purchase of Firm Power to be used within the Pacific Northwest (PNW). Priority Firm (PF) Power may be purchased by public bodies, cooperatives, and Federal agencies for resale to ultimate consumers, for direct consumption, and for Construction, Test and Start-Up, and Station Service. Rates in this schedule are in effect beginning October 1, 2006, and apply to purchases under requirements Firm Power sales contracts for a three-year period. The Slice Product is only available for public bodies and cooperatives who have signed Slice contracts for the FY 2002-2011 period. Utilities participating in the Residential Exchange Program (REP) under Section 5(c) of the Northwest Power Act may purchase Priority Firm Power pursuant to the Residential Exchange Program. Rates under contracts that contain charges that escalate based on BPA's Priority Firm Power rates shall be based on the three-year rates listed in this rate schedule in addition to applicable transmission charges. This rate schedule supersedes the PF-02 rate schedule, which went into effect October 1, 2001. Sales under the PF-07 rate schedule are subject to BPA's 2007 General Rate Schedule Provisions (2007 GRSPs). Products available under this rate schedule are defined in the 2007 GRSPs. For sales under this rate schedule, bills shall be rendered and payments due pursuant to BPA's 2007 GRSPs and billing process.

  1. An Evaluation of Photographic Activity Schedules to Increase Independent Playground Skills in Young Children with Autism

    Science.gov (United States)

    Akers, Jessica S.; Higbee, Thomas S.; Pollard, Joy S.; Pellegrino, Azure J.; Gerencser, Kristina R.

    2016-01-01

    We used photographic activity schedules to increase the number of play activities completed by children with autism during unstructured time on the playground. All 3 participants engaged in more playground activities during and after training, and they continued to complete activities when novel photographs were introduced.

  2. Responsive versus scheduled feeding for preterm infants

    OpenAIRE

    Watson, Julie; McGuire, William

    2016-01-01

    Background: Feeding preterm infants in response to their hunger and satiation cues (responsive, cue-based, or infant-led feeding) rather than at scheduled intervals might enhance infants' and parents' experience and satisfaction, help in the establishment of independent oral feeding, increase nutrient intake and growth rates, and allow earlier hospital discharge. Objectives: To assess the effect of a policy of feeding preterm infants on a responsive basis v...

  3. Assessing the Predictability of Scheduled-Vehicle Travel Times

    DEFF Research Database (Denmark)

    Tiesyte, Dalia; Jensen, Christian Søndergaard

    2009-01-01

    One of the most desired and challenging services in collective transport systems is the real-time prediction of the near-future travel times of scheduled vehicles, especially public buses, thus improving the experience of the transportation users, who may be able to better schedule their travel, and also enabling system operators to perform real-time monitoring. While travel-time prediction has been researched extensively during the past decade, the accuracies of existing techniques fall short of what is desired, and proposed mathematical prediction models are often not transferable to other systems because the properties of the travel-time-related data of vehicles are highly context-dependent, making the models difficult to fit. We propose a framework for evaluating various predictability types of the data independently of the model, and we also compare predictability analysis results

  4. Project management with dynamic scheduling baseline scheduling, risk analysis and project control

    CERN Document Server

    Vanhoucke, Mario

    2013-01-01

    The topic of this book is known as dynamic scheduling, a term used to refer to three dimensions of project management and scheduling: the construction of a baseline schedule, the analysis of the project schedule's risk, and project control during project progress. This dynamic scheduling point of view implicitly assumes that the usability of a project's baseline schedule is rather limited and that it only acts as a point of reference in the project life cycle.

  5. NASA scheduling technologies

    Science.gov (United States)

    Adair, Jerry R.

    1994-01-01

    This paper is a consolidated report on ten major planning and scheduling systems that have been developed by the National Aeronautics and Space Administration (NASA). A description of each system, its components, and how it could be potentially used in private industry is provided in this paper. The planning and scheduling technology represented by the systems ranges from activity based scheduling employing artificial intelligence (AI) techniques to constraint based, iterative repair scheduling. The space related application domains in which the systems have been deployed vary from Space Shuttle monitoring during launch countdown to long term Hubble Space Telescope (HST) scheduling. This paper also describes any correlation that may exist between the work done on different planning and scheduling systems. Finally, this paper documents the lessons learned from the work and research performed in planning and scheduling technology and describes the areas where future work will be conducted.

  6. Combining Compile-Time and Run-Time Parallelization

    Directory of Open Access Journals (Sweden)

    Sungdo Moon

    1999-01-01

    Full Text Available This paper demonstrates that significant improvements to automatic parallelization technology require that existing systems be extended in two ways: (1) they must combine high-quality compile-time analysis with low-cost run-time testing; and (2) they must take control flow into account during analysis. We support this claim with the results of an experiment that measures the safety of parallelization at run time for loops left unparallelized by the Stanford SUIF compiler's automatic parallelization system. We present results of measurements on programs from two benchmark suites (SPECFP95 and NAS sample benchmarks) which identify inherently parallel loops in these programs that are missed by the compiler. We characterize remaining parallelization opportunities, and find that most of the loops require run-time testing, analysis of control flow, or some combination of the two. We present a new compile-time analysis technique that can be used to parallelize most of these remaining loops. This technique is designed to not only improve the results of compile-time parallelization, but also to produce low-cost, directed run-time tests that allow the system to defer binding of parallelization until run time when safety cannot be proven statically. We call this approach predicated array data-flow analysis. We augment array data-flow analysis, which the compiler uses to identify independent and privatizable arrays, by associating predicates with array data-flow values. Predicated array data-flow analysis allows the compiler to derive "optimistic" data-flow values guarded by predicates; these predicates can be used to derive a run-time test guaranteeing the safety of parallelization.
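
    The run-time testing idea can be pictured as follows (an illustrative sketch, not SUIF's generated code): the compiler emits a cheap predicate over the loop's array accesses; when it holds at run time, the parallel version of the loop is dispatched, otherwise the serial version runs.

      # Illustrative shape of a compiler-emitted run-time parallelization
      # test (not SUIF's actual output). The loop a[i] = b[i-1] + 1 is safe
      # to run in parallel only when a and b do not alias; the predicate is
      # evaluated at run time and selects the parallel or serial version.
      from concurrent.futures import ThreadPoolExecutor

      def loop(a, b, n):
          if a is not b:                      # predicate provable only at run time
              def body(i):
                  a[i] = b[i - 1] + 1         # iterations now independent
              with ThreadPoolExecutor() as pool:
                  list(pool.map(body, range(1, n)))
          else:                               # aliased: loop-carried dependence
              for i in range(1, n):
                  a[i] = b[i - 1] + 1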

  7. Morphology Independent Learning in Modular Robots

    DEFF Research Database (Denmark)

    Christensen, David Johan; Bordignon, Mirko; Schultz, Ulrik Pagh

    2009-01-01

    speed its modules independently and in parallel adjust their behavior based on a single global reward signal. In simulation, we study the learning strategy’s performance on different robot configurations. On the physical platform, we perform learning experiments with ATRON robots learning to move as fast...

  8. Morphology Independent Learning in Modular Robots

    DEFF Research Database (Denmark)

    Christensen, David Johan; Bordignon, Mirko; Schultz, Ulrik Pagh

    2009-01-01

    speed its modules independently and in parallel adjust their behavior based on a single global reward signal. In simulation, we study the learning strategy’s performance on different robot configurations. On the physical platform, we perform learning experiments with ATRON robots learning to move as fast...

  9. 75 FR 42831 - Proposed Collection; Comment Request for Form 1065, Schedule C, Schedule D, Schedule K-1...

    Science.gov (United States)

    2010-07-22

    .../or continuing information collections, as required by the Paperwork Reduction Act of 1995, Public Law... Income, Credits, Deductions and Other Items), Schedule L (Balance Sheets per Books), Schedule M-1 (Reconciliation of Income (Loss) per Books With Income (Loss) per Return), Schedule M-2 (Analysis of Partners...

  10. Parallel workflow tools to facilitate human brain MRI post-processing

    Directory of Open Access Journals (Sweden)

    Zaixu eCui

    2015-05-01

    Full Text Available Multi-modal magnetic resonance imaging (MRI techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues.
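
    As a minimal sketch of the subject-level parallelism such workflow tools provide (generic, not tied to any specific package), the following chains illustrative placeholder steps for each subject while a process pool runs independent subjects concurrently.

    ```python
    # Sketch: independent subjects flow through chained post-processing steps;
    # subjects are processed in parallel via a process pool. Step names and
    # the subject list are illustrative placeholders, not a real pipeline.
    from multiprocessing import Pool

    def motion_correct(data):
        return data + ["motion-corrected"]

    def normalize(data):
        return data + ["normalized"]

    def smooth(data):
        return data + ["smoothed"]

    PIPELINE = [motion_correct, normalize, smooth]

    def process_subject(subject_id):
        data = [subject_id]          # stand-in for a loaded MRI volume
        for step in PIPELINE:        # steps for one subject run in order...
            data = step(data)
        return data

    if __name__ == "__main__":
        subjects = ["sub-01", "sub-02", "sub-03", "sub-04"]
        with Pool() as pool:         # ...but subjects run in parallel
            results = pool.map(process_subject, subjects)
        print(results)
    ```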

  11. Schedule Control and Nursing Home Quality: Exploratory Evidence of a Psychosocial Predictor of Resident Care.

    Science.gov (United States)

    Hurtado, David A; Berkman, Lisa F; Buxton, Orfeu M; Okechukwu, Cassandra A

    2016-02-01

    To examine whether nursing homes' quality of care was predicted by schedule control (workers' ability to decide work hours), independently of other staffing characteristics. Prospective ecological study of 30 nursing homes in New England. Schedule control was self-reported via survey in 2011-2012 (N = 1,045). Quality measures included the prevalence of decline in activities of daily living, residents' weight loss, and pressure ulcers, indicators systematically linked with staffing characteristics. Outcomes data for 2012 were retrieved from Medicare.gov. Robust linear regressions showed that higher schedule control predicted a lower prevalence of pressure ulcers (β = -0.51, p < .05), adjusting for staffing characteristics, job satisfaction, and turnover intentions. Higher schedule control might enhance the planning and delivery of strategies to prevent or cure pressure ulcers. Further research is needed to identify potential causal mechanisms by which schedule control could improve quality of care. © The Author(s) 2014.

  12. Analysis of Issues for Project Scheduling by Multiple, Dispersed Schedulers (distributed Scheduling) and Requirements for Manual Protocols and Computer-based Support

    Science.gov (United States)

    Richards, Stephen F.

    1991-01-01

    Although computerization has realized significant gains in many areas, one area, scheduling, has enjoyed few benefits from automation. The traditional methods of industrial engineering and operations research have not proven robust enough to handle the complexities associated with the scheduling of realistic problems. To address this need, NASA has developed the computer-aided scheduling system (COMPASS), a sophisticated, interactive scheduling tool that is in widespread use within NASA and the contractor community. However, COMPASS provides no explicit support for the large class of problems in which several people, perhaps at various locations, build separate schedules that share a common pool of resources. This research examines the issue of distributed scheduling, as applied to application domains characterized by the partial ordering of tasks, limited resources, and time restrictions. The focus of this research is on identifying issues related to distributed scheduling, locating applicable problem domains within NASA, and suggesting areas for ongoing research. The issues that this research identifies are goals, rescheduling requirements, database support, the need for communication and coordination among individual schedulers, the potential for expert system support for scheduling, and the possibility of integrating artificially intelligent schedulers into a network of human schedulers.

  13. A language for data-parallel and task parallel programming dedicated to multi-SIMD computers. Contributions to hydrodynamic simulation with lattice gases

    International Nuclear Information System (INIS)

    Pic, Marc Michel

    1995-01-01

    Parallel programming covers task parallelism and data parallelism, and many problems need both. Multi-SIMD computers allow a hierarchical approach to these parallelisms. The T++ language, based on C++, is dedicated to exploiting Multi-SIMD computers through a programming paradigm that extends array programming to the management of tasks. The language introduces arrays of independent tasks executed separately (MIMD) on subsets of processors with identical behaviour (SIMD), in order to express the hierarchical inclusion of data parallelism within task parallelism. To manipulate tasks and data in a symmetrical way, we propose meta-operations that behave identically on task arrays and on data arrays. We explain how to implement this language on our parallel computer SYMPHONIE so as to profit from the locally shared memory, the hardware virtualization, and the multiplicity of communication networks. In parallel, we analyse a typical application of such an architecture: finite-element schemes for fluid mechanics need powerful parallel computers and require substantial floating-point capability, and lattice gases are an alternative to such simulations. Boolean lattice gases are simple, stable and modular and need no floating-point computation, but they include numerical noise. Boltzmann lattice gases offer high computational precision, but they need floating-point arithmetic and are only locally stable. We propose a new scheme, called multi-bit, which keeps the advantages of each Boolean model to which it is applied, with high numerical precision and reduced noise. Experiments on viscosity, physical behaviour, noise reduction and spurious invariants are shown, and implementation techniques for parallel Multi-SIMD computers are detailed. (author) [fr

  14. Scheduling Broadcasts in a Network of Timelines

    KAUST Repository

    Manzoor, Emaad A.

    2015-05-12

    Broadcasts and timelines are the primary mechanism of information exchange in online social platforms today. Services like Facebook, Twitter and Instagram have enabled ordinary people to reach large audiences spanning cultures and countries, while their massive popularity has created increasingly competitive marketplaces of attention. Timing broadcasts to capture the attention of such geographically diverse audiences has sparked interest from many startups and social marketing gurus. However, formal study is lacking on both the timing and frequency problems. In this thesis, we introduce, motivate and solve the broadcast scheduling problem of specifying the timing and frequency of publishing content to maximise the attention received. We validate and quantify three interacting behavioural phenomena to parametrise social platform users: information overload, bursty circadian rhythms and monotony aversion, which is defined here for the first time. Our analysis of the influence of monotony refutes the common assumption that posts on social network timelines are consumed piecemeal independently. Instead, we reveal that posts are consumed in chunks, which has important consequences for any future work considering human behaviour over social network timelines. Our quantification of monotony aversion is also novel, and has applications to problems in various domains such as recommender list diversification, user satiation and variety-seeking consumer behaviour. Having studied the underlying behavioural phenomena, we link schedules, timelines, attention and behaviour by formalising a timeline information exchange process. Our formulation gives rise to a natural objective function that quantifies the expected collective attention an arrangement of posts on a timeline will receive. We apply this formulation as a case-study on real-data from Twitter, where we estimate behavioural parameters, calculate the attention potential for different scheduling strategies and, using the

  15. Scheduler-Specific Confidentiality for Multi-Threaded Programs and Its Logic-Based Verification

    NARCIS (Netherlands)

    Huisman, Marieke; Ngo, Minh Tri; Beckert, B.; Damiani, F.; Gurov, D.

    2012-01-01

    Observational determinism has been proposed in the literature as a way to ensure confidentiality for multi-threaded programs. Intuitively, a program is observationally deterministic if the behavior of the public variables is deterministic, i.e., independent of the private variables and the scheduling

  16. Madness and sanity at the time of Indian independence

    Science.gov (United States)

    Jain, Sanjeev; Murthy, Pratima; Sarin, Alok

    2016-01-01

    The backdrop of the Indian Independence offers glimpses of many ‘metaphors of madness’. In this article, we explore this through a few instances, starting from 1857, around the time of the First War of Independence, to 1947, when India became an independent nation. Such metaphors have their parallels both in historical as well as in contemporary times, where instances of one man's imagination becoming another's concept of irrationality and insanity continue. PMID:28066017

  17. Accelerating exact schedulability analysis for fixed-priority pre-emptive scheduling

    NARCIS (Netherlands)

    Hang, Y.; Jiale, Z.; Keskin, U.; Bril, R.J.

    2010-01-01

    The schedulability analysis for fixed-priority preemptive scheduling (FPPS) plays a significant role in the real-time systems domain. The so-called Hyperplanes Exact Test (HET) [1] is an example of an exact schedulability test for FPPS. In this paper, we aim at improving the efficiency of HET by

  18. Energy Efficient Scheduling of Real Time Signal Processing Applications through Combined DVFS and DPM

    OpenAIRE

    Nogues , Erwan; Pelcat , Maxime; Menard , Daniel; Mercat , Alexandre

    2016-01-01

    This paper proposes a framework to design energy efficient signal processing systems. The energy efficiency is provided by combining Dynamic Frequency and Voltage Scaling (DVFS) and Dynamic Power Management (DPM). The framework is based on Synchronous Dataflow (SDF) modeling of signal processing applications. A transformation to a single rate form is performed to expose the application parallelism. An automated scheduling is then performed, minimizing the constraint of...
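
    A toy sketch of how a DVFS and a DPM decision can be combined for an idle gap, assuming illustrative power and transition-energy numbers (not values from the paper): sleeping (DPM) only pays off when the gap exceeds the break-even time implied by the transition energy, while shorter gaps are better served by staying in a scaled-down DVFS state.

    ```python
    # Sketch: energy comparison between sleeping (DPM) and staying in a
    # low-frequency idle state (DVFS floor). All numbers are invented.
    P_IDLE = 1.0        # W, idle power at a scaled-down frequency
    P_SLEEP = 0.05      # W, power in the DPM sleep state
    E_TRANS = 0.4       # J, energy to enter and leave sleep

    def idle_energy(gap_s, sleep):
        # Energy spent over an idle gap under each policy.
        return E_TRANS + P_SLEEP * gap_s if sleep else P_IDLE * gap_s

    def break_even():
        # Gap length beyond which sleeping saves energy.
        return E_TRANS / (P_IDLE - P_SLEEP)

    for gap in (0.1, 0.5, 2.0):
        sleep_wins = idle_energy(gap, True) < idle_energy(gap, False)
        choice = "DPM (sleep)" if sleep_wins else "DVFS (stay ready)"
        print(f"gap {gap:.1f}s -> {choice} (break-even {break_even():.2f}s)")
    ```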

  19. Permutation flow-shop scheduling problem to optimize a quadratic objective function

    Science.gov (United States)

    Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu

    2017-09-01

    A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
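
    To make the WSPT ordering concrete, the sketch below applies it to a single processing sequence (a one-machine simplification of the flow-shop setting): jobs are sorted by the ratio p_j/w_j, and the total weighted quadratic completion time, the sum of w_j * C_j^2, is then evaluated. The job data are invented.

    ```python
    # Sketch of the weighted shortest processing time (WSPT) rule on one
    # sequence: sort jobs by p_j / w_j, then evaluate sum_j w_j * C_j**2.
    jobs = [("J1", 4, 2), ("J2", 2, 5), ("J3", 6, 1)]   # (name, p_j, w_j)

    order = sorted(jobs, key=lambda j: j[1] / j[2])     # WSPT ordering

    t, objective = 0, 0
    for name, p, w in order:
        t += p                      # completion time C_j of this job
        objective += w * t ** 2     # weighted quadratic completion time
    print([j[0] for j in order], objective)
    ```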

  20. Overcoming barriers to scheduling embedded generation to support distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Wright, A.J.; Formby, J.R.

    2000-07-01

    Current scheduling of embedded generation for distribution in the UK is limited and patchy. Some DNOs actively schedule while others do not. The literature on the subject is mainly about accommodating volatile wind output and optimising island systems, both for cost of supply and network stability. The forthcoming NETA will lower prices, expose unpredictable generation to imbalance markets and could introduce punitive constraint payments on DNOs, but at the same time create a dynamic market for both power and ancillary services from embedded generators. Most renewable generators either run as base load (e.g. waste) or according to the vagaries of the weather (e.g. wind, hydro), so offer little scope for scheduling other than 'off'. CHP plant is normally heat-led for industrial processes or building needs, but supplementary firing or thermal storage often allows considerable scope for scheduling. Micro-CHP with thermal storage could provide short-term scheduling, but tends to be running anyway during the evening peak. Standby generation appears to be ideal for scheduling, but in practice operators may be unwilling to run in parallel with the network, and noise and pollution problems may preclude frequent operation. Statistical analysis can be applied to calculate the reliability of several generators compared to one; with a large number of generators, such as micro-CHP units, the reliability of a proportion of the load is close to unity. The type of communication used for generation will depend on requirements for bandwidth, cost and reliability, and on whether it is bundled with other services. With high levels of deeply embedded, small-scale generation using induction machines, voltage control and black start capability will become important concerns on 11 kV and LV networks. This will require increased generation monitoring and remote control of switchgear. Examples of cost benefits from scheduling are given, including deferred reinforcement, increased exports on non
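
    A minimal sketch of that reliability calculation, assuming independent units with an illustrative availability figure: the probability that at least k of n generators are running follows the binomial distribution, and for many small units it approaches unity for a large proportion of the load.

    ```python
    # Sketch: availability of at least k of n independent generators,
    # each available with probability p. Numbers are illustrative.
    from math import comb

    def at_least(n, k, p):
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    p = 0.95
    print(at_least(1, 1, p))     # one large unit: 0.95
    print(at_least(20, 16, p))   # >= 80% of 20 micro-CHP units: close to 1
    ```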

  1. Combined spatial/angular domain decomposition SN algorithms for shared memory parallel machines

    International Nuclear Information System (INIS)

    Hunter, M.A.; Haghighat, A.

    1993-01-01

    Several parallel processing algorithms based on spatial and angular domain decomposition methods are developed and incorporated into a two-dimensional discrete ordinates transport theory code. These algorithms divide the spatial and angular domains into independent subdomains so that the flux calculations within each subdomain can be processed simultaneously. Two spatial parallel algorithms (Block-Jacobi, red-black), one angular parallel algorithm (η-level), and their combinations are implemented on an eight-processor CRAY Y-MP. Parallel performances of the algorithms are measured using a series of fixed source RZ geometry problems. Some of the results are also compared with those executed on an IBM 3090/600J machine. (orig.)
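
    The red-black idea can be illustrated on a 1-D model problem (a Poisson equation, standing in for the transport sweeps): cells of one colour depend only on cells of the other colour, so each half-sweep consists of independent updates that can proceed in parallel (vectorised here with NumPy).

    ```python
    # Sketch of a red-black Gauss-Seidel sweep for -u'' = f on a 1-D grid;
    # within each colour, all updates are mutually independent.
    import numpy as np

    def red_black_sweep(u, f, h):
        u[1:-1:2] = 0.5 * (u[0:-2:2] + u[2::2] + h * h * f[1:-1:2])   # red cells
        u[2:-1:2] = 0.5 * (u[1:-2:2] + u[3::2] + h * h * f[2:-1:2])   # black cells
        return u

    n = 9
    u, f, h = np.zeros(n), np.ones(n), 1.0 / (n - 1)
    for _ in range(200):
        u = red_black_sweep(u, f, h)
    print(u.round(4))   # converges to u(x) = x (1 - x) / 2
    ```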

  2. A Multiagent Evolutionary Algorithm for the Resource-Constrained Project Portfolio Selection and Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Yongyi Shou

    2014-01-01

    Full Text Available A multiagent evolutionary algorithm is proposed to solve the resource-constrained project portfolio selection and scheduling problem. The proposed algorithm has a dual level structure. In the upper level a set of agents make decisions to select appropriate project portfolios. Each agent selects its project portfolio independently. The neighborhood competition operator and self-learning operator are designed to improve the agent’s energy, that is, the portfolio profit. In the lower level the selected projects are scheduled simultaneously and completion times are computed to estimate the expected portfolio profit. A priority rule-based heuristic is used by each agent to solve the multiproject scheduling problem. A set of instances were generated systematically from the widely used Patterson set. Computational experiments confirmed that the proposed evolutionary algorithm is effective for the resource-constrained project portfolio selection and scheduling problem.
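
    As a rough sketch of the kind of priority rule-based heuristic the lower level relies on (the paper's exact rule is not reproduced here), the following serial schedule generation scheme starts, at each time step, the highest-priority precedence-feasible job that fits within the remaining resource capacity. The job data, capacity, and priority values are invented.

    ```python
    # Sketch: priority rule-based list scheduling under one renewable
    # resource. name: (duration, resource demand, predecessors).
    jobs = {
        "A": (3, 2, []), "B": (2, 2, []), "C": (2, 1, ["A"]), "D": (4, 2, ["B"]),
    }
    CAPACITY = 3
    priority = {"A": 0, "B": 1, "C": 2, "D": 3}   # illustrative rule

    start, finish, t = {}, {}, 0
    while len(start) < len(jobs):
        running = [j for j in start if start[j] <= t < finish[j]]
        used = sum(jobs[j][1] for j in running)
        eligible = [j for j in jobs if j not in start
                    and all(p in finish and finish[p] <= t for p in jobs[j][2])]
        for j in sorted(eligible, key=priority.get):
            if used + jobs[j][1] <= CAPACITY:     # start job if capacity allows
                start[j], finish[j] = t, t + jobs[j][0]
                used += jobs[j][1]
        t += 1
    print(start, finish)
    ```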

  3. Web Publishing Schedule

    Science.gov (United States)

    Section 207(f)(2) of the E-Gov Act requires federal agencies to develop an inventory of information to be published on their Web sites, establish a schedule for publishing that information, make those schedules available for public comment, and post the schedules on their Web sites.

  4. Planning and Scheduling for Environmental Sensor Networks

    Science.gov (United States)

    Frank, J. D.

    2005-12-01

    resources and to reduce the costs of communication. Planning and scheduling is generally a heavy consumer of time, memory and energy resources. This means careful thought must be given to how much planning and scheduling should be done on the sensors themselves, and how much to do elsewhere. The difficulty of planning and scheduling is exacerbated when reasoning about uncertainty. More time, memory and energy are needed to solve such problems, leading either to more expensive sensors or to suboptimal plans. For example, scientifically interesting events may happen at random times, making it difficult to ensure that sufficient resources are available. Since uncertainty is usually lowest in proximity to the sensors themselves, this argues for planning and scheduling onboard the sensors. However, cost minimization dictates that sensors be kept as simple as possible, reducing the amount of planning and scheduling they can do themselves. Furthermore, coordinating each sensor's independent plans can be difficult. In the full presentation, we will critically review the planning and scheduling systems used by previously fielded sensor networks. We do so primarily from the perspective of the computational sciences, with a focus on taming computational complexity when operating sensor networks. The case studies are derived from sensor networks based on UAVs, satellites, and planetary rovers. Planning and scheduling considerations include multi-sensor coordination, optimizing science value, onboard power management, onboard memory, planning movement actions to acquire data, and managing communications. These case studies offer lessons for future designs of environmental sensor networks.

  5. Two-Agent Scheduling to Minimize the Maximum Cost with Position-Dependent Jobs

    Directory of Open Access Journals (Sweden)

    Long Wan

    2015-01-01

    Full Text Available This paper investigates a single-machine two-agent scheduling problem to minimize the maximum cost with position-dependent jobs. There are two agents, each with a set of independent jobs, competing to perform their jobs on a common machine. In our scheduling setting, the actual position-dependent processing time of a job is characterized by a variable function of the position of the job in the sequence. Each agent wants to fulfil the objective of minimizing the maximum cost of its own jobs. We develop a feasible method to obtain all the Pareto optimal points in polynomial time.

  6. Single-machine scheduling with release dates, due dates and family setup times

    NARCIS (Netherlands)

    Schutten, Johannes M.J.; van de Velde, S.L.; van de Velde, S.L.; Zijm, Willem H.M.

    1996-01-01

    We address the NP-hard problem of scheduling n independent jobs with release dates, due dates, and family setup times on a single machine to minimize the maximum lateness. This problem arises from the constant tug-of-war going on in manufacturing between efficient production and delivery

  7. AP1000 construction schedule

    International Nuclear Information System (INIS)

    Winters, J.W.

    2001-01-01

    Westinghouse performed this study as part of EPRI interest in advancing the use of computer-aided processes to reduce the cost of nuclear power plants. EPRI believed that if one could relate appropriate portions of an advanced light water reactor plant model to activities in its construction sequence, and this relationship could be portrayed visually, then optimization of the construction sequence could be developed as never before. By seeing a 3-D representation of the plant at any point in its construction sequence, more informed decisions can be made on the feasibility or attractiveness of follow-on or parallel steps in the sequence. The 3-D representation of construction as a function of time (4-D) could also increase the confidence of potential investors concerning the viability of the schedule and the plant's ultimate cost. This study performed by Westinghouse confirmed that it is useful to be able to visualize a plant construction in 3-D as a function of time in order to optimize the sequence of construction activities. (author)

  8. Schedule Matters: Understanding the Relationship between Schedule Delays and Costs on Overruns

    Science.gov (United States)

    Majerowicz, Walt; Shinn, Stephen A.

    2016-01-01

    This paper examines the relationship between schedule delays and cost overruns on complex projects. It is generally accepted by many project practitioners that cost overruns are directly related to schedule delays. But what does "directly related to" actually mean? Some reasons or root causes for schedule delays and associated cost overruns are obvious, if only in hindsight. For example, unrealistic estimates, supply chain difficulties, insufficient schedule margin, technical problems, scope changes, or the occurrence of risk events can negatively impact schedule performance. Other factors driving schedule delays and cost overruns may be less obvious and more difficult to quantify. Examples of these less obvious factors include project complexity, flawed estimating assumptions, over-optimism, political factors, "black swan" events, or even poor leadership and communication. Indeed, is it even possible the schedule itself could be a source of delay and subsequent cost overrun? Through literature review, surveys of project practitioners, and the authors' own experience on NASA programs and projects, the authors will categorize and examine the various factors affecting the relationship between project schedule delays and cost growth. The authors will also propose some ideas for organizations to consider to help create an awareness of the factors which could cause or influence schedule delays and associated cost growth on complex projects.

  9. TECHNICAL COORDINATION, SCHEDULE AND INTEGRATION

    CERN Multimedia

    A. Ball

    Introduction Despite the holiday season affecting available manpower, many key internal milestones have been passed over the summer, thanks to the dedication and commitment of the team at point 5. In particular, the installation on, and within, YB0 has progressed steadily through several potentially difficult phases. The v36 planning contingency of lowering YB-1 and YB-1 wheels on schedule in October, before Tracker installation, will be executed in order to give more time to complete YB0 services work, whilst still being consistent with completion of heavy lowering by the end of 2007. Safety In the underground areas the peak level of activity and parallel work has been reached and this will continue for the coming months. Utmost vigilance is required of everybody working underground and this must be maintained. However, it is encouraging to note that the compliance with safety rules is, in general, good. More and more work will be carried out from scaffolding and mobile access platforms. (cherry-picke...

  10. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade; Manavi, Kasra; Burgos, Juan; Denny, Jory; Thomas, Shawna; Amato, Nancy M.

    2012-01-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.
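
    A much-simplified sketch of the subdivision idea, assuming an obstacle-free 2-D C-space (so no real planner or collision checking): each region is sampled independently in a process pool, and adjacent regional roadmaps are then stitched together with their cheapest cross edge. It illustrates only the decomposition, not the full method.

    ```python
    # Sketch: per-region sampling in parallel, then bridging adjacent regions.
    import random
    from concurrent.futures import ProcessPoolExecutor

    def sample_region(args):
        lo, hi, n = args
        rng = random.Random(lo)                  # per-region seed
        return [(rng.uniform(lo, hi), rng.random()) for _ in range(n)]

    def nearest_pair(ra, rb):
        # Cheapest cross edge between two adjacent regional roadmaps.
        return min(((a, b) for a in ra for b in rb),
                   key=lambda e: (e[0][0] - e[1][0])**2 + (e[0][1] - e[1][1])**2)

    if __name__ == "__main__":
        strips = [(i / 4, (i + 1) / 4, 25) for i in range(4)]   # 4 x-strips
        with ProcessPoolExecutor() as pool:
            roadmaps = list(pool.map(sample_region, strips))    # parallel phase
        bridges = [nearest_pair(roadmaps[i], roadmaps[i + 1]) for i in range(3)]
        print(len(roadmaps), "regions,", len(bridges), "bridge edges")
    ```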

  11. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade

    2012-05-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.

  12. High accurate volume holographic correlator with 4000 parallel correlation channels

    Science.gov (United States)

    Ni, Kai; Qu, Zongyao; Cao, Liangcai; Su, Ping; He, Qingsheng; Jin, Guofan

    2008-03-01

    A volume holographic correlator (VHC) allows the two-dimensional inner products between the input image and all stored images to be calculated simultaneously. We have recently implemented 4000 parallel correlation channels in a VHC with better than 98% output accuracy at a single location in a crystal. Speckle modulation is used to suppress the sidelobes of the correlation patterns, allowing more correlation spots to be contained in the output plane. A modified exposure schedule is designed to ensure that the hologram in each channel has uniform diffraction efficiency. In this schedule, a restricted coefficient was introduced into the original exposure schedule to address the problem that the sensitivity and time constant of the crystal change as functions of time in high-capacity storage. An interleaving method is proposed to improve the output accuracy. By unifying the distribution of the input and stored image patterns without changing the inner products between them, this method eliminates the impact of correlation pattern variety on the calculated inner product values. Moreover, by using this method, the maximum correlation spot size is reduced, which decreases the required minimum safe clearance between neighboring spots in the output plane, allowing more spots to be detected in parallel without crosstalk. The experimental results are given and analyzed.

  13. Monte Carlo calculations on a parallel computer using MORSE-C.G

    International Nuclear Information System (INIS)

    Wood, J.

    1995-01-01

    The general purpose particle transport Monte Carlo code, MORSE-C.G., is implemented on a parallel computing, transputer-based system having a MIMD architecture. Example problems are solved which are representative of the three principal types of problem that can be solved by the original serial code, namely, fixed source, eigenvalue (k-eff) and time-dependent. The results from the parallelized version of the code are compared in tables with the serial code run on a mainframe serial computer, and with an independent, deterministic transport code. The performance of the parallel computer as the number of processors is varied is shown graphically. For the parallel strategy used, the loss of efficiency as the number of processors is increased is investigated. (author)

  14. A parallel row-based algorithm with error control for standard-cell replacement on a hypercube multiprocessor

    Science.gov (United States)

    Sargent, Jeff Scott

    1988-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing placement of equivalent quality. An integrated place and route program for the Intel iPSC/2 Hypercube is currently being developed.
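
    A minimal sketch of the cell-colouring idea (not the paper's exact heuristic): cells that share a net interact, so greedily colouring the resulting interaction graph yields groups of non-interacting cells whose annealing moves can be proposed in parallel without corrupting the placement cost. The netlist is a made-up example.

    ```python
    # Sketch: build the cell-interaction graph from shared nets, then
    # greedily colour it; same-colour cells never share a net.
    nets = {"n1": ["c1", "c2"], "n2": ["c2", "c3"], "n3": ["c4", "c5"]}

    adj = {}
    for cells in nets.values():
        for a in cells:
            for b in cells:
                if a != b:
                    adj.setdefault(a, set()).add(b)

    colour = {}
    for cell in sorted({c for cells in nets.values() for c in cells}):
        taken = {colour[n] for n in adj.get(cell, ()) if n in colour}
        colour[cell] = next(k for k in range(len(colour) + 1) if k not in taken)

    groups = {}
    for cell, k in colour.items():
        groups.setdefault(k, []).append(cell)
    print(groups)   # cells in the same group may move simultaneously
    ```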

  15. Optimization of Task Scheduling Algorithm through QoS Parameters for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Monika

    2016-01-01

    Full Text Available Cloud computing is an emerging technology that has spread widely among researchers. It furnishes clients with infrastructure, platform and software as services that are easily available via the web. A cloud is a kind of parallel and distributed system consisting of a collection of virtualized computers that are used to execute a variety of tasks so as to achieve good execution times, meet deadlines and make use of its resources. The scheduling problem can be seen as finding an optimal assignment of tasks over the available set of resources such that the desired goals for the tasks are achieved. This paper presents an optimal algorithm for scheduling tasks that takes their waiting time as a QoS parameter. The algorithm is simulated using the CloudSim simulator, and experiments are carried out to help clients identify the bottleneck of using a number of virtual machines in parallel.
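
    A toy sketch of waiting time as the QoS measure on a single VM queue, with invented task lengths: reordering the same tasks changes the average waiting time, which is the quantity such a scheduling algorithm tries to optimize.

    ```python
    # Sketch: average waiting time under two orderings of one VM queue.
    tasks = [8, 4, 1, 6, 3]   # illustrative task lengths (abstract time units)

    def average_wait(order):
        t, waits = 0, []
        for length in order:
            waits.append(t)   # a task waits while the queue ahead drains
            t += length
        return sum(waits) / len(waits)

    print("FCFS:", average_wait(tasks))           # first come, first served
    print("SJF :", average_wait(sorted(tasks)))   # shortest job first
    ```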

  16. Analyzing Integrated Cost-Schedule Risk for Complex Product Systems R&D Projects

    Directory of Open Access Journals (Sweden)

    Zhe Xu

    2014-01-01

    Full Text Available The vast majority of research efforts in project risk management tend to assess cost risk and schedule risk independently. However, project cost and time are related in reality, and the relationship between them should be analyzed directly. We propose an integrated cost and schedule risk assessment model for complex product systems R&D projects. The graphical evaluation and review technique (GERT), Monte Carlo simulation, and probability distribution theory are utilized to establish the model. In addition, statistical analysis and regression analysis techniques are employed to analyze the simulation outputs. Finally, a complex product systems R&D project is modeled as an example by the proposed approach, and the simulation outputs are analyzed to illustrate the effectiveness of the risk assessment model. It appears that integrating cost and schedule risk assessment can provide more reliable risk estimation results.
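
    A minimal sketch of the integration idea, with an invented three-activity network and cost model rather than the paper's GERT formulation: each Monte Carlo draw yields a project duration and a duration-dependent cost from the same sample, so their joint behaviour, not just the marginals, can be examined.

    ```python
    # Sketch: joint cost-schedule Monte Carlo on a toy activity network.
    import random, statistics

    def simulate():
        a = random.triangular(4, 10, 6)   # activity A duration
        b = random.triangular(3, 12, 5)   # activity B, in parallel with A
        c = random.triangular(2, 6, 3)    # activity C, after A and B
        duration = max(a, b) + c
        cost = 10 * (a + b + c) + 25 * max(0.0, duration - 12)  # overrun penalty
        return duration, cost

    runs = [simulate() for _ in range(10_000)]
    durations, costs = zip(*runs)
    print("mean duration:", round(statistics.mean(durations), 2))
    # Pearson correlation (statistics.correlation needs Python 3.10+)
    print("duration-cost correlation:",
          round(statistics.correlation(durations, costs), 2))
    ```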

  17. Performance Evaluation of Bidding-Based Multi-Agent Scheduling Algorithms for Manufacturing Systems

    Directory of Open Access Journals (Sweden)

    Antonio Gordillo

    2014-10-01

    Full Text Available Artificial intelligence techniques have been applied to many problems in manufacturing systems in recent years. In the specific field of manufacturing scheduling, many studies have been published trying to cope with the complexity of the manufacturing environment. One of the most utilized approaches is (multi-)agent-based scheduling. Nevertheless, despite the large list of studies reported in this field, there is no resource or scientific study on measuring the performance of this type of approach under very common and critical execution situations. This paper focuses on multi-agent system (MAS) based algorithms for task allocation, particularly in manufacturing applications. The goal is to provide a mechanism to measure the performance of agent-based scheduling approaches for manufacturing systems under key critical situations such as a dynamic environment, rescheduling, and priority change. With this mechanism it is possible to simulate critical situations and to stress the system in order to measure the performance of a given agent-based scheduling method. The proposed mechanism is a pioneering approach for performance evaluation of bidding-based MAS approaches for manufacturing scheduling. The proposed method and evaluation methodology can be used to run tests on different manufacturing floors, since they are independent of the workshop configuration. Moreover, the evaluation results presented in this paper show the key factors and scenarios that most affect market-like MAS approaches for manufacturing scheduling.

  18. Single-machine scheduling with release dates, due dates, and family setup times

    NARCIS (Netherlands)

    J.M.J. Schutten (Marco); S.L. van de Velde (Steef); W.H.M. Zijm

    1996-01-01

    We address the NP-hard problem of scheduling n independent jobs with release dates, due dates, and family setup times on a single machine to minimize the maximum lateness. This problem arises from the constant tug-of-war going on in manufacturing between efficient production and delivery

  19. A Statistical Model for Uplink Intercell Interference with Power Adaptation and Greedy Scheduling

    KAUST Repository

    Tabassum, Hina

    2012-10-03

    This paper deals with the statistical modeling of uplink inter-cell interference (ICI) considering greedy scheduling with power adaptation based on channel conditions. The derived model is implicitly generalized for any kind of shadowing and fading environments. More precisely, we develop a generic model for the distribution of ICI based on the locations of the allocated users and their transmit powers. The derived model is utilized to evaluate important network performance metrics such as ergodic capacity, average fairness and average power preservation numerically. Monte-Carlo simulation details are included to support the analysis and show the accuracy of the derived expressions. In parallel to the literature, we show that greedy scheduling with power adaptation reduces the ICI, average power consumption of users, and enhances the average fairness among users, compared to the case without power adaptation. © 2012 IEEE.

  20. A Statistical Model for Uplink Intercell Interference with Power Adaptation and Greedy Scheduling

    KAUST Repository

    Tabassum, Hina; Yilmaz, Ferkan; Dawy, Zaher; Alouini, Mohamed-Slim

    2012-01-01

    This paper deals with the statistical modeling of uplink inter-cell interference (ICI) considering greedy scheduling with power adaptation based on channel conditions. The derived model is implicitly generalized for any kind of shadowing and fading environments. More precisely, we develop a generic model for the distribution of ICI based on the locations of the allocated users and their transmit powers. The derived model is utilized to evaluate important network performance metrics such as ergodic capacity, average fairness and average power preservation numerically. Monte-Carlo simulation details are included to support the analysis and show the accuracy of the derived expressions. In parallel to the literature, we show that greedy scheduling with power adaptation reduces the ICI, average power consumption of users, and enhances the average fairness among users, compared to the case without power adaptation. © 2012 IEEE.

  1. Genetic algorithm with small population size for search feasible control parameters for parallel hybrid electric vehicles

    Directory of Open Access Journals (Sweden)

    Yu-Huei Cheng

    2017-11-01

    Full Text Available The control strategy is a major unit in hybrid electric vehicles (HEVs). In order to provide suitable control parameters for reducing fuel consumption and engine emissions while maintaining vehicle performance requirements, a genetic algorithm (GA) with small population size is applied to search for feasible control parameters in parallel HEVs. The electric assist control strategy (EACS) is used as the fundamental control strategy of parallel HEVs. The dynamic performance requirements stipulated in the Partnership for a New Generation of Vehicles (PNGV) are considered to maintain vehicle performance. The well-known ADvanced VehIcle SimulatOR (ADVISOR) is used to simulate a specific parallel HEV on the urban dynamometer driving schedule (UDDS). Five population sizes (5, 10, 15, 20, and 25) are used in the GA. The experimental results show that the GA with a population size of 25 is the best for selecting feasible control parameters in parallel HEVs.
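
    A generic sketch of a small-population GA of this kind, where the fitness function is a stand-in for an ADVISOR-style simulation that would return fuel consumption for a candidate parameter set (the real EACS parameters, their bounds, and the simulator are not reproduced here):

    ```python
    # Sketch: small-population GA over two normalized control parameters.
    import random

    def fitness(p):                      # hypothetical simulation surrogate
        x, y = p
        return (x - 0.3) ** 2 + (y - 0.7) ** 2

    def evolve(pop_size=25, generations=100):
        pop = [(random.random(), random.random()) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            survivors = pop[: pop_size // 2]          # truncation selection
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)    # crossover + mutation
                child = tuple(min(1.0, max(0.0, (u + v) / 2 + random.gauss(0, 0.05)))
                              for u, v in zip(a, b))
                children.append(child)
            pop = survivors + children
        return min(pop, key=fitness)

    print(evolve())
    ```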

  2. 29 CFR 825.203 - Scheduling of intermittent or reduced schedule leave.

    Science.gov (United States)

    2010-07-01

    ... leave intermittently or on a reduced leave schedule for planned medical treatment, then the employee... 29 Labor 3 2010-07-01 2010-07-01 false Scheduling of intermittent or reduced schedule leave. 825... OF LABOR OTHER LAWS THE FAMILY AND MEDICAL LEAVE ACT OF 1993 Employee Leave Entitlements Under the...

  3. Combinations of response-reinforcer relations in periodic and aperiodic schedules.

    Science.gov (United States)

    Kuroda, Toshikazu; Cançado, Carlos R X; Lattal, Kennon A; Elcoro, Mirari; Dickson, Chata A; Cook, James E

    2013-03-01

    Key pecking of 4 pigeons was studied under a two-component multiple schedule in which food deliveries were arranged according to a fixed and a variable interfood interval. The percentage of response-dependent food in each component was varied, first in ascending (0, 10, 30, 70 and 100%) and then in descending orders, in successive conditions. The change in response rates was positively related to the percentage of response-dependent food in each schedule component. Across conditions, positively accelerated and linear patterns of responding occurred consistently in the fixed and variable components, respectively. These results suggest that the response-food dependency determines response rates in periodic and aperiodic schedules, and that the temporal distribution of food determines response patterns independently of the response-food dependency. Running rates, but not postfood pauses, also were positively related to the percentage of dependent food in each condition, in both fixed and variable components. Thus, the relation between overall response rate and the percentage of dependent food was mediated by responding that occurred after postfood pausing. The findings together extend previous studies wherein the dependency was either always present or absent, and increase the generality of the effects of variations in the response-food dependency from aperiodic to periodic schedules. © Society for the Experimental Analysis of Behavior.

  4. ATLAS construction schedule

    CERN Multimedia

    Kotamaki, M

    The goal during the last few months has been to freeze and baseline as much as possible the schedules of various ATLAS systems and activities. The main motivations for the re-baselining of the schedules have been the new LHC schedule aiming at first collisions in early 2006 and the encountered delays in civil engineering as well as in the production of some of the detectors. The process was started by first preparing a new installation schedule that takes into account all the new external constraints and the new ATLAS staging scenario. The installation schedule version 3 was approved in the March EB and it provides the Ready For Installation (RFI) milestones for each system, i.e. the date when the system should be available for the start of the installation. TCn is now interacting with the systems aiming at a more realistic and resource loaded version 4 before the end of the year. Using the new RFI milestones as driving dates a new summary schedule has been prepared, or is under preparation, for each system....

  5. Development and use of schedules in education of elementary school children with ASD

    Directory of Open Access Journals (Sweden)

    Sharova Y.A.

    2015-09-01

    Full Text Available The work on preparedness for education of elementary school students with autism disorders can be greatly facilitated by the use of the methods that allow to structure child's knowledge about necessary changes. The use of schedules greatly facilitates the process of education, guiding and work on children's adaptation. The article describes stages of the work on inclusion and use of general and individual schedules in two groups of children with intellectual disabilities and autism spectrum disorders in preschool classes. This work was conducted in the Center for Psychological, Medical and Social Support to Children and Adolescents of the Moscow State University of Psychology and Education. The article contains examples of the use schedules to increase independence and to reduce anxiety in children with autism spectrum disorders.

  6. Relative performance of priority rules for hybrid flow shop scheduling with setup times

    Directory of Open Access Journals (Sweden)

    Helio Yochihiro Fuchigami

    2015-12-01

    Full Text Available This paper focuses on the hybrid flow shop scheduling problem with explicit and sequence-independent setup times. This production environment is a multistage system with a unidirectional flow of jobs, wherein each stage may contain multiple machines available for processing. The optimized measure was the total time to complete the schedule (makespan). The aim was to propose new priority rules to support scheduling and to evaluate their relative performance in the production system considered, measured by the percentage of success, relative deviation, standard deviation of relative deviation, and average CPU time. Computational experiments indicated that the rules using ascending order of the sum of the processing and setup times of the first stage (SPT1 and SPT1_ERD) performed better, together reaching more than 56% of success.
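
    A rough sketch of the SPT1 idea under simplifying assumptions (invented times; each job dispatched to the earliest-free machine at each stage; jobs reach the next stage in finish order): jobs are ordered by the sum of processing and setup time at the first stage and then flow through the stages.

    ```python
    # Sketch: SPT1-style dispatching in a two-stage hybrid flow shop.
    jobs = {"J1": [(5, 1), (3, 1)],    # per stage: (processing, setup)
            "J2": [(2, 1), (6, 2)],
            "J3": [(4, 2), (2, 1)]}
    machines = [2, 1]                  # machines available at each stage

    order = sorted(jobs, key=lambda j: sum(jobs[j][0]))   # SPT1 rule (stage 1)
    ready = {j: 0 for j in jobs}
    for stage, m in enumerate(machines):
        free = [0] * m                                    # machine clocks
        for j in order:
            p, s = jobs[j][stage]
            k = free.index(min(free))                     # earliest-free machine
            start = max(free[k], ready[j])
            free[k] = ready[j] = start + s + p
        order.sort(key=ready.get)      # arrival order at the next stage
    print("makespan:", max(ready.values()))
    ```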

  7. Responsive versus scheduled feeding for preterm infants.

    Science.gov (United States)

    Watson, Julie; McGuire, William

    2015-10-13

    Feeding preterm infants in response to their hunger and satiation cues (responsive, cue-based, or infant-led feeding) rather than at scheduled intervals might enhance infants' and parents' experience and satisfaction, help in the establishment of independent oral feeding, increase nutrient intake and growth rates, and allow earlier hospital discharge. To assess the effect of feeding preterm infants on a responsive basis versus feeding prescribed volumes at scheduled intervals on growth, duration of hospital stay, and parental satisfaction. We used the standard search strategy of the Cochrane Neonatal Review Group. This included searches of the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, Issue 9, 2015), MEDLINE (1966 to September 2015), EMBASE (1980 to September 2015), and CINAHL (1982 to September 2015), conference proceedings, previous reviews, and trial registries. Randomised controlled trials (RCTs) or quasi-RCTs that compared a policy of feeding preterm infants on a responsive basis versus feeding at scheduled intervals. Two review authors assessed trial eligibility and risk of bias and undertook data extraction independently. We analysed the treatment effects in the individual trials and reported the risk ratio and risk difference for dichotomous data and mean difference (MD) for continuous data, with respective 95% confidence intervals (CIs). We used a fixed-effect model in meta-analyses and explored the potential causes of heterogeneity in sensitivity analyses. We found nine eligible RCTs including 593 infants in total. These trials compared responsive with scheduled interval regimens in preterm infants in the transition phase from intragastric tube to oral feeding. The trials were generally small and contained various methodological weaknesses including lack of blinding and incomplete assessment of all randomised participants. Meta-analyses, although limited by data quality and availability, suggest that responsive feeding

  8. Scheduling for dual-hop block-fading channels with two source-user pairs sharing one relay

    KAUST Repository

    Zafar, Ammar

    2013-09-01

    In this paper, we maximize the achievable rate region of a dual-hop network with two sources serving two users independently through a single shared relay. We formulate the problem as maximizing the sum of the weighted long term average throughputs of the two users under stability constraints on the long term throughputs of the source-user pairs. In order to solve the problem, we propose a joint user-and-hop scheduling scheme, which schedules the first or second hop opportunistically based on instantaneous channel state information, in order to exploit multiuser diversity and multihop diversity gains. Numerical results show that the proposed joint scheduling scheme enhances the achievable rate region as compared to a scheme that employs multi-user scheduling on the second-hop alone. Copyright © 2013 by the Institute of Electrical and Electronic Engineers, Inc.

  9. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms

    Science.gov (United States)

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2017-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies. PMID:29399237

  10. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.

    Science.gov (United States)

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2014-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies.

  11. From Preemptive to Non-preemptive Scheduling Using Rejections

    OpenAIRE

    Lucarelli , Giorgio; Srivastav , Abhinav; Trystram , Denis

    2016-01-01

    We study the classical problem of scheduling a set of independent jobs with release dates on a single machine. There exists a huge literature on the preemptive version of the problem, where the jobs can be interrupted at any moment. However, we focus here on the non-preemptive case, which is harder, but more relevant in practice. For instance, the jobs submitted to actual high performance platforms cannot be interrupted or migrated once they start their execution (due ...

  12. Heuristic Method for Decision-Making in Common Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Edyta Kucharska

    2017-10-01

    Full Text Available The aim of the paper is to present a heuristic method for decision-making regarding an NP-hard scheduling problem with limitations related to tasks and resources that depend on the current state of the process. The presented approach is based on the algebraic-logical meta-model (ALMM), which enables making collective decisions in successive process stages, not separately for individual objects or executors. Moreover, taking into account the limitations of the problem, it involves constructing only an acceptable solution, which significantly reduces the amount of calculation. A general algorithm based on the presented method is composed of the following elements: preliminary analysis of the problem, techniques for the choice of decision at a given state, pruning of non-promising trajectories, a technique for selecting the initial state of the final part of the trajectory, and modification of the trajectory generation parameters. The paper includes applications of the presented approach to scheduling problems on unrelated parallel machines with a deadline and machine setup times dependent on the process state, where the relationships between tasks are defined by a graph. The article also presents the results of computational experiments.

  13. A note on resource allocation scheduling with group technology and learning effects on a single machine

    Science.gov (United States)

    Lu, Yuan-Yuan; Wang, Ji-Bo; Ji, Ping; He, Hongyu

    2017-09-01

    In this article, single-machine group scheduling with learning effects and convex resource allocation is studied. The goal is to find the optimal job schedule, the optimal group schedule, and resource allocations of jobs and groups. For the problem of minimizing the makespan subject to limited resource availability, it is proved that the problem can be solved in polynomial time under the condition that the setup times of groups are independent. For the general setup times of groups, a heuristic algorithm and a branch-and-bound algorithm are proposed, respectively. Computational experiments show that the performance of the heuristic algorithm is fairly accurate in obtaining near-optimal solutions.

  14. A parallel algorithm for the non-symmetric eigenvalue problem

    International Nuclear Information System (INIS)

    Sidani, M.M.

    1991-01-01

    An algorithm is presented for the solution of the non-symmetric eigenvalue problem. The algorithm is based on a divide-and-conquer procedure that provides initial approximations to the eigenpairs, which are then refined using Newton iterations. Since the smaller subproblems can be solved independently, and since Newton iterations with different initial guesses can be started simultaneously, the algorithm - unlike the standard QR method - is ideal for parallel computers. The author also reports on his investigation of deflation methods designed to obtain further eigenpairs if needed. Numerical results from implementations on a host of parallel machines (distributed and shared-memory) are presented
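
    A small sketch of the refinement step, assuming an invented matrix and initial guess: Newton's method is applied to F(x, λ) = (Ax − λx, xᵀx − 1), whose Jacobian is the bordered matrix [[A − λI, −x], [2xᵀ, 0]]; each approximate eigenpair can be polished independently, which is what makes the phase parallel.

    ```python
    # Sketch: Newton refinement of one approximate eigenpair (x, lam).
    import numpy as np

    def refine(A, x, lam, iters=10):
        n = len(x)
        for _ in range(iters):
            F = np.concatenate([A @ x - lam * x, [x @ x - 1.0]])
            J = np.zeros((n + 1, n + 1))
            J[:n, :n] = A - lam * np.eye(n)   # d(Ax - lam x)/dx
            J[:n, n] = -x                     # d(Ax - lam x)/dlam
            J[n, :n] = 2 * x                  # d(x.x - 1)/dx
            step = np.linalg.solve(J, -F)
            x, lam = x + step[:n], lam + step[n]
        return x, lam

    A = np.array([[2.0, 1.0], [0.5, 3.0]])
    x0 = np.array([0.4, 0.9])                 # rough initial eigenvector guess
    x, lam = refine(A, x0 / np.linalg.norm(x0), 3.2)
    print(lam, np.linalg.norm(A @ x - lam * x))   # residual near zero
    ```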

  15. Constraint-based scheduling applying constraint programming to scheduling problems

    CERN Document Server

    Baptiste, Philippe; Nuijten, Wim

    2001-01-01

    Constraint Programming is a problem-solving paradigm that establishes a clear distinction between two pivotal aspects of a problem: (1) a precise definition of the constraints that define the problem to be solved and (2) the algorithms and heuristics enabling the selection of decisions to solve the problem. It is because of these capabilities that Constraint Programming is increasingly being employed as a problem-solving tool to solve scheduling problems. Hence the development of Constraint-Based Scheduling as a field of study. The aim of this book is to provide an overview of the most widely used Constraint-Based Scheduling techniques. Following the principles of Constraint Programming, the book consists of three distinct parts: The first chapter introduces the basic principles of Constraint Programming and provides a model of the constraints that are the most often encountered in scheduling problems. Chapters 2, 3, 4, and 5 are focused on the propagation of resource constraints, which usually are responsibl...

  16. Real-time SHVC software decoding with multi-threaded parallel processing

    Science.gov (United States)

    Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu

    2014-09-01

    This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline designed with multiple threads based on groups of coding tree units (CTU). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel i7 processor 2600 running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for those bitstreams generated with SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads are compared in terms of decoding speed and resource usage, including processor and memory.

  17. Scheduling for decommissioning projects

    International Nuclear Information System (INIS)

    Podmajersky, O.E.

    1987-01-01

    This paper describes the Project Scheduling system being employed by the Decommissioning Operations Contractor at the Shippingport Station Decommissioning Project (SSDP). Results from the planning system show that the project continues to achieve its cost and schedule goals. An integrated cost and schedule control system (C/SCS) which uses the concept of earned value for measurement of performance was instituted in accordance with DOE orders. The schedule and cost variances generated by the C/SCS system are used to confirm management's assessment of project status. This paper describes the types of schedules and tools used on the SSDP project to plan and monitor the work, and identifies factors that are unique to a decommissioning project that make scheduling critical to the achievement of the project's goals. 1 fig

  18. Program reference schedule baseline

    International Nuclear Information System (INIS)

    1986-07-01

    This Program Reference Schedule Baseline (PRSB) provides the baseline Program-level milestones and associated schedules for the Civilian Radioactive Waste Management Program. It integrates all Program-level schedule-related activities. This schedule baseline will be used by the Director, Office of Civilian Radioactive Waste Management (OCRWM), and his staff to monitor compliance with Program objectives. Chapter 1 includes brief discussions concerning the relationship of the PRSB to the Program Reference Cost Baseline (PRCB), the Mission Plan, the Project Decision Schedule, the Total System Life Cycle Cost report, the Program Management Information System report, the Program Milestone Review, annual budget preparation, and system element plans. Chapter 2 includes the identification of all Level 0, or Program-level, milestones, while Chapter 3 presents and discusses the critical path schedules that correspond to those Level 0 milestones

  19. Parallel Aircraft Trajectory Optimization with Analytic Derivatives

    Science.gov (United States)

    Falck, Robert D.; Gray, Justin S.; Naylor, Bret

    2016-01-01

    Trajectory optimization is an integral component for the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the non-linear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation-based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.
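
    One common way to realize constraint aggregation of the kind mentioned above is the Kreisselmeier-Steinhauser (KS) function, which replaces many inequality constraints with one smooth, conservative bound; whether this exact form is used in the cited work is an assumption, so the sketch below is illustrative:

        import numpy as np

        def ks_aggregate(g, rho=50.0):
            # Smooth upper bound on max(g) for constraints posed as g_i <= 0.
            # Larger rho tracks the true maximum more closely but makes tight
            # convergence harder, echoing the tolerance issue noted above.
            g = np.asarray(g, dtype=float)
            g_max = g.max()
            return g_max + np.log(np.sum(np.exp(rho * (g - g_max)))) / rho

        print(ks_aggregate([-0.3, -0.1, -0.05]))   # slightly above -0.05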

  20. Second-order schedules of token reinforcement with pigeons: effects of fixed- and variable-ratio exchange schedules.

    Science.gov (United States)

    Foster, T A; Hackenberg, T D; Vaidya, M

    2001-09-01

    Pigeons' key pecks produced food under second-order schedules of token reinforcement, with light-emitting diodes serving as token reinforcers. In Experiment 1, tokens were earned according to a fixed-ratio 50 schedule and were exchanged for food according to either fixed-ratio or variable-ratio exchange schedules, with schedule type varied across conditions. In Experiment 2, schedule type was varied within sessions using a multiple schedule. In one component, tokens were earned according to a fixed-ratio 50 schedule and exchanged according to a variable-ratio schedule. In the other component, tokens were earned according to a variable-ratio 50 schedule and exchanged according to a fixed-ratio schedule. In both experiments, the number of responses per exchange was varied parametrically across conditions, ranging from 50 to 400 responses. Response rates decreased systematically with increases in the fixed-ratio exchange schedules, but were much less affected by changes in the variable-ratio exchange schedules. Response rates were consistently higher under variable-ratio exchange schedules than under comparable fixed-ratio exchange schedules, especially at higher exchange ratios. These response-rate differences were due both to greater pre-ratio pausing and to lower local rates under the fixed-ratio exchange schedules. Local response rates increased with proximity to food under the higher fixed-ratio exchange schedules, indicative of discriminative control by the tokens.

  1. Constraint-based scheduling

    Science.gov (United States)

    Zweben, Monte

    1993-01-01

    The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
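
    The core loop of constraint-based iterative repair is simple to sketch. The toy below (hypothetical tasks and a single no-overlap constraint, not GERRY itself) repeatedly picks a violated constraint and repairs it by moving one task:

        import random

        tasks = {"inspect": 3, "repair": 5, "test": 2}         # durations
        start = {t: random.randint(0, 10) for t in tasks}      # rough schedule

        def violations(start):
            out, names = [], list(tasks)
            for i, a in enumerate(names):
                for b in names[i + 1:]:
                    # hard constraint: tasks may not overlap on one resource
                    if start[a] < start[b] + tasks[b] and start[b] < start[a] + tasks[a]:
                        out.append((a, b))
            return out

        for _ in range(100):                      # iterative repair loop
            bad = violations(start)
            if not bad:
                break
            a, b = random.choice(bad)             # pick a violated constraint
            start[b] = start[a] + tasks[a]        # repair: push one task later
        print(start, violations(start))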

  2. Perceptions of randomized security schedules.

    Science.gov (United States)

    Scurich, Nicholas; John, Richard S

    2014-04-01

    Security of infrastructure is a major concern. Traditional security schedules are unable to provide omnipresent coverage; consequently, adversaries can exploit predictable vulnerabilities to their advantage. Randomized security schedules, which randomly deploy security measures, overcome these limitations, but public perceptions of such schedules have not been examined. In this experiment, participants were asked to make a choice between attending a venue that employed a traditional (i.e., search everyone) or a random (i.e., a probability of being searched) security schedule. The absolute probability of detecting contraband was manipulated (i.e., 1/10, 1/4, 1/2) but equivalent between the two schedule types. In general, participants were indifferent to either security schedule, regardless of the probability of detection. The randomized schedule was deemed more convenient, but the traditional schedule was considered fairer and safer. There were no differences between the traditional and random schedules in terms of perceived effectiveness or deterrence. Policy implications for the implementation and utilization of randomized schedules are discussed. © 2013 Society for Risk Analysis.

  3. Initial Assessment of Parallelization of Monte Carlo Calculation using Graphics Processing Units

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Joo, Han Gyu

    2009-01-01

    Monte Carlo (MC) simulation is an effective tool for calculating neutron transport in complex geometry. However, because Monte Carlo simulates each neutron behavior one by one, it takes a very long computing time if enough neutrons are used for high precision of calculation. Accordingly, methods that reduce the computing time are required. Monte Carlo codes are well suited to parallel calculation, since each neutron behaves independently and thus parallel computation is natural. The parallelization of Monte Carlo codes, however, was previously done using multiple CPUs. Driven by the global demand for high-quality 3D graphics, the Graphics Processing Unit (GPU) has developed into a highly parallel, multi-core processor. This parallel processing capability of GPUs can be made available to engineering computing once a suitable interface is provided. Recently, NVIDIA introduced CUDA, a general-purpose parallel computing architecture. CUDA is a software environment that allows developers to manage the GPU using C/C++ or other languages. In this work, a GPU-based Monte Carlo code is developed and an initial assessment of its parallel performance is investigated
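
    Because each history is independent, the parallel structure is easy to demonstrate even without a GPU. In the sketch below, CPU worker processes stand in for GPU threads, and the one-dimensional slab "physics" is a deliberately toy assumption:

        import random
        from multiprocessing import Pool

        def history(seed):
            # Toy 1-D neutron history: random-walk until absorbed or leaked.
            rng = random.Random(seed)
            x = 0.0
            while abs(x) < 5.0:                       # arbitrary slab half-width
                x += rng.choice((-1.0, 1.0)) * rng.expovariate(1.0)
                if rng.random() < 0.3:                # arbitrary absorption prob.
                    return "absorbed"
            return "leaked"

        if __name__ == "__main__":
            with Pool() as pool:                      # one task per history
                results = pool.map(history, range(10_000))
            print(results.count("leaked") / len(results))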

  4. On-line scheduling of two-machine open shops where jobs arrive over time

    NARCIS (Netherlands)

    Chen, B.; Vestjens, A.P.A.; Woeginger, G.J.

    1998-01-01

    We investigate the problem of on-line scheduling two-machine open shops with the objective of minimizing the makespan. Jobs arrive independently over time, and the existence of a job is not known until its arrival. In the clairvoyant on-line model, the processing requirement of every job becomes

  5. Amphetamine increases schedule-induced drinking reduced by negative punishment procedures.

    Science.gov (United States)

    Pérez-Padilla, Angeles; Pellón, Ricardo

    2003-05-01

    d-Amphetamine has been reported to increase schedule-induced drinking punished by lick-dependent signalled delays in food delivery. This might reflect a drug-behaviour interaction dependent on the type of punisher, because no such effect has been found when drinking was reduced by lick-contingent electric shocks. However, the anti-punishment effect of amphetamine could be mediated by other behavioural processes, such as a loss of discriminative control or an increase in the value of delayed reinforcers. The aim was to test the effects of d-amphetamine on the acquisition and maintenance of schedule-induced drinking reduced by unsignalled delays in food delivery. Rats received 10-s unsignalled delays initiated by each lick after polydipsia was induced by a fixed-time 30-s food reinforcement schedule or from the outset of the experiment. Yoked-control rats received these same delays but independently of their own behaviour. d-Amphetamine (0.1-3.0 mg/kg) was then tested IP. d-Amphetamine dose-dependently increased and then decreased punished schedule-induced drinking. The drug led to dose-dependent reductions when the delays were not contingent or when they were applied from the outset of training. These results support the contention that d-amphetamine increases schedule-induced drinking that has previously been reduced by a negative punishment procedure. This effect cannot be attributed to other potentially involved processes, and therefore supports the idea that drug effects on punished behaviour depend on whether punishment consists of delays in food delivery or shock delivery.

  6. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    Science.gov (United States)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to solve the problem of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is also used to restore the image with hardly any recursion or iteration. Combining the algorithm with data intensiveness, data-parallel computing, and the GPU execution model of single instruction and multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for stream computing on a GPU. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and midfrequency-based filtering. Aiming at better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain after the optimization of data access and the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data to ensure the transmission rate gets around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  7. How Home Health Nurses Plan Their Work Schedules: A Qualitative Descriptive Study.

    Science.gov (United States)

    Irani, Elliane; Hirschman, Karen B; Cacchione, Pamela Z; Bowles, Kathryn H

    2018-06-12

    To describe how home health nurses plan their daily work schedules and what challenges they face during the planning process. Home health nurses are viewed as independent providers and value the nature of their work because of the flexibility and autonomy they hold in developing their work schedules. However, there is limited empirical evidence about how home health nurses plan their work schedules, including the factors they consider during the process and the challenges they face within the dynamic home health setting. Qualitative descriptive design. Semi-structured interviews were conducted with 20 registered nurses who had greater than 2 years of experience in home health and were employed by one of the three participating home health agencies in the mid-Atlantic region of the United States. Data were analyzed using conventional content analysis. Four themes emerged about planning work schedules and daily itineraries: identifying patient needs to prioritize visits accordingly, partnering with patients to accommodate their preferences, coordinating visit timing with other providers to avoid overwhelming patients, and working within agency standards to meet productivity requirements. Scheduling challenges included readjusting the schedule based on patient needs and staffing availability, anticipating longer visits, and maintaining continuity of care with patients. Home health nurses make autonomous decisions regarding their work schedules while considering specific patient and agency factors, and overcome challenges related to the unpredictable nature of providing care in a home health setting. Future research is needed to further explore nurse productivity in home health and improve home health work environments. Home health nurses plan their work schedules to provide high quality care that is patient-centered and timely. The findings also highlight organizational priorities to facilitate continuity of care and support nurses while alleviating the burnout

  8. How should periods without social interaction be scheduled? Children's preference for practical schedules of positive reinforcement.

    Science.gov (United States)

    Luczynski, Kevin C; Hanley, Gregory P

    2014-01-01

    Several studies have shown that children prefer contingent reinforcement (CR) rather than yoked noncontingent reinforcement (NCR) when continuous reinforcement is programmed in the CR schedule. Preference has not, however, been evaluated for practical schedules that involve CR. In Study 1, we assessed 5 children's preference for obtaining social interaction via a multiple schedule (periods of fixed-ratio 1 reinforcement alternating with periods of extinction), a briefly signaled delayed reinforcement schedule, and an NCR schedule. The multiple schedule promoted the most efficient level of responding. In general, children chose to experience the multiple schedule and avoided the delay and NCR schedules, indicating that they preferred multiple schedules as the means to arrange practical schedules of social interaction. In Study 2, we evaluated potential controlling variables that influenced 1 child's preference for the multiple schedule and found that the strong positive contingency was the primary variable. © Society for the Experimental Analysis of Behavior.

  9. Optimal Rules for Single Machine Scheduling with Stochastic Breakdowns

    Directory of Open Access Journals (Sweden)

    Jinwei Gu

    2014-01-01

    This paper studies the problem of scheduling a set of jobs on a single machine subject to stochastic breakdowns, where jobs have to be restarted if preemptions occur because of breakdowns. The breakdown process of the machine is independent of the jobs processed on the machine. The processing times required to complete the jobs are constants if no breakdown occurs. The machine uptimes are independently and identically distributed (i.i.d.) and are subject to a uniform distribution. It is proved that the Longest Processing Time first (LPT) rule minimizes the expected makespan. For the large-scale problem, it is also shown that the Shortest Processing Time first (SPT) rule is optimal to minimize the expected total completion time of all jobs.
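
    The two rules themselves are one-line sorts. The sketch below shows the orderings and the total-completion-time objective on hypothetical data; it deliberately omits the breakdown/restart process, which is what makes the LPT ordering matter for the expected makespan in the paper:

        jobs = [4.0, 1.5, 3.0, 2.5, 5.0]          # hypothetical processing times

        def total_completion_time(sequence):
            t, total = 0.0, 0.0
            for p in sequence:
                t += p                             # job finishes at time t
                total += t
            return total

        lpt = sorted(jobs, reverse=True)           # Longest Processing Time first
        spt = sorted(jobs)                         # Shortest Processing Time first
        print(total_completion_time(spt), total_completion_time(lpt))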

  10. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  11. Evolution of a minimal parallel programming model

    International Nuclear Information System (INIS)

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    2017-01-01

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
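
    The essence of self-scheduled task parallelism is that idle workers pull the next task from a shared pool, so irregular task sizes balance automatically. A minimal shared-memory sketch follows (ADLB generalizes this idea across distributed-memory nodes):

        from multiprocessing import Pool

        def work(n):
            # Placeholder for an irregular amount of computation.
            return sum(i * i for i in range(n))

        if __name__ == "__main__":
            tasks = [10_000 * (i % 7 + 1) for i in range(64)]   # uneven sizes
            with Pool(processes=4) as pool:
                # imap_unordered hands out tasks as workers become free:
                # each worker schedules itself onto the next available task.
                results = list(pool.imap_unordered(work, tasks))
            print(len(results))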

  12. Adaptive track scheduling to optimize concurrency and vectorization in GeantV

    International Nuclear Information System (INIS)

    Apostolakis, J; Brun, R; Carminati, F; Gheata, A; Novak, M; Wenzel, S; Bandieramonte, M; Bitzes, G; Canal, P; Elvira, V D; Jun, S Y; Lima, G; Licht, J C De Fine; Duhem, L; Sehgal, R; Shadura, O

    2015-01-01

    The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics requiring tuning the thresholds to switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events in the transport pipeline to boost locality, dynamically adjusting the particle vector size, or switching from vector to single-track mode when vectorization causes only overhead. This work requires a comprehensive study to optimize these parameters and make the behaviour of the scheduler self-adapting; its initial results are presented here. (paper)

  13. The comparison of predictive scheduling algorithms for different sizes of job shop scheduling problems

    Science.gov (United States)

    Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.; Krenczyk, D.

    2016-08-01

    In this paper a survey of predictive and reactive scheduling methods is carried out in order to evaluate how the ability to predict reliability characteristics influences robustness criteria. The most important reliability characteristics are Mean Time To Failure and Mean Time To Repair. The survey analysis is done for a job shop scheduling problem. The paper answers the question: which method generates robust schedules in the case of a bottleneck failure occurring before, at the beginning of, or after planned maintenance actions? Efficiency of predictive schedules is evaluated using the criteria of makespan, total tardiness, flow time, and idle time. Efficiency of reactive schedules is evaluated using a solution robustness criterion and a quality robustness criterion. This paper is a continuation of the research conducted in [1], where the survey of predictive and reactive scheduling methods was done only for small-size scheduling problems.

  14. Scheduling with Time Lags

    NARCIS (Netherlands)

    X. Zhang (Xiandong)

    2010-01-01

    Scheduling is essential when activities need to be allocated to scarce resources over time. Motivated by the problem of scheduling barges along container terminals in the Port of Rotterdam, this thesis designs and analyzes algorithms for various on-line and off-line scheduling problems

  15. TECHNICAL COORDINATION, SCHEDULE AND INTEGRATION

    CERN Multimedia

    A. Ball

    Introduction and Schedule After nearly seven months of concentrated effort, the installation of services on YB0 moved off the CMS critical path in late November. In line with v36 planning provisions, the additional time needed to finish this challenging task was accommodated by reducing sequential dependencies between assembly tasks, putting more tasks (especially heavy logistic movements) in parallel with activities on, or within, the central wheel. Thus the lowering of wheels YB-1 and YB-2 and of disk YE-3 is already complete, the latter made possible, in the shadow of YB0 work, by inverting the order of the 3 endcap disks in the surface building. Weather conditions permitting, the Tracker will be transported to point 5 during CMS week for insertion in EB before CERN closes. The lowering of the last two disks will take place mid- and end-of January, respectively. Thus central beampipe installation can be confidently planned to start in February as foreseen, allowing closure of CMS in time for CRA...

  16. Automated Scheduling Via Artificial Intelligence

    Science.gov (United States)

    Biefeld, Eric W.; Cooper, Lynne P.

    1991-01-01

    Artificial-intelligence software that automates scheduling was developed in the Operations Mission Planner (OMP) research project. The software is used both in the generation of new schedules and in the modification of existing schedules in view of changes in tasks and/or available resources. The approach is based on iterative refinement. Although the project focused upon scheduling of operations of scientific instruments and other equipment aboard spacecraft, it is also applicable to such terrestrial problems as scheduling production in a factory.

  17. Extensions to the Parallel Real-Time Artificial Intelligence System (PRAIS) for fault-tolerant heterogeneous cycle-stealing reasoning

    Science.gov (United States)

    Goldstein, David

    1991-01-01

    Extensions to an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS) are discussed. PRAIS strives for transparently parallelizing production (rule-based) systems, even under real-time constraints. PRAIS accomplished these goals (presented at the first annual C Language Integrated Production System (CLIPS) conference) by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors. Results using the original PRAIS architecture over a network of Sun 3's, Sun 4's and VAX's are presented. Mechanisms using the producer-consumer model to extend the architecture for fault-tolerance and distributed truth maintenance initiation are also discussed.

  18. Schedule Analytics

    Science.gov (United States)

    2016-04-30

    Warfare, Naval Sea Systems Command Acquisition Cycle Time: Defining the Problem, David Tate, Institute for Defense Analyses; Schedule Analytics, Jennifer... research was comprised of the following high-level steps: Identify and review primary data sources 1... research. However, detailed reviews of the OMB IT Dashboard data revealed that schedule data is highly aggregated. Program start date and program end date

  19. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    Science.gov (United States)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that consists of the minimization of a cost function; minimization achieved by means of a serial Gauss-Seidel kind of algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, our algorithm is of a parallel Jacobi class with alternated minimizations. This strategy is known as the chessboard type, where red pixels can be updated in parallel in the same iteration since they are independent. Similarly, black pixels can be updated in parallel in an alternating iteration. We present parallel implementations of our algorithm for different parallel multicore architectures such as CPU multicore, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, we obtain superior performance from our parallel algorithm when compared with the original serial version. In addition, we present a detailed comparative performance analysis of the developed parallel versions.
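
    The chessboard decomposition is easy to state concretely: color the grid like a checkerboard, and all red pixels can be updated simultaneously because their four neighbors are black (and vice versa). The numpy sketch below uses a Laplace-type smoothing step as a stand-in for the actual residual-map cost minimization:

        import numpy as np

        phi = np.random.rand(64, 64)               # toy phase-like field
        ii, jj = np.indices(phi.shape)
        red = (ii + jj) % 2 == 0                   # checkerboard masks
        black = ~red

        def half_sweep(phi, mask):
            avg = np.zeros_like(phi)
            avg[1:-1, 1:-1] = (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                               phi[1:-1, :-2] + phi[1:-1, 2:]) / 4.0
            inner = mask.copy()                    # skip the boundary pixels
            inner[0, :] = inner[-1, :] = inner[:, 0] = inner[:, -1] = False
            phi[inner] = avg[inner]                # all-at-once "parallel" update

        for _ in range(50):                        # alternate red/black half-sweeps
            half_sweep(phi, red)
            half_sweep(phi, black)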

  20. Hybrid Scheduling/Signal-Level Coordination in the Downlink of Multi-Cloud Radio-Access Networks

    KAUST Repository

    Douik, Ahmed

    2016-03-28

    In the context of resource allocation in cloud-radio access networks, recent studies assume either signal-level or scheduling-level coordination. This paper, instead, considers a hybrid level of coordination for the scheduling problem in the downlink of a multi-cloud radio-access network, so as to benefit from both scheduling policies. Consider a multi-cloud radio access network, where each cloud is connected to several base-stations (BSs) via high capacity links, and therefore allows joint signal processing between them. Across the multiple clouds, however, only scheduling-level coordination is permitted, as it requires a lower level of backhaul communication. The frame structure of every BS is composed of various time/frequency blocks, called power-zones (PZs), and kept at fixed power level. The paper addresses the problem of maximizing a network-wide utility by associating users to clouds and scheduling them to the PZs, under the practical constraints that each user is scheduled, at most, to a single cloud, but possibly to many BSs within the cloud, and can be served by one or more distinct PZs within the BSs' frames. The paper solves the problem using graph theory techniques by constructing the conflict graph. The scheduling problem is, then, shown to be equivalent to a maximum-weight independent set problem in the constructed graph, in which each vertex symbolizes an association of cloud, user, BS and PZ, with a weight representing the utility of that association. Simulation results suggest that the proposed hybrid scheduling strategy provides appreciable gain as compared to the scheduling-level coordinated networks, with a negligible degradation to signal-level coordination.
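
    Although maximum-weight independent set is NP-hard in general, a greedy pass over the conflict graph illustrates the scheduling step; the vertices, weights, and conflicts below are made up:

        vertices = {"v1": 5.0, "v2": 4.0, "v3": 3.5, "v4": 2.0}  # association utilities
        conflicts = {("v1", "v2"), ("v2", "v3"), ("v3", "v4")}

        def in_conflict(a, b):
            return (a, b) in conflicts or (b, a) in conflicts

        chosen = []
        for v in sorted(vertices, key=vertices.get, reverse=True):  # heaviest first
            if all(not in_conflict(v, u) for u in chosen):
                chosen.append(v)        # v is compatible with everything chosen
        print(chosen)                   # ['v1', 'v3'] for the data above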

  1. multiPDEVS: A Parallel Multicomponent System Specification Formalism

    Directory of Open Access Journals (Sweden)

    Damien Foures

    2018-01-01

    Based on the multiDEVS formalism, we introduce multiPDEVS, a parallel and nonmodular formalism for discrete event system specification. This formalism provides the combined advantages of the PDEVS and multiDEVS approaches, such as excellent simulation capabilities for simultaneously scheduled events and components able to influence each other using exclusively their state transitions. We next show the soundness of the formalism by giving a construction showing that any multiPDEVS model is equivalent to a PDEVS atomic model. We then present the associated simulation procedure, usually called an abstract simulator. As a formalism well adapted to expressing cellular automata, we finally propose to compare an implementation of the multiPDEVS formalism with a more classical Cell-DEVS implementation through a fire spread application.

  2. Parallel PDE-Based Simulations Using the Common Component Architecture

    International Nuclear Information System (INIS)

    McInnes, Lois C.; Allan, Benjamin A.; Armstrong, Robert; Benson, Steven J.; Bernholdt, David E.; Dahlgren, Tamara L.; Diachin, Lori; Krishnan, Manoj Kumar; Kohl, James A.; Larson, J. Walter; Lefantzi, Sophia; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G.; Ray, Jaideep; Zhou, Shujia

    2006-01-01

    The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component-based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general-purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations. This chapter discusses recent work on leveraging these CCA efforts in parallel PDE-based simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. We explain how component technology helps to address the different challenges posed by each of these applications, and we highlight how component interfaces built on existing parallel toolkits facilitate the reuse of software for parallel mesh manipulation, discretization, linear algebra, integration, optimization, and parallel data redistribution. We also present performance data to demonstrate the suitability of this approach, and we discuss strategies for applying component technologies to both new and existing applications

  3. A canned food scheduling problem with batch due date

    Science.gov (United States)

    Chung, Tsui-Ping; Liao, Ching-Jong; Smith, Milton

    2014-09-01

    This article considers a canned food scheduling problem where jobs are grouped into several batches. Jobs can be sent to the next operation only when all the jobs in the same batch have finished their processing, i.e. jobs in a batch have a common due date. This batch due date problem is quite common in canned food factories, but there is no efficient heuristic to solve it. The problem can be formulated as an identical parallel machine problem with batch due dates to minimize the total tardiness. Since the problem is NP-hard, two heuristics are proposed to find near-optimal solutions. Computational results comparing the effectiveness and efficiency of the two proposed heuristics with an existing heuristic are reported and discussed.
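
    A minimal list-scheduling sketch for this setting (illustrative only, not one of the paper's heuristics) orders jobs by their batch due date and assigns each to the least-loaded machine, measuring tardiness at the completion of each batch's last job:

        jobs = [("A", 3), ("A", 2), ("B", 4), ("B", 1), ("C", 5)]  # (batch, time)
        due = {"A": 6, "B": 7, "C": 9}                             # batch due dates
        m = 2                                                      # machines

        load = [0.0] * m
        finish = {b: 0.0 for b in due}
        for batch, p in sorted(jobs, key=lambda j: due[j[0]]):     # EDD by batch
            k = load.index(min(load))          # least-loaded machine
            load[k] += p
            finish[batch] = max(finish[batch], load[k])
        total_tardiness = sum(max(0.0, finish[b] - due[b]) for b in due)
        print(finish, total_tardiness)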

  4. Designing cyclic appointment schedules for outpatient clinics with scheduled and unscheduled patient arrivals

    NARCIS (Netherlands)

    Kortbeek, Nikky; Zonderland, Maartje E.; Braaksma, Aleida; Vliegen, Ingrid M. H.; Boucherie, Richard J.; Litvak, Nelly; Hans, Erwin W.

    2014-01-01

    We present a methodology to design appointment systems for outpatient clinics and diagnostic facilities that offer both walk-in and scheduled service. The developed blueprint for the appointment schedule prescribes the number of appointments to plan per day and the moment on the day to schedule the

  5. Designing cyclic appointment schedules for outpatient clinics with scheduled and unscheduled patient arrivals

    NARCIS (Netherlands)

    Kortbeek, Nikky; Zonderland, Maartje Elisabeth; Boucherie, Richardus J.; Litvak, Nelli; Hans, Elias W.

    2011-01-01

    We present a methodology to design appointment systems for outpatient clinics and diagnostic facilities that offer both walk-in and scheduled service. The developed blueprint for the appointment schedule prescribes the number of appointments to plan per day and the moment on the day to schedule the

  6. Proceedings of the workshop on Compilation of (Symbolic) Languages for Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tick, E. (comp.)

    1991-11-01

    This report comprises the abstracts and papers for the talks presented at the Workshop on Compilation of (Symbolic) Languages for Parallel Computers, held October 31--November 1, 1991, in San Diego. These unrefereed contributions were provided by the participants for the purpose of this workshop; many of them will be published elsewhere in peer-reviewed conferences and publications. Our goal in planning this workshop was to bring together researchers from different disciplines with common problems in compilation. In particular, we wished to encourage interaction between researchers working on compilation of symbolic languages and those working on compilation of conventional, imperative languages. The fundamental problems facing researchers interested in compilation of logic, functional, and procedural programming languages for parallel computers are essentially the same. However, differences in the basic programming paradigms have led to different communities emphasizing different species of the parallel compilation problem. For example, parallel logic and functional languages provide dataflow-like formalisms in which control dependencies are unimportant. Hence, a major focus of research in compilation has been on techniques that try to infer when sequential control flow can safely be imposed. Granularity analysis for scheduling is a related problem. The single-assignment property leads to a need for analysis of memory use in order to detect opportunities for reuse. Much of the work in each of these areas relies on the use of abstract interpretation techniques.

  7. NRC comprehensive records disposition schedule

    International Nuclear Information System (INIS)

    1992-03-01

    Title 44 United States Code, ''Public Printing and Documents,'' regulations cited in the General Services Administration's (GSA) ''Federal Information Resources Management Regulations'' (FIRMR), Part 201-9, ''Creation, Maintenance, and Use of Records,'' and regulation issued by the National Archives and Records Administration (NARA) in 36 CFR Chapter XII, Subchapter B, ''Records Management,'' require each agency to prepare and issue a comprehensive records disposition schedule that contains the NARA approved records disposition schedules for records unique to the agency and contains the NARA's General Records Schedules for records common to several or all agencies. The approved records disposition schedules specify the appropriate duration of retention and the final disposition for records created or maintained by the NRC. NUREG-0910, Rev. 2, contains ''NRC's Comprehensive Records Disposition Schedule,'' and the original authorized approved citation numbers issued by NARA. Rev. 2 totally reorganizes the records schedules from a functional arrangement to an arrangement by the host office. A subject index and a conversion table have also been developed for the NRC schedules to allow staff to identify the new schedule numbers easily and to improve their ability to locate applicable schedules

  8. Integrating Preventive Maintenance Scheduling As Probability Machine Failure And Batch Production Scheduling

    Directory of Open Access Journals (Sweden)

    Zahedi Zahedi

    2016-06-01

    This paper discusses an integrated model of batch production scheduling and machine maintenance scheduling. Batch production scheduling uses a minimum total actual flow time criterion, and machine maintenance scheduling uses the probability of machine failure based on a Weibull distribution. The model assumes no nonconforming parts within the planning horizon. The model shows that an increase in the number of batches (the length of the production run) up to a certain limit will minimize the total actual flow time. Meanwhile, an increase in the length of the production run implies an increase in the number of PM actions. An example is given to show how the model and algorithm work.
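
    The Weibull ingredient of the model is just the uptime distribution's failure probability over a production run; the parameters below are illustrative assumptions, not the paper's data:

        import math

        beta, eta = 2.0, 40.0        # Weibull shape and scale (hours), assumed

        def failure_prob(t):
            # Probability the machine fails within a run of length t:
            # F(t) = 1 - exp(-(t / eta) ** beta)
            return 1.0 - math.exp(-((t / eta) ** beta))

        for run_length in (10, 20, 30):
            # Longer runs raise the failure risk, which the integrated model
            # trades off against the flow-time benefit of longer runs.
            print(run_length, round(failure_prob(run_length), 3))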

  9. Continuous development of schemes for parallel computing of the electrostatics in biological systems: implementation in DelPhi.

    Science.gov (United States)

    Li, Chuan; Petukh, Marharyta; Li, Lin; Alexov, Emil

    2013-08-15

    Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods, and even the standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li, et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculation components of the algorithm. The parallelization scheme utilizes different approaches such as space domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in severalfold speedups. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential and electric field distributions are calculated for the bovine mitochondrial supercomplex, illustrating their complex topology, which cannot be obtained by modeling the supercomplex components alone. Copyright © 2013 Wiley Periodicals, Inc.

  10. A novel harmonic current sharing control strategy for parallel-connected inverters

    DEFF Research Database (Denmark)

    Guan, Yajuan; Guerrero, Josep M.; Savaghebi, Mehdi

    2017-01-01

    A novel control strategy which enables proportional linear and nonlinear load sharing among paralleled inverters and voltage harmonic suppression is proposed in this paper. The proposed method is based on the autonomous currents sharing controller (ACSC) instead of conventional power droop control ... to provide fast transient response, decoupling control and large stability margin. The current components at different sequences and orders are decomposed by a multi-second-order generalized integrator-based frequency locked loop (MSOGI-FLL). A harmonic-orthogonal-virtual-resistances controller (HOVR ...) is used to proportionally share current components at different sequences and orders independently among the paralleled inverters. Proportional resonance controllers tuned at selected frequencies are used to suppress voltage harmonics. Simulations based on two 2.2 kW paralleled three-phase inverters...

  11. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.

  12. NASA Schedule Management Handbook

    Science.gov (United States)

    2011-01-01

    The purpose of schedule management is to provide the framework for time-phasing, resource planning, coordination, and communicating the necessary tasks within a work effort. The intent is to improve schedule management by providing recommended concepts, processes, and techniques used within the Agency and private industry. The intended function of this handbook is two-fold: first, to provide guidance for meeting the scheduling requirements contained in NPR 7120.5, NASA Space Flight Program and Project Management Requirements, NPR 7120.7, NASA Information Technology and Institutional Infrastructure Program and Project Requirements, NPR 7120.8, NASA Research and Technology Program and Project Management Requirements, and NPD 1000.5, Policy for NASA Acquisition. The second function is to describe the schedule management approach and the recommended best practices for carrying out this project control function. With regards to the above project management requirements documents, it should be noted that those space flight projects previously established and approved under the guidance of prior versions of NPR 7120.5 will continue to comply with those requirements until project completion has been achieved. This handbook will be updated as needed, to enhance efficient and effective schedule management across the Agency. It is acknowledged that most, if not all, external organizations participating in NASA programs/projects will have their own internal schedule management documents. Issues that arise from conflicting schedule guidance will be resolved on a case by case basis as contracts and partnering relationships are established. It is also acknowledged and understood that all projects are not the same and may require different levels of schedule visibility, scrutiny and control. Project type, value, and complexity are factors that typically dictate which schedule management practices should be employed.

  13. Profit-based conventional resource scheduling with renewable energy penetration

    Science.gov (United States)

    Reddy, K. Srikanth; Panwar, Lokesh Kumar; Kumar, Rajesh; Panigrahi, B. K.

    2017-08-01

    Technological breakthroughs in renewable energy technologies (RETs) have enabled them to attain grid parity, thereby making them potential contenders for existing conventional resources. To examine the market participation of RETs, this paper formulates a scheduling problem accommodating energy market participation of wind and solar independent power producers (IPPs), treating both conventional resources and RETs as identical entities. Furthermore, constraints pertaining to penetration and curtailment of RETs are restructured. Additionally, an appropriate objective function for the profit incurred by conventional-resource IPPs through reserve market participation as a function of renewable energy curtailment is also proposed. The proposed concept is simulated with a test system comprising 10 conventional generation units in conjunction with solar photovoltaic (SPV) and wind energy generators (WEG). The simulation results indicate that renewable energy integration and its curtailment limits influence the market participation or scheduling strategies of conventional resources in both energy and reserve markets. Furthermore, load and reliability parameters are also affected.

  14. Outage scheduling and implementation

    International Nuclear Information System (INIS)

    Allison, J.E.; Segall, P.; Smith, R.R.

    1986-01-01

    Successful preparation and implementation of an outage schedule and completion of scheduled and emergent work within an identified critical path time frame is a result of careful coordination by Operations, Work Control, Maintenance, Engineering, Planning and Administration and others. At the Fast Flux Test Facility (FFTF) careful planning has been responsible for meeting all scheduled outage critical paths

  15. Influence of intravenous self-administered psychomotor stimulants on performance of rhesus monkeys in a multiple schedule paradigm.

    Science.gov (United States)

    Hoffmeister, F

    1980-01-01

    Rhesus monkeys were trained to complete three multiple schedules. The schedules consisted of three components: a fixed interval (component 1), a variable interval (component 2), and a fixed ratio (component 3). During components 1 and 2, pressing lever 1 was always reinforced by food delivery. During component 3, pressing lever 2 resulted in either food delivery or intravenous infusions of saline solution, solutions of cocaine, of d-amphetamine, of phenmetrazine, or fenetylline. In schedule I, animals were presented with all three components independent of key-pressing behavior during components 1 and 2. In schedule II the availability of component 2 was dependent on completion of component 1. Component 3 was made available only on completion of component 2. Noncompletion of components 1 or 2 resulted in time-out of 15 and 10 min, respectively. Schedule III was identical with schedule II, except that in schedule III the completion of components was indicated only by a change in the lever lights. The influence of self-administered drugs on behavior in all three components was evaluated. Self-administration of psychomotor stimulants impaired the performance of animals and delayed completion of components 1 and 2 of schedules I, II, and III. The effects on behavior were similar with low drug intake in schedule III, moderate intake in schedule II, and high drug intake in schedule I. These effects were strong with self-administration of phenmetrazine, moderate with self-administration of cocaine and d-amphetamine, and weak with self-administration of fenetylline.

  16. Project Schedule Simulation

    DEFF Research Database (Denmark)

    Mizouni, Rabeb; Lazarova-Molnar, Sanja

    2015-01-01

    overrun both their budget and time. To improve the quality of initial project plans, we show in this paper the importance of (1) reflecting features' priorities/risk in task schedules and (2) considering uncertainties related to human factors in plan schedules. To make simulation tasks reflect features' priority as well as multimodal team allocation, enhanced project schedules (EPS), where remedial actions scenarios (RAS) are added, were introduced. They reflect potential schedule modifications in case of uncertainties and promote a dynamic sequencing of involved tasks rather than the static conventional...

  17. A novel role for Mc1r in the parallel evolution of depigmentation in independent populations of the cavefish Astyanax mexicanus.

    Directory of Open Access Journals (Sweden)

    Joshua B Gross

    2009-01-01

    The evolution of degenerate characteristics remains a poorly understood phenomenon. Only recently has the identification of mutations underlying regressive phenotypes become accessible through the use of genetic analyses. Focusing on the Mexican cave tetra Astyanax mexicanus, we describe, here, an analysis of the brown mutation, which was first described in the literature nearly 40 years ago. This phenotype causes reduced melanin content, decreased melanophore number, and brownish eyes in convergent cave forms of A. mexicanus. Crosses demonstrate non-complementation of the brown phenotype in F2 individuals derived from two independent cave populations: Pachón and the linked Yerbaniz and Japonés caves, indicating the same locus is responsible for reduced pigmentation in these fish. While the brown mutant phenotype arose prior to the fixation of albinism in Pachón cave individuals, it is unclear whether the brown mutation arose before or after the fixation of albinism in the linked Yerbaniz/Japonés caves. Using a QTL approach combined with sequence and functional analyses, we have discovered that two distinct genetic alterations in the coding sequence of the gene Mc1r cause reduced pigmentation associated with the brown mutant phenotype in these caves. Our analysis identifies a novel role for Mc1r in the evolution of degenerative phenotypes in blind Mexican cavefish. Further, the brown phenotype has arisen independently in geographically separate caves, mediated through different mutations of the same gene. This example of parallelism indicates that certain genes are frequent targets of mutation in the repeated evolution of regressive phenotypes in cave-adapted species.

  18. Integrated parallel reception, excitation, and shimming (iPRES).

    Science.gov (United States)

    Han, Hui; Song, Allen W; Truong, Trong-Kha

    2013-07-01

    To develop a new concept for a hardware platform that enables integrated parallel reception, excitation, and shimming. This concept uses a single coil array rather than separate arrays for parallel excitation/reception and B0 shimming. It relies on a novel design that allows a radiofrequency current (for excitation/reception) and a direct current (for B0 shimming) to coexist independently in the same coil. Proof-of-concept B0 shimming experiments were performed with a two-coil array in a phantom, whereas B0 shimming simulations were performed with a 48-coil array in the human brain. Our experiments show that individually optimized direct currents applied in each coil can reduce the B0 root-mean-square error by 62-81% and minimize distortions in echo-planar images. The simulations show that dynamic shimming with the 48-coil integrated parallel reception, excitation, and shimming array can reduce the B0 root-mean-square error in the prefrontal and temporal regions by 66-79% as compared with static second-order spherical harmonic shimming and by 12-23% as compared with dynamic shimming with a 48-coil conventional shim array. Our results demonstrate the feasibility of the integrated parallel reception, excitation, and shimming concept to perform parallel excitation/reception and B0 shimming with a unified coil system as well as its promise for in vivo applications. Copyright © 2013 Wiley Periodicals, Inc.

  19. State-plane analysis of parallel resonant converter

    Science.gov (United States)

    Oruganti, R.; Lee, F. C.

    1985-01-01

    A method for analyzing the complex operation of a parallel resonant converter is developed, utilizing graphical state-plane techniques. The comprehensive mode analysis uncovers, for the first time, the presence of other complex modes besides the continuous conduction mode and the discontinuous conduction mode and determines their theoretical boundaries. Based on the insight gained from the analysis, a novel, high-frequency resonant buck converter is proposed. The voltage conversion ratio of the new converter is almost independent of load.

  20. Scheduling theory, algorithms, and systems

    CERN Document Server

    Pinedo, Michael L

    2016-01-01

    This new edition of the well-established text Scheduling: Theory, Algorithms, and Systems provides an up-to-date coverage of important theoretical models in the scheduling literature as well as important scheduling problems that appear in the real world. The accompanying website includes supplementary material in the form of slide-shows from industry as well as movies that show actual implementations of scheduling systems. The main structure of the book, as per previous editions, consists of three parts. The first part focuses on deterministic scheduling and the related combinatorial problems. The second part covers probabilistic scheduling models; in this part it is assumed that processing times and other problem data are random and not known in advance. The third part deals with scheduling in practice; it covers heuristics that are popular with practitioners and discusses system design and implementation issues. All three parts of this new edition have been revamped, streamlined, and extended. The reference...

  1. Gain scheduling using the Youla parameterization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    1999-01-01

    This paper considers gain scheduling controllers for the case where the scheduling parameter vector cannot be measured directly but needs to be estimated. An estimate of the scheduling vector has been derived by using the Youla parameterization. The use ... in connection with H_inf gain scheduling controllers.

  2. Short-term hydro generation scheduling of Three Gorges–Gezhouba cascaded hydropower plants using hybrid MACS-ADE approach

    International Nuclear Information System (INIS)

    Mo, Li; Lu, Peng; Wang, Chao; Zhou, Jianzhong

    2013-01-01

    Highlights: • MACS and ADE algorithms are hybridized as the MACS-ADE method for solving the STHGS problem. • An adaptive mutation is integrated into the proposed algorithm to avoid premature convergence. • MACS and ADE are run in parallel in search of better solutions. • Several effective heuristic strategies are designed for dealing with the various constraints of the STHGS problem. Abstract: Short-term hydro generation scheduling (STHGS) aims at determining an optimal hydro generation schedule that minimizes water consumption over one day or week while meeting various system constraints. In this paper, the STHGS problem is decomposed into two sub-problems: (i) the unit commitment (UC) sub-problem and (ii) the economic load dispatch (ELD) sub-problem. We then present a hybrid algorithm based on the multi ant colony system (MACS) and differential evolution (DE) for solving the STHGS problem. First, MACS is used for dealing with the UC sub-problem: a set of cooperating ant colonies chooses the unit states over the scheduling horizon. Then, adaptive differential evolution (ADE) is used to solve the ELD sub-problem. MACS and ADE are run in parallel, adjusting their solutions in search of a better solution. Meanwhile, local and global pheromone updating rules in MACS and an adaptive dynamic parameter adjusting strategy in DE are applied to enhance the search ability of MACS-ADE. Finally, the proposed method is applied to the STHGS problem of the Three Gorges–Gezhouba cascaded hydropower plants to verify its feasibility and effectiveness. Compared with other established methods, the simulation results reveal that the proposed MACS-ADE approach has the best convergence properties and computational efficiency, with less water consumption

  3. Pseudo-random Trees: Multiple Independent Sequence Generators for Parallel and Branching Computations

    Science.gov (United States)

    Halton, John H.

    1989-09-01

    A class of families of linear congruential pseudo-random sequences is defined, for which it is possible to branch at any event without changing the sequence of random numbers used in the original random walk and for which the sequences in different branches show properties analogous to mutual statistical independence. This is a hitherto unavailable, and computationally desirable, tool.
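
    As a toy illustration of the branching idea (explicitly not Halton's exact construction), the sketch below uses a simple linear congruential generator in which a branch derives a fresh child stream from the state at the branch point, leaving the parent sequence untouched.

```python
class BranchingLCG:
    """Toy linear congruential generator that can branch into child streams.

    Illustrative only: the constants are Knuth's well-known MMIX LCG
    parameters, and children receive a distinct odd increment derived from
    the state at the branch point, so sibling streams differ. This mimics,
    but does not reproduce, Halton's pseudo-random tree construction.
    """
    M = 2**64
    A = 6364136223846793005

    def __init__(self, seed=1, c=1442695040888963407):
        self.x = seed % self.M
        self.c = c | 1          # the increment must be odd

    def next(self):
        self.x = (self.A * self.x + self.c) % self.M
        return self.x / self.M  # uniform in [0, 1)

    def branch(self):
        # Child stream: same recurrence, new odd increment derived from the
        # current state; branching never disturbs the parent's sequence.
        return BranchingLCG(seed=self.x ^ 0x9E3779B97F4A7C15, c=2 * self.x + 1)

walk = BranchingLCG(seed=42)
steps = [walk.next() for _ in range(3)]
child = walk.branch()                            # event: spawn a branch
more_parent = [walk.next() for _ in range(3)]    # parent is unaffected
more_child = [child.next() for _ in range(3)]
print(steps, more_parent, more_child)
```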

  4. NRC comprehensive records disposition schedule

    International Nuclear Information System (INIS)

    1982-07-01

    Effective January 1, 1982, NRC will institute records retention and disposal practices in accordance with the approved Comprehensive Records Disposition Schedule (CRDS). CRDS is comprised of NRC Schedules (NRCS) 1 to 4, which apply to the agency's program or substantive records, and General Records Schedules (GRS) 1 to 22, which apply to housekeeping or facilitative records. The schedules are assembled functionally/organizationally to facilitate their use. Preceding the records descriptions and disposition instructions for both NRCS and GRS, there are brief statements on the organizational units which accumulate the records in each functional area, and other information regarding the schedules' applicability.

  5. Optimal data replication: A new approach to optimizing parallel EM algorithms on a mesh-connected multiprocessor for 3D PET image reconstruction

    International Nuclear Information System (INIS)

    Chen, C.M.; Lee, S.Y.

    1995-01-01

    The EM algorithm promises an estimated image with the maximal likelihood for 3D PET image reconstruction. However, due to its long computation time, the EM algorithm has not been widely used in practice. While several parallel implementations of the EM algorithm have been developed to make it feasible, they do not guarantee optimal parallelization efficiency. In this paper, the authors propose a new parallel EM algorithm which maximizes performance by optimizing data replication on a mesh-connected message-passing multiprocessor. To optimize data replication, the authors have formally derived the optimal allocation of shared data, group sizes, integration and broadcasting of replicated data, as well as the scheduling of shared data accesses. The proposed parallel EM algorithm has been implemented on an iPSC/860 with 16 PEs. The experimental and theoretical results, which are consistent with each other, show that the proposed parallel EM algorithm improves performance substantially over those using unoptimized data replication.
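
    A minimal sketch of the data-parallel structure of ML-EM (not the authors' optimized replication scheme) using mpi4py: each rank owns a block of projection rows of a hypothetical system matrix, and one global reduction per iteration combines the partial back-projections.

```python
# Run with, e.g.: mpiexec -n 4 python parallel_em.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_proj, n_vox = 4096, 1024                  # hypothetical problem size
rows = np.array_split(np.arange(n_proj), size)[rank]

rng = np.random.default_rng(0)              # same seed: consistent data on all ranks
A_full = rng.random((n_proj, n_vox)) * (rng.random((n_proj, n_vox)) < 0.01)
x_true = rng.random(n_vox)
A = A_full[rows]                            # this rank's block of the system matrix
y = A @ x_true                              # this rank's measured projections

x = np.ones(n_vox)
norm = np.zeros(n_vox)                      # sensitivity image: global column sums
comm.Allreduce(A.sum(axis=0).copy(), norm, op=MPI.SUM)

for _ in range(50):
    fp = A @ x                              # local forward projection
    ratio = np.divide(y, fp, out=np.zeros_like(y), where=fp > 0)
    bp_local = A.T @ ratio                  # local back projection
    bp = np.zeros(n_vox)
    comm.Allreduce(bp_local, bp, op=MPI.SUM)
    x *= bp / np.maximum(norm, 1e-12)       # multiplicative ML-EM update

if rank == 0:
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```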

  6. Diagnosing Autism Spectrum Disorders in Adults : the Use of Autism Diagnostic Observation Schedule (ADOS) Module 4

    NARCIS (Netherlands)

    Bastiaansen, Jojanneke A.; Meffert, Harma; Hein, Simone; Huizinga, Petra; Ketelaars, Cees; Pijnenborg, Marieke; Bartels, Arnold; Minderaa, Ruud; Keysers, Christian; de Bildt, Annelies

    Autism Diagnostic Observation Schedule (ADOS) module 4 was investigated in an independent sample of high-functioning adult males with an autism spectrum disorder (ASD) compared to three specific diagnostic groups: schizophrenia, psychopathy, and typical development. ADOS module 4 proves to be a reliable instrument with good predictive value.

  7. Schedule optimization study implementation plan

    International Nuclear Information System (INIS)

    1993-11-01

    This Implementation Plan is intended to provide a basis for improvements in the conduct of the Environmental Restoration (ER) Program at Hanford. The Plan is based on the findings of the Schedule Optimization Study (SOS) team, which was convened for two weeks in September 1992 at the request of the U.S. Department of Energy (DOE) Richland Operations Office (RL). The need for the study arose out of a schedule dispute regarding the submission of the 1100-EM-1 Operable Unit (OU) Remedial Investigation/Feasibility Study (RI/FS) Work Plan. The SOS team was comprised of independent professionals from other federal agencies and the private sector experienced in environmental restoration within the federal system. The objective of the team was to examine reasons for the lengthy RI/FS process and recommend ways to expedite it. The SOS team issued its Final Report in December 1992. The report found that the most serious impediments to cleanup relate to a series of management and policy issues which are within the control of the three parties managing and monitoring Hanford -- the DOE, the U.S. Environmental Protection Agency (EPA), and the State of Washington Department of Ecology (Ecology). The SOS Report identified the following eight cross-cutting issues as the root of major impediments to the Hanford Site cleanup. Each of these eight issues is quoted from the SOS Report, followed by a brief, general description of the proposed approach being developed.

  8. Scheduling the powering tests

    CERN Document Server

    Barbero-Soto, E; Casas-Lino, M P; Fernandez-Robles, C; Foraz, K; Pojer, M; Saban, R; Schmidt, R; Solfaroli-Camillocci, M; Vergara-Fernandez, A

    2008-01-01

    The Large Hadron Collider is now entering its final phase before receiving beam, and the activities at CERN between 2007 and 2008 have shifted from installation work to the commissioning of the technical systems ("hardware commissioning"). Due to the unprecedented complexity of this machine, all systems are or will be tested as far as possible before the cool-down starts. Systems are first tested individually before being tested globally together. The architecture of the LHC, which is partitioned into eight cryogenically and electrically independent sectors, allows commissioning on a sector-by-sector basis. When a sector reaches nominal cryogenic conditions, commissioning of the magnet powering system to nominal current for all magnets can be performed. This paper briefly describes the different activities to be performed during the powering tests of the superconducting magnet system and presents the scheduling issues raised by co-activities as well as the management of resources.

  9. Regional cooperation planning. Project planning for JAEA/SNL regional cooperation on remote monitoring

    International Nuclear Information System (INIS)

    Olsen, John

    2006-01-01

    Developing cooperation between the JAEA's NPSTC and the NNCA may take advantage of bilateral activities between those parties and SNL. The merger of JNC and JAERI has affected the schedule for JAEA/SNL cooperation. Also, the evolution of the NNCA as an independent agency has slowed the projected schedule for cooperation between the JAEA and the NNCA. A potential schedule for establishment of a quadrilateral remote monitoring system may include interim activities, securing an agreement of some type, and actual establishment of VPN links. A parallel schedule might exist for informing other regional parties and gaining their interest. (author)

  10. Clustering and Genetic Algorithm Based Hybrid Flowshop Scheduling with Multiple Operations

    Directory of Open Access Journals (Sweden)

    Yingfeng Zhang

    2014-01-01

    This research is motivated by a flowshop scheduling problem at our collaborating manufacturing company for aeronautic products. The heat-treatment stage (HTS) and precision forging stage (PFS) of the case are selected as a two-stage hybrid flowshop system. In HTS, there are four parallel machines and each machine can process a batch of jobs simultaneously. In PFS, there are two machines. Each machine can install any of four modules for processing workpieces of different sizes. The problem is characterized by many constraints, such as batching operation, a blocking environment, and setup time and working time limitations of modules, and so forth. In order to deal with these special characteristics, a clustering and genetic algorithm is used to compute good solutions for the two-stage hybrid flowshop problem. The clustering is used to group the jobs according to the processing ranges of the different modules of PFS. The genetic algorithm is used to schedule the optimal sequence of the grouped jobs for the HTS and PFS. Finally, a case study is used to demonstrate the efficiency and effectiveness of the designed genetic algorithm.
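
    As a concrete illustration of the genetic-algorithm half of such a scheme (the clustering step and the case's batching, blocking, and module constraints are omitted), the sketch below evolves a job permutation that minimizes makespan in a plain two-stage flowshop with made-up processing times.

```python
import random

def makespan(seq, p1, p2):
    """Makespan of permutation `seq` in a simple two-stage flowshop."""
    t1 = t2 = 0.0
    for j in seq:
        t1 += p1[j]                 # stage-1 completion of job j
        t2 = max(t2, t1) + p2[j]    # stage 2 waits for stage 1 and the machine
    return t2

def ga_flowshop(p1, p2, pop=50, gens=300, seed=0):
    rng = random.Random(seed)
    n = len(p1)
    P = [rng.sample(range(n), n) for _ in range(pop)]

    def fit(s):
        return makespan(s, p1, p2)

    for _ in range(gens):
        # tournament selection of parents
        parents = [min(rng.sample(P, 3), key=fit) for _ in range(pop)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = rng.randrange(1, n)                 # order-crossover style
            head = a[:cut]
            child = head + [j for j in b if j not in head]
            if rng.random() < 0.2:                    # swap mutation
                i, k = rng.sample(range(n), 2)
                child[i], child[k] = child[k], child[i]
            children += [child, b[:]]
        P = sorted(P + children, key=fit)[:pop]       # elitist replacement
    return P[0], fit(P[0])

p1 = [4, 7, 3, 8, 5, 6, 2, 9]   # made-up stage-1 processing times
p2 = [6, 2, 7, 3, 8, 4, 9, 5]   # made-up stage-2 processing times
seq, ms = ga_flowshop(p1, p2)
print(seq, ms)
```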

  11. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    Science.gov (United States)

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian error distributions, this approach is not optimal. Therefore, rather than using probabilistic modeling, we propose an alternative non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts), we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters necessary in motor decoding is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
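
    For orientation, here is a minimal batch version of MEE adaptation for a linear filter (illustrative only, not the paper's FPGA design): it ascends the gradient of the quadratic information potential V(e) = (1/N^2) sum_i sum_j G_sigma(e_i - e_j), which is equivalent to minimizing a kernel estimate of the error entropy; the O(N^2) pairwise term is exactly the computational burden that motivates hardware parallelization.

```python
import numpy as np

def mee_refine(X, d, w0, sigma=1.0, mu=0.05, epochs=200):
    """Refine linear filter weights by gradient ascent on the Gaussian-kernel
    information potential of the errors (batch MEE)."""
    w = w0.copy()
    for _ in range(epochs):
        e = d - X @ w                        # errors for current weights
        de = e[:, None] - e[None, :]         # pairwise error differences
        G = np.exp(-de**2 / (2 * sigma**2))  # Gaussian kernel on differences
        dX = X[:, None, :] - X[None, :, :]   # pairwise input differences
        # dV/dw = (1/(sigma^2 N^2)) * sum_ij G_ij * (e_i - e_j) * (x_i - x_j)
        grad = ((G * de)[:, :, None] * dX).sum(axis=(0, 1))
        w += mu * grad / (sigma**2 * len(d)**2)
    return w

# Toy decoding problem with heavy-tailed (non-Gaussian) noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                # stand-in "spike count" features
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
d = X @ w_true + rng.standard_t(df=2, size=200) * 0.3

w_mse, *_ = np.linalg.lstsq(X, d, rcond=None)  # MSE (Wiener-like) solution
w_mee = mee_refine(X, d, w_mse)                # MEE refinement from MSE start
print(np.round(w_mse, 3), np.round(w_mee, 3))
```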

  12. From non-preemptive to preemptive scheduling using synchronization synthesis.

    Science.gov (United States)

    Černý, Pavol; Clarke, Edmund M; Henzinger, Thomas A; Radhakrishna, Arjun; Ryzhyk, Leonid; Samanta, Roopsha; Tarrach, Thorsten

    2017-01-01

    We present a computer-aided programming approach to concurrency. The approach allows programmers to program assuming a friendly, non-preemptive scheduler, and our synthesis procedure inserts synchronization to ensure that the final program works even with a preemptive scheduler. The correctness specification is implicit, inferred from the non-preemptive behavior. Let us consider sequences of calls that the program makes to an external interface. The specification requires that any such sequence produced under a preemptive scheduler should be included in the set of sequences produced under a non-preemptive scheduler. We guarantee that our synthesis does not introduce deadlocks and that the synchronization inserted is optimal w.r.t. a given objective function. The solution is based on a finitary abstraction, an algorithm for bounded language inclusion modulo an independence relation, and generation of a set of global constraints over synchronization placements. Each model of the global constraints set corresponds to a correctness-ensuring synchronization placement. The placement that is optimal w.r.t. the given objective function is chosen as the synchronization solution. We apply the approach to device-driver programming, where the driver threads call the software interface of the device and the API provided by the operating system. Our experiments demonstrate that our synthesis method is precise and efficient. The implicit specification helped us find one concurrency bug previously missed when model-checking using an explicit, user-provided specification. We implemented objective functions for coarse-grained and fine-grained locking and observed that different synchronization placements are produced for our experiments, favoring a minimal number of synchronization operations or maximum concurrency, respectively.

  13. Unifying practice schedules in the timescales of motor learning and performance.

    Science.gov (United States)

    Verhoeven, F Martijn; Newell, Karl M

    2018-06-01

    In this article, we elaborate from a multiple time scales model of motor learning to examine the independent and integrated effects of massed and distributed practice schedules within- and between-sessions on the persistent (learning) and transient (warm-up, fatigue) processes of performance change. The timescales framework reveals the influence of practice distribution on four learning-related processes: the persistent processes of learning and forgetting, and the transient processes of warm-up decrement and fatigue. The superposition of the different processes of practice leads to a unified set of effects for massed and distributed practice within- and between-sessions in learning motor tasks. This analysis of the interaction between the duration of the interval of practice trials or sessions and parameters of the introduced time scale model captures the unified influence of the between trial and session scheduling of practice on learning and performance. It provides a starting point for new theoretically based hypotheses, and the scheduling of practice that minimizes the negative effects of warm-up decrement, fatigue and forgetting while exploiting the positive effects of learning and retention.

  14. Differentiating social and personal power: opposite effects on stereotyping, but parallel effects on behavioral approach tendencies.

    Science.gov (United States)

    Lammers, Joris; Stoker, Janka I; Stapel, Diederik A

    2009-12-01

    How does power affect behavior? We posit that this depends on the type of power. We distinguish between social power (power over other people) and personal power (freedom from other people) and argue that these two types of power have opposite associations with independence and interdependence. We propose that when the distinction between independence and interdependence is relevant, social power and personal power will have opposite effects; however, they will have parallel effects when the distinction is irrelevant. In two studies (an experimental study and a large field study), we demonstrate this by showing that social power and personal power have opposite effects on stereotyping, but parallel effects on behavioral approach.

  15. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment. Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages…

  16. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  17. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    Science.gov (United States)

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  18. It Is Not Just about the Schedule: Key Factors in Effective Reference Desk Scheduling and Management

    Science.gov (United States)

    Sciammarella, Susan; Fernandes, Maria Isabel; McKay, Devin

    2008-01-01

    Reference desk scheduling is one of the most challenging tasks in the organizational structure of an academic library. The ability to turn this challenge into a workable and effective function lies with the scheduler and indirectly the cooperation of all librarians scheduled for reference desk service. It is the scheduler's sensitivity to such…

  19. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed. Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techniques…

  20. Schedules of Controlled Substances: Temporary Placement of ortho-Fluorofentanyl, Tetrahydrofuranyl Fentanyl, and Methoxyacetyl Fentanyl Into Schedule I. Temporary amendment; temporary scheduling order.

    Science.gov (United States)

    2017-10-26

    The Administrator of the Drug Enforcement Administration is issuing this temporary scheduling order to schedule the synthetic opioids, N-(2-fluorophenyl)-N-(1-phenethylpiperidin-4-yl)propionamide (ortho-fluorofentanyl or 2-fluorofentanyl), N-(1-phenethylpiperidin-4-yl)-N-phenyltetrahydrofuran-2-carboxamide (tetrahydrofuranyl fentanyl), and 2-methoxy-N-(1-phenethylpiperidin-4-yl)-N-phenylacetamide (methoxyacetyl fentanyl), into Schedule I. This action is based on a finding by the Administrator that the placement of ortho-fluorofentanyl, tetrahydrofuranyl fentanyl, and methoxyacetyl fentanyl into Schedule I of the Controlled Substances Act is necessary to avoid an imminent hazard to the public safety. As a result of this order, the regulatory controls and administrative, civil, and criminal sanctions applicable to Schedule I controlled substances will be imposed on persons who handle (manufacture, distribute, reverse distribute, import, export, engage in research, conduct instructional activities or chemical analysis, or possess), or propose to handle, ortho-fluorofentanyl, tetrahydrofuranyl fentanyl, and methoxyacetyl fentanyl.

  1. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computers, in which the whole sequence to be sorted can fit in the…

  2. DATA TRANSFER IN THE AUTOMATED SYSTEM OF PARALLEL DESIGN AND CONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Volkov Andrey Anatol'evich

    2012-12-01

    This article covers data transfer processes in the automated system of parallel design and construction. The authors consider the structure of reports used by contractors and clients when large-scale projects are implemented. All necessary items of information are grouped into three levels, and each level is described by certain attributes. The authors devote considerable attention to the integrated operational schedule, as it is the main tool of project management. Some recommendations concerning the forms and the content of reports are presented. Integrated automation of all operations is a necessary condition for the successful implementation of the new concept. The technical aspect of the notion of parallel design and construction also includes the client-to-server infrastructure that brings together all processes implemented by the parties involved in projects. This approach should be taken into consideration in the course of review of existing codes and standards to eliminate any inconsistency between the construction legislation and the practical experience of engineers involved in the process.

  3. Schedule-Aware Workflow Management Systems

    Science.gov (United States)

    Mans, Ronny S.; Russell, Nick C.; van der Aalst, Wil M. P.; Moleman, Arnold J.; Bakker, Piet J. M.

    Contemporary workflow management systems offer work-items to users through specific work-lists. Users select the work-items they will perform without having a specific schedule in mind. However, in many environments work needs to be scheduled and performed at particular times. For example, in hospitals many work-items are linked to appointments, e.g., a doctor cannot perform surgery without reserving an operating theater and making sure that the patient is present. One of the problems when applying workflow technology in such domains is the lack of calendar-based scheduling support. In this paper, we present an approach that supports the seamless integration of unscheduled (flow) and scheduled (schedule) tasks. Using CPN Tools we have developed a specification and simulation model for schedule-aware workflow management systems. Based on this a system has been realized that uses YAWL, Microsoft Exchange Server 2007, Outlook, and a dedicated scheduling service. The approach is illustrated using a real-life case study at the AMC hospital in the Netherlands. In addition, we elaborate on the experiences obtained when developing and implementing a system of this scale using formal techniques.

  4. Revisiting Symbiotic Job Scheduling

    OpenAIRE

    Eyerman , Stijn; Michaud , Pierre; Rogiest , Wouter

    2015-01-01

    Symbiotic job scheduling exploits the fact that in a system with shared resources, the performance of jobs is impacted by the behavior of other co-running jobs. By coscheduling combinations of jobs that have low interference, the performance of a system can be increased. In this paper, we investigate the impact of using symbiotic job scheduling for increasing throughput. We find that even for a theoretically optimal scheduler, this impact is very low, despite the subs...

  5. Development of parallel algorithms for electrical power management in space applications

    Science.gov (United States)

    Berry, Frederick C.

    1989-01-01

    The application of parallel techniques for electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements by using the Newton-Raphson load flow method. These independent local problems will produce results for voltage and power which can then be passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine if any correction is needed on the local problems. The coordinator problem is also solved by an iterative method, much like the local problem. The iterative method for the coordination problem will also be the Newton-Raphson method. Therefore, each iteration at the coordination level will result in new values for the local problems. The local problems will have to be solved again along with the coordinator problem until some convergence conditions are met.
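
    A minimal sketch of the decomposition-coordination idea, assuming a linearized (DC) load-flow analogy rather than the full Newton-Raphson formulation: the network is partitioned into blocks, each block solves its own equations independently (the parallelizable local problems), and the coordinator step is the exchange of coupling terms plus the convergence test. This amounts to block-Jacobi iteration; all system data below are hypothetical.

```python
import numpy as np

def dc_loadflow_blocks(B, P, blocks, tol=1e-10, max_iter=500):
    """Solve B @ theta = P by block-Jacobi: the local solves are mutually
    independent (each block could run on its own processor); the coordinator
    step is the exchange of boundary injections between iterations."""
    theta = np.zeros(len(P))
    for _ in range(max_iter):
        theta_new = theta.copy()
        for blk in blocks:                    # local problems (parallelizable)
            others = np.setdiff1d(np.arange(len(P)), blk)
            rhs = P[blk] - B[np.ix_(blk, others)] @ theta[others]  # coupling
            theta_new[blk] = np.linalg.solve(B[np.ix_(blk, blk)], rhs)
        if np.max(np.abs(theta_new - theta)) < tol:   # coordinator check
            return theta_new
        theta = theta_new
    return theta

# Hypothetical 4-bus system with the slack bus eliminated (made-up values).
B = np.array([[20.0, -8.0, -4.0],
              [-8.0, 25.0, -10.0],
              [-4.0, -10.0, 20.0]])
P = np.array([1.0, -2.0, 0.5])
blocks = [np.array([0, 1]), np.array([2])]    # two partitions
theta = dc_loadflow_blocks(B, P, blocks)
print(theta, np.allclose(B @ theta, P, atol=1e-6))
```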

  6. SPANR planning and scheduling

    Science.gov (United States)

    Freund, Richard F.; Braun, Tracy D.; Kussow, Matthew; Godfrey, Michael; Koyama, Terry

    2001-07-01

    SPANR (Schedule, Plan, Assess Networked Resources) is (i) a pre-run, off-line planning and (ii) a runtime, just-in-time scheduling mechanism. It is designed to support primarily commercial applications in that it optimizes throughput rather than individual jobs (unless they have highest priority). Thus it is a tool for a commercial production manager to maximize total work. First the SPANR Planner is presented showing the ability to do predictive 'what-if' planning. It can answer such questions as, (i) what is the overall effect of acquiring new hardware or (ii) what would be the effect of a different scheduler. The ability of the SPANR Planner to formulate in advance tree-trimming strategies is useful in several commercial applications, such as electronic design or pharmaceutical simulations. The SPANR Planner is demonstrated using a variety of benchmarks. The SPANR Runtime Scheduler (RS) is briefly presented. The SPANR RS can provide benefit for several commercial applications, such as airframe design and financial applications. Finally a design is shown whereby SPANR can provide scheduling advice to most resource management systems.

  7. Self-scheduling with Microsoft Excel.

    Science.gov (United States)

    Irvin, S A; Brown, H N

    1999-01-01

    Excessive time was being spent by the emergency department (ED) staff, head nurse, and unit secretary on a complex 6-week manual self-scheduling system. This issue, plus inevitable errors and staff dissatisfaction, resulted in a manager-led initiative to automate elements of the scheduling process using Microsoft Excel. The implementation of this initiative included: common coding of all 8-hour and 12-hour shifts, with each 4-hour period represented by a cell; the creation of a 6-week master schedule using the "count-if" function of Excel based on current staffing guidelines; staff time-off requests then entered by the department secretary; and fine-tuning of the schedule by the head nurse, with staff input, to provide even unit coverage. Outcomes of these changes included an increase in staff satisfaction, time saved by the head nurse, and staff work time saved because there was less arguing about the schedule. Ultimately, the automated self-scheduling method was expanded to the entire 700-bed hospital.
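
    The core of such a spreadsheet is simply counting, per time block, how many staff are signed up and comparing the count against the staffing guideline, which is what Excel's COUNTIF provides. A tiny illustrative equivalent (all names and numbers hypothetical):

```python
# Schedule grid: one row per nurse, one entry per 4-hour block ("X" = working).
schedule = {
    "Avery":  ["X", "X", "",  "",  "X", "X"],
    "Brooke": ["",  "X", "X", "X", "",  ""],
    "Chen":   ["X", "",  "X", "X", "X", ""],
}
required = [3, 2, 2, 1, 2, 1]   # staffing guideline per block (made up)

# The COUNTIF step: count staff per block, then flag under-covered blocks.
for block, need in enumerate(required):
    have = sum(1 for shifts in schedule.values() if shifts[block] == "X")
    status = "OK" if have >= need else f"SHORT by {need - have}"
    print(f"block {block}: {have}/{need} {status}")
```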

  8. Stochastic short-term maintenance scheduling of GENCOs in an oligopolistic electricity market

    International Nuclear Information System (INIS)

    Fotouhi Ghazvini, Mohammad Ali; Canizes, Bruno; Vale, Zita; Morais, Hugo

    2013-01-01

    Highlights: ► Decision making under uncertainty. ► Stochastic Mixed Integer Quadratic Programming applied to short-term maintenance scheduling. ► Outage scheduling in Oligopolistic electricity markets. ► Generation companies maintenance scheduling. -- Abstract: In the proposed model, the independent system operator (ISO) provides the opportunity for maintenance outage rescheduling of generating units before each short-term (ST) time interval. Long-term (LT) scheduling for 1 or 2 years in advance is essential for the ISO and the generation companies (GENCOs) to decide their LT strategies; however, it is not possible to be exactly followed and requires slight adjustments. The Cournot-Nash equilibrium is used to characterize the decision-making procedure of an individual GENCO for ST intervals considering the effective coordination with LT plans. Random inputs, such as parameters of the demand function of loads, hourly demand during the following ST time interval and the expected generation pattern of the rivals, are included as scenarios in the stochastic mixed integer program defined to model the payoff-maximizing objective of a GENCO. Scenario reduction algorithms are used to deal with the computational burden. Two reliability test systems were chosen to illustrate the effectiveness of the proposed model for the ST decision-making process for future planned outages from the point of view of a GENCO.

  9. A master surgical scheduling approach for cyclic scheduling in operating room departments

    NARCIS (Netherlands)

    van Oostrum, Jeroen M.; van Houdenhoven, M.; Hurink, Johann L.; Hans, Elias W.; Wullink, Gerhard; Kazemier, G.

    This paper addresses the problem of operating room (OR) scheduling at the tactical level of hospital planning and control. Hospitals repetitively construct operating room schedules, which is a time-consuming, tedious, and complex task. The stochasticity of the durations of surgical procedures…

  10. Preparing the Gaudi framework and the DIRAC WMS for multicore job submission

    International Nuclear Information System (INIS)

    Rauschmayr, N; Streit, A

    2014-01-01

    HEP applications need to adapt to the continuously increasing number of cores on modern CPUs. This must be done at different levels: the software must support parallelization, and the scheduling has to differ between multicore and single-core jobs. The LHCb software framework (GAUDI) provides a parallel prototype (GaudiMP), based on the multiprocessing approach. It allows a reduction of the overall memory footprint and coordinated access to data via separate reader and writer processes. A comparison between the parallel prototype and multiple independent Gaudi jobs with respect to CPU time and memory consumption will be shown. Furthermore, speedup must be predicted in order to find the limit beyond which the parallel prototype (GaudiMP) does not bring further scaling. This number must be known, as it indicates the point where new technologies must be introduced into the software framework. In order to reach further improvements in overall throughput, scheduling strategies for mixing parallel jobs can be applied, which allows the limitations in the speedup of the parallel prototype to be overcome. Those changes require modifications at the level of the Workload Management System (DIRAC).

  11. Immunization Schedules for Adults

    Science.gov (United States)


  12. Instant Childhood Immunization Schedule

    Science.gov (United States)


  13. Cosmic Shear With ACS Pure Parallels

    Science.gov (United States)

    Rhodes, Jason

    2002-07-01

    Small distortions in the shapes of background galaxies by foreground mass provide a powerful method of directly measuring the amount and distribution of dark matter. Several groups have recently detected this weak lensing by large-scale structure, also called cosmic shear. The high resolution and sensitivity of HST/ACS provide a unique opportunity to measure cosmic shear accurately on small scales. Using 260 parallel orbits in Sloan F775W we will measure for the first time: the cosmic shear variance on small scales, a parameter combination scaling as Omega_m^0.5 with signal-to-noise (s/n) 20, and the mass density Omega_m with s/n = 4. These measurements will be made at small angular scales where non-linear effects dominate the power spectrum, providing a test of the gravitational instability paradigm for structure formation. Measurements on these scales are not possible from the ground, because of the systematic effects induced by PSF smearing from seeing. Having many independent lines of sight reduces the uncertainty due to cosmic variance, making parallel observations ideal.

  14. Nontraditional work schedules for pharmacists.

    Science.gov (United States)

    Mahaney, Lynnae; Sanborn, Michael; Alexander, Emily

    2008-11-15

    Nontraditional work schedules for pharmacists at three institutions are described. The demand for pharmacists and health care in general continues to increase, yet significant material changes are occurring in the pharmacy work force. These changing demographics, coupled with historical vacancy rates and turnover trends for pharmacy staff, require an increased emphasis on workplace changes that can improve staff recruitment and retention. At William S. Middleton Memorial Veterans Affairs Hospital in Madison, Wisconsin, creative pharmacist work schedules and roles are now mainstays to the recruitment and retention of staff. The major challenge that such scheduling presents is the 8 hours needed to prepare a six-week schedule. Baylor Medical Center at Grapevine in Dallas, Texas, has a total of 45 pharmacy employees, and slightly less than half of the 24.5 full-time-equivalent staff work full-time, with most preferring to work one, two, or three days per week. As long as the coverage needs of the facility are met, Envision Telepharmacy in Alpine, Texas, allows almost any scheduling arrangement preferred by individual pharmacists or the pharmacist group covering the facility. Staffing involves a great variety of shift lengths and intervals, with shifts ranging from 2 to 10 hours. Pharmacy leaders must be increasingly aware of opportunities to provide staff with unique scheduling and operational enhancements that can provide for a better work-life balance. Compressed workweeks, job-sharing, and team scheduling were the most common types of alternative work schedules implemented at three different institutions.

  15. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed.
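
    To make the SENSE idea concrete, here is a minimal synthetic sketch for acceleration factor R = 2: with known coil sensitivity profiles, each aliased pixel is the sum of two true pixels half a field of view apart, and a small least-squares solve per pixel pair unfolds them. All data here are made up, and the geometry is reduced to one dimension for brevity.

```python
import numpy as np

N = 128                                    # "image" length (1-D for simplicity)
truth = np.zeros(N); truth[40:70] = 1.0    # synthetic object

S = np.stack([np.exp(-np.linspace(0, 2, N)),    # two smooth, made-up
              np.exp(-np.linspace(2, 0, N))])   # coil sensitivity profiles

# Undersampling by R=2 aliases pixel y with pixel y + N/2 in each coil image.
coil_imgs = S * truth                             # fully sampled coil images (2, N)
aliased = coil_imgs[:, :N//2] + coil_imgs[:, N//2:]   # folded images (2, N/2)

recon = np.zeros(N)
for y in range(N // 2):
    # SENSE unfolding: solve [S1(y) S1(y+N/2); S2(y) S2(y+N/2)] @ rho = aliased
    E = np.stack([S[:, y], S[:, y + N//2]], axis=1)   # 2x2 encoding matrix
    rho, *_ = np.linalg.lstsq(E, aliased[:, y], rcond=None)
    recon[y], recon[y + N//2] = rho

print("max reconstruction error:", np.abs(recon - truth).max())
```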

  16. Both the caspase CSP-1 and a caspase-independent pathway promote programmed cell death in parallel to the canonical pathway for apoptosis in Caenorhabditis elegans.

    Directory of Open Access Journals (Sweden)

    Daniel P Denning

    Caspases are cysteine proteases that can drive apoptosis in metazoans and have critical functions in the elimination of cells during development, the maintenance of tissue homeostasis, and responses to cellular damage. Although a growing body of research suggests that programmed cell death can occur in the absence of caspases, mammalian studies of caspase-independent apoptosis are confounded by the existence of at least seven caspase homologs that can function redundantly to promote cell death. Caspase-independent programmed cell death is also thought to occur in the invertebrate nematode Caenorhabditis elegans. The C. elegans genome contains four caspase genes (ced-3, csp-1, csp-2, and csp-3), of which only ced-3 has been demonstrated to promote apoptosis. Here, we show that CSP-1 is a pro-apoptotic caspase that promotes programmed cell death in a subset of cells fated to die during C. elegans embryogenesis. csp-1 is expressed robustly in late pachytene nuclei of the germline and is required maternally for its role in embryonic programmed cell deaths. Unlike CED-3, CSP-1 is not regulated by the APAF-1 homolog CED-4 or the BCL-2 homolog CED-9, revealing that csp-1 functions independently of the canonical genetic pathway for apoptosis. Previously we demonstrated that embryos lacking all four caspases can eliminate cells through an extrusion mechanism and that these cells are apoptotic. Extruded cells differ from cells that normally undergo programmed cell death not only by being extruded but also by not being engulfed by neighboring cells. In this study, we identify in csp-3; csp-1; csp-2 ced-3 quadruple mutants apoptotic cell corpses that fully resemble wild-type cell corpses: these caspase-deficient cell corpses are morphologically apoptotic, are not extruded, and are internalized by engulfing cells. We conclude that both caspase-dependent and caspase-independent pathways promote apoptotic programmed cell death and the phagocytosis of cell…

  17. BIM-BASED SCHEDULING OF CONSTRUCTION

    DEFF Research Database (Denmark)

    Andersson, Niclas; Büchmann-Slorup, Rolf

    2010-01-01

    The potential of BIM is generally recognized in the construction industry, but the practical application of BIM for management purposes is still limited among contractors. The objective of this study is to review the current scheduling process of construction in light of BIM ... and communicate. Scheduling on the detailed level, on the other hand, follows a stipulated approach to scheduling, i.e. the Last Planner System (LPS), which is characterized by involvement of all actors in the construction phase. Thus, the major challenge when implementing BIM-based scheduling is to improve...

  18. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
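
    The idea of parallelizing the E step carries over to simpler latent variable models. In the sketch below (a two-component Gaussian mixture standing in for the report's psychometric models, and parallelizing only the E step), worker processes compute expected sufficient statistics on chunks of the data, which the M step then combines.

```python
import numpy as np
from multiprocessing import Pool

def e_step_chunk(args):
    """Expected sufficient statistics for one data chunk (runs in a worker)."""
    x, mu, var, pi = args
    # responsibilities r_k(x) for a 2-component 1-D Gaussian mixture
    p = np.stack([pi[k] * np.exp(-(x - mu[k])**2 / (2 * var[k])) / np.sqrt(var[k])
                  for k in range(2)])
    r = p / p.sum(axis=0)
    # per-chunk sums needed by the M step: counts, first and second moments
    return r.sum(axis=1), r @ x, r @ x**2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2, 1, 5000), rng.normal(3, 0.5, 5000)])
    mu, var, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

    with Pool(4) as pool:
        for _ in range(50):
            chunks = [(c, mu, var, pi) for c in np.array_split(x, 4)]
            stats = pool.map(e_step_chunk, chunks)   # parallel E step
            n = sum(s[0] for s in stats)             # combine chunk statistics
            sx = sum(s[1] for s in stats)
            sxx = sum(s[2] for s in stats)
            mu = sx / n                              # M step (closed form)
            var = sxx / n - mu**2
            pi = n / n.sum()

    print(mu, var, pi)
```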

  19. A highly scalable massively parallel fast marching method for the Eikonal equation

    Science.gov (United States)

    Yang, Jianming; Stern, Frederick

    2017-03-01

    The fast marching method is a widely used numerical method for solving the Eikonal equation arising from a variety of scientific and engineering fields. It is long deemed inherently sequential and an efficient parallel algorithm applicable to large-scale practical applications is not available in the literature. In this study, we present a highly scalable massively parallel implementation of the fast marching method using a domain decomposition approach. Central to this algorithm is a novel restarted narrow band approach that coordinates the frequency of communications and the amount of computations extra to a sequential run for achieving an unprecedented parallel performance. Within each restart, the narrow band fast marching method is executed; simple synchronous local exchanges and global reductions are adopted for communicating updated data in the overlapping regions between neighboring subdomains and getting the latest front status, respectively. The independence of front characteristics is exploited through special data structures and augmented status tags to extract the masked parallelism within the fast marching method. The efficiency, flexibility, and applicability of the parallel algorithm are demonstrated through several examples. These problems are extensively tested on six grids with up to 1 billion points using different numbers of processes ranging from 1 to 65536. Remarkable parallel speedups are achieved using tens of thousands of processes. Detailed pseudo-codes for both the sequential and parallel algorithms are provided to illustrate the simplicity of the parallel implementation and its similarity to the sequential narrow band fast marching algorithm.
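
    For orientation, the sequential narrow-band fast marching method underlying the parallel algorithm looks roughly as follows (first-order upwind updates on a uniform 2-D grid with unit speed; the restarting, domain decomposition, and halo exchanges of the parallel version are not shown).

```python
import heapq
import numpy as np

def fast_march(n, sources, h=1.0):
    """First-arrival times T solving |grad T| = 1 on an n x n grid."""
    INF = np.inf
    T = np.full((n, n), INF)
    accepted = np.zeros((n, n), dtype=bool)
    band = []                                    # the narrow band (min-heap)
    for (i, j) in sources:
        T[i, j] = 0.0
        heapq.heappush(band, (0.0, i, j))

    def update(i, j):
        # Upwind quadratic: use the smallest neighbor value along each axis.
        a = min(T[i-1, j] if i > 0 else INF, T[i+1, j] if i < n-1 else INF)
        b = min(T[i, j-1] if j > 0 else INF, T[i, j+1] if j < n-1 else INF)
        a, b = min(a, b), max(a, b)
        if b - a >= h:                           # one-sided update
            return a + h
        return 0.5 * (a + b + np.sqrt(2 * h * h - (a - b)**2))

    while band:
        t, i, j = heapq.heappop(band)
        if accepted[i, j]:
            continue                             # stale heap entry
        accepted[i, j] = True                    # freeze the smallest trial value
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and not accepted[ni, nj]:
                tn = update(ni, nj)
                if tn < T[ni, nj]:
                    T[ni, nj] = tn
                    heapq.heappush(band, (tn, ni, nj))
    return T

T = fast_march(64, sources=[(0, 0)])
print(T[63, 63], "vs Euclidean", np.hypot(63, 63))  # first order overestimates
```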

  20. Diagnosing Autism Spectrum Disorders in Adults: The Use of Autism Diagnostic Observation Schedule (ADOS) Module 4

    Science.gov (United States)

    Bastiaansen, Jojanneke A.; Meffert, Harma; Hein, Simone; Huizinga, Petra; Ketelaars, Cees; Pijnenborg, Marieke; Bartels, Arnold; Minderaa, Ruud; Keysers, Christian; de Bildt, Annelies

    2011-01-01

    Autism Diagnostic Observation Schedule (ADOS) module 4 was investigated in an independent sample of high-functioning adult males with an autism spectrum disorder (ASD) compared to three specific diagnostic groups: schizophrenia, psychopathy, and typical development. ADOS module 4 proves to be a reliable instrument with good predictive value. It…

  1. A framework for grand scale parallelization of the combined finite discrete element method in 2d

    Science.gov (United States)

    Lei, Z.; Rougier, E.; Knight, E. E.; Munjiza, A.

    2014-09-01

    Within the context of rock mechanics, the Combined Finite-Discrete Element Method (FDEM) has been applied to many complex industrial problems such as block caving, deep mining techniques (tunneling, pillar strength, etc.), rock blasting, seismic wave propagation, packing problems, dam stability, rock slope stability, rock mass strength characterization problems, etc. The reality is that most of these were accomplished in a 2D and/or single processor realm. In this work a hardware independent FDEM parallelization framework has been developed using the Virtual Parallel Machine for FDEM, (V-FDEM). With V-FDEM, a parallel FDEM software can be adapted to different parallel architecture systems ranging from just a few to thousands of cores.

  2. Network scheduling at Belene NPP construction site

    International Nuclear Information System (INIS)

    Matveev, A.

    2010-01-01

    Four types of schedules differing in the level of their detail are singled out to enhance the efficiency of Belene NPP Project implementation planning and monitoring. The Level 1 Schedule, the Summary Integrated Overall Time Schedule (SIOTS), is an appendix to the EPC Contract; its main purpose is the large-scale presentation of current information on Project implementation. The Level 2 Schedule, the Integrated Overall Time Schedule (IOTS), is the contract schedule for the Contractor (ASE JSC) and their subcontractors; its principal purpose is work progress planning and monitoring and the analysis of the effect of activities on the progress of the Project as a whole. IOTS is the reporting schedule at the Employer-Contractor level. Level 3 Schedules, the Detail Time Schedules (DTS), are developed by those who actually perform the work and are agreed upon with Atomstroyexport JSC; their main purpose is the detailed planning of Atomstroyexport subcontractors' activities. DTS are the reporting schedules at the Contractor-Subcontractor level. Level 4 Schedules are the High Detail Time Schedules (HDTS), which are the day-to-day plans of work implementation and are developed, as a rule, for a week's time period. Each lower level time schedule details the activities of the higher level time schedule.

  3. Mechanics of curved surfaces, with application to surface-parallel cracks

    Science.gov (United States)

    Martel, Stephen J.

    2011-10-01

    The surfaces of many bodies are weakened by shallow enigmatic cracks that parallel the surface. A re-formulation of the static equilibrium equations in a curvilinear reference frame shows that a tension perpendicular to a traction-free surface can arise at shallow depths even under the influence of gravity. This condition occurs if σ11k1 + σ22k2 > ρg cos β, where k1 and k2 are the principal curvatures (negative if convex) at the surface, σ11 and σ22 are tensile (positive) or compressive (negative) stresses parallel to the respective principal curvature arcs, ρ is material density, g is gravitational acceleration, and β is the surface slope. The curvature terms do not appear in equilibrium equations in a Cartesian reference frame. Compression parallel to a convex surface thus can cause subsurface cracks to open. A quantitative test of the relationship above accounts for where sheeting joints (prominent shallow surface-parallel fractures in rock) are abundant and for where they are scarce or absent in the varied topography of Yosemite National Park, resolving key aspects of a classic problem in geology: the formation of sheeting joints. Moreover, since the equilibrium equations are independent of rheology, the relationship above can be applied to delamination or spalling caused by surface-parallel cracks in many materials.
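
    A worked check of the criterion with loosely dome-like, entirely hypothetical numbers: a convex surface has negative principal curvatures, so sufficiently strong surface-parallel compression (negative σ11 and σ22) drives the left-hand side above ρg cos β and permits surface-parallel cracks to open.

```python
import math

def surface_normal_tension(s11, s22, k1, k2, rho, beta_deg, g=9.81):
    """True if sigma11*k1 + sigma22*k2 > rho*g*cos(beta): tension can develop
    perpendicular to a traction-free surface, favoring surface-parallel cracks."""
    lhs = s11 * k1 + s22 * k2
    rhs = rho * g * math.cos(math.radians(beta_deg))
    return lhs > rhs, lhs, rhs

# Hypothetical granite dome: 20 MPa compression along both curvature arcs,
# convex curvature of radius ~500 m (k = -1/500 per meter), 20 degree slope.
opens, lhs, rhs = surface_normal_tension(
    s11=-20e6, s22=-20e6,        # Pa (compression is negative)
    k1=-1/500, k2=-1/500,        # 1/m (convex surface: negative)
    rho=2700, beta_deg=20)       # kg/m^3, degrees
print(opens, lhs, rhs)           # lhs = 80,000 Pa/m > rhs ~ 24,900 Pa/m
```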

  4. A multi-objective optimization problem for multi-state series-parallel systems: A two-stage flow-shop manufacturing system

    International Nuclear Information System (INIS)

    Azadeh, A.; Maleki Shoja, B.; Ghanei, S.; Sheikhalishahi, M.

    2015-01-01

    This research investigates a redundancy-scheduling optimization problem for a multi-state series parallel system. The system is a flow shop manufacturing system with multi-state machines. Each manufacturing machine may have different performance rates including perfect performance, decreased performance and complete failure. Moreover, warm standby redundancy is considered for the redundancy allocation problem. Three objectives are considered for the problem: (1) minimizing system purchasing cost, (2) minimizing makespan, and (3) maximizing system reliability. Universal generating function is employed to evaluate system performance and overall reliability of the system. Since the problem is in the NP-hard class of combinatorial problems, genetic algorithm (GA) is used to find optimal/near optimal solutions. Different test problems are generated to evaluate the effectiveness and efficiency of proposed approach and compared to simulated annealing optimization method. The results show the proposed approach is capable of finding optimal/near optimal solution within a very reasonable time. - Highlights: • A redundancy-scheduling optimization problem for a multi-state series parallel system. • A flow shop with multi-state machines and warm standby redundancy. • Objectives are to optimize system purchasing cost, makespan and reliability. • Different test problems are generated and evaluated by a unique genetic algorithm. • It locates optimal/near optimal solution within a very reasonable time
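
    The universal generating function (UGF) machinery mentioned above is compact enough to sketch: each element is represented as a map from performance rate to probability, and elements compose with min for series stages (flow limited by the weakest stage) or + for parallel elements (capacities add). The toy version below, with made-up rates, evaluates the probability that system output meets a demand.

```python
from collections import defaultdict

def compose(u, v, op):
    """Combine two UGFs: sum probabilities over all state pairs, mapping
    performances through `op` (min for series, + for parallel)."""
    w = defaultdict(float)
    for g1, p1 in u.items():
        for g2, p2 in v.items():
            w[op(g1, g2)] += p1 * p2
    return dict(w)

series   = lambda u, v: compose(u, v, min)                  # weakest stage limits
parallel = lambda u, v: compose(u, v, lambda a, b: a + b)   # capacities add

# Two parallel machines at stage 1, one machine at stage 2 (made-up data);
# each state is a (performance rate: probability) pair, e.g. nominal,
# degraded, or failed.
m1 = {100: 0.8, 50: 0.15, 0: 0.05}
m2 = {100: 0.7, 0: 0.3}
m3 = {150: 0.9, 75: 0.08, 0: 0.02}

system = series(parallel(m1, m2), m3)
demand = 120
reliability = sum(p for g, p in system.items() if g >= demand)
print(system, "P(output >= %d) = %.4f" % (demand, reliability))
```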

  5. Parents' Family Time and Work Schedules: The Split-Shift Schedule in Spain

    NARCIS (Netherlands)

    Gracia, P.; Kalmijn, M.

    2016-01-01

    This study used data on couples from the 2003 Spanish Time Use Survey (N = 1,416) to analyze how work schedules are associated with family, couple, parent–child, and non-family leisure activities. Spain is clearly an interesting case for the institutionalized split-shift schedule, a long lunch break

  6. Vectorization and parallelization of the finite strip method for dynamic Mindlin plate problems

    Science.gov (United States)

    Chen, Hsin-Chu; He, Ai-Fang

    1993-01-01

    The finite strip method is a semi-analytical finite element process which allows for a discrete analysis of certain types of physical problems by discretizing the domain of the problem into finite strips. This method decomposes a single large problem into m smaller independent subproblems when m harmonic functions are employed, thus yielding natural parallelism at a very high level. In this paper we address vectorization and parallelization strategies for the dynamic analysis of simply-supported Mindlin plate bending problems and show how to prevent potential conflicts in memory access during the assemblage process. The vector and parallel implementations of this method and the performance results of a test problem under scalar, vector, and vector-concurrent execution modes on the Alliant FX/80 are also presented.

  7. A bi-objective integer programming model for partly-restricted flight departure scheduling.

    Science.gov (United States)

    Zhong, Han; Guan, Wei; Zhang, Wenyi; Jiang, Shixiong; Fan, Lingling

    2018-01-01

    Most studies on the air traffic departure scheduling problem (DSP) deal with an independent airport whose departure traffic is not affected by surrounding airports, which, however, is not always the case. In reality, there exist cases where several commercial airports are closely located and one of them possesses a higher priority. During peak hours, the departure activities of the lower-priority airports are usually required to give way to those of the higher-priority airport. These giving-way requirements inflict a set of changes on the modeling of the departure scheduling problem with respect to the lower-priority airports. To the best of our knowledge, studies on DSP under this condition are scarce. Accordingly, this paper develops a bi-objective integer programming model to address the flight departure scheduling of the partly-restricted (e.g., lower-priority) airport among several adjacent airports. An adapted tabu search algorithm is designed to solve the problem. It is demonstrated through the case study of Tianjin Binhai International Airport in China that the proposed method can markedly improve operational efficiency, while still achieving superior equity and regularity among restricted flows.

  8. On the impact of communication complexity in the design of parallel numerical algorithms

    Science.gov (United States)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  9. Analyzing scheduling in the food-processing industry

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter

    2009-01-01

    Production scheduling has been widely studied in several research areas, resulting in a large number of methods, prescriptions, and approaches. However, the impact on scheduling practice seems relatively low. This is also the case in the food-processing industry, where industry-specific characteristics induce specific and complex scheduling problems. Based on ideas about decomposition of the scheduling task and the production process, we develop an analysis methodology for scheduling problems in food processing. This combines an analysis of structural (technological) elements of the production process with an analysis of the tasks of the scheduler. This helps to understand, describe, and structure scheduling problems in food processing, and forms a basis for improving scheduling and applying methods developed in literature. It also helps in evaluating the organisational structures

  10. A multi-group and preemptable scheduling of cloud resource based on HTCondor

    Science.gov (United States)

    Jiang, Xiaowei; Zou, Jiaheng; Cheng, Yaodong; Shi, Jingyan

    2017-10-01

    Due to the features of virtual machines (flexibility, easy control, and support for various system environments), more and more fields, including high energy physics, utilize virtualization technology to construct distributed systems from virtual resources. This paper introduces a method used in high energy physics that supports multiple resource groups and preemptable cloud resource scheduling, combining virtual machines with HTCondor (a batch system). It makes resource control more flexible and efficient and makes resource scheduling independent of job scheduling. Firstly, resources belong to different experiment groups, and user groups map to resource groups (the same as experiment groups) either one-to-one or many-to-one. To keep this grouping simple to manage, we designed a permission-control component that ensures different resource groups receive suitable jobs. Secondly, to elastically allocate resources to the appropriate resource group, it is necessary to schedule resources much like jobs are scheduled, so this paper designs a cloud resource scheduler that maintains a resource queue and allocates an appropriate amount of virtual resources to the requesting resource group. Thirdly, because resources can be occupied for a long time, they sometimes need to be preempted; this paper therefore adds a preemption function to the resource scheduling that implements resource preemption based on group priority. The preemption is soft: when virtual resources are preempted, jobs are not killed but held and rematched later. This is implemented with the help of HTCondor by storing the held job's information in the scheduler, releasing the job to idle status, and performing a second match. At IHEP (Institute of High Energy Physics), we have built a batch system based on HTCondor with a virtual resource pool based on OpenStack, and this paper will show some cases from the JUNO experiment

  11. A SPECT reconstruction method for extending parallel to non-parallel geometries

    International Nuclear Information System (INIS)

    Wen Junhai; Liang Zhengrong

    2010-01-01

    Due to its simplicity, parallel-beam geometry is usually assumed for the development of image reconstruction algorithms. The established reconstruction methodologies are then extended to fan-beam, cone-beam and other non-parallel geometries for practical application. This situation occurs for quantitative SPECT (single photon emission computed tomography) imaging in inverting the attenuated Radon transform. Novikov reported an explicit parallel-beam formula for the inversion of the attenuated Radon transform in 2000. Thereafter, a formula for fan-beam geometry was reported by Bukhgeim and Kazantsev (2002 Preprint N. 99 Sobolev Institute of Mathematics). At the same time, we presented a formula for varying focal-length fan-beam geometry. Sometimes, the reconstruction formula is so implicit that we cannot obtain the explicit reconstruction formula in the non-parallel geometries. In this work, we propose a unified reconstruction framework for extending parallel-beam geometry to any non-parallel geometry using ray-driven techniques. Studies by computer simulations demonstrated the accuracy of the presented unified reconstruction framework for extending parallel-beam to non-parallel geometries in inverting the attenuated Radon transform.

  12. Future aircraft networks and schedules

    Science.gov (United States)

    Shu, Yan

    2011-07-01

    Because of the importance of air transportation scheduling, the emergence of small aircraft and the vision of future fuel-efficient aircraft, this thesis focuses on the study of aircraft scheduling and network design involving multiple types of aircraft and flight services. It develops models and solution algorithms for the schedule design problem and analyzes the computational results. First, based on the current development of small aircraft and on-demand flight services, this thesis expands a business model for integrating on-demand flight services with the traditional scheduled flight services, and proposes a three-step approach to the design of aircraft schedules and networks from scratch under this model. In the first step, a frequency assignment model for scheduled flights that incorporates a passenger path choice model is created, together with a frequency assignment model for on-demand flights that incorporates a passenger mode choice model. In the second step, a rough fleet assignment model is constructed that determines a set of flight legs, each of which is assigned an aircraft type and a rough departure time. In the third step, a timetable model that determines an exact departure time for each flight leg is developed. Based on the models proposed in the three steps, this thesis creates schedule design instances that involve almost all the major airports and markets in the United States. The instances of the frequency assignment model are large-scale non-convex mixed-integer programming problems, and this dissertation develops an overall network structure and proposes iterative algorithms for solving them. The instances of both the rough fleet assignment model and the timetable model are large-scale mixed-integer programming problems, and this dissertation develops subproblem schemes for solving them. Based on these solution algorithms, this dissertation also presents

  13. Conception of Self-Construction Production Scheduling System

    Science.gov (United States)

    Xue, Hai; Zhang, Xuerui; Shimizu, Yasuhiro; Fujimura, Shigeru

    With the rapid innovation of information technology, many production scheduling systems have been developed. However, substantial customization to each individual production environment is required, making a large investment for development and maintenance indispensable. The direction in which scheduling systems are constructed should therefore change. The final objective of this research is to develop a system that builds itself by automatically extracting scheduling techniques from the daily production scheduling work, so that the required investment is reduced. This extraction mechanism should be applicable to various production processes for interoperability. Using the master information extracted by the system, production scheduling operators can be supported in carrying out scheduling work easily and accurately, without any restriction on scheduling operations. With this extraction mechanism in place, a scheduling system can be introduced without large customization expense. In this paper, a model for expressing a scheduling problem is first proposed. Guidelines for extracting the scheduling information and using the extracted information are then given, and some applied functions based on them are also proposed.

  14. Spontaneous bimanual independence during parallel tapping and sawing

    Science.gov (United States)

    Baber, Chris

    2017-01-01

    The performance of complex polyrhythms (rhythms where the left and right hand move at different rates) is usually the province of highly trained individuals. However, studies in which hand movement is guided haptically show that even novices can perform polyrhythms with no or only brief training. In this study, we investigated whether novices are able to tap with one hand by matching different rates of a metronome while sawing with the other hand. This experiment was based on the assumption that saw movement is controlled consistently at a predictable rate without the need for paying primary attention to it. It would follow that consciously matching different stipulated metronome rates with the other hand would result in the spontaneous performance of polyrhythms. Six experimental conditions were randomised: single-handed tapping and sawing as well as four bimanual conditions with expected ratios of 1:1 (performed with and without matching a metronome) as well as 3:4 and 4:3 (performed matching a metronome). Results showed that participants executed the saw movement at a consistent cycle duration of 0.44 [0.20] s to 0.51 [0.19] s across single and bimanual conditions, with no significant effect of the condition on the cycle duration (p = 0.315). Similarly, free tapping was executed at a cycle duration of 0.48 [0.22] s. In the bimanual conditions, we found that for a ratio of 4:3 (4 taps against 3 sawing cycles per measure), the observed ratio and the predicted ratio of 0.75 were not significantly different (p = 0.369), supporting our hypothesis of the spontaneous adoption of polyrhythms. However, for a ratio of 3:4 (3 taps against 4 sawing cycles per measure), the observed and predicted ratios differed (p = 0.016), with a trend towards synchronisation. Our findings show that bimanual independence when performing complex polyrhythms can in principle be achieved if the movement of one hand can be performed without paying much, if any, attention to it. In this paradigm

  15. A parallel calibration utility for WRF-Hydro on high performance computers

    Science.gov (United States)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    Successful modeling of complex hydrological processes comprises establishing an integrated hydrological model that simulates the hydrological processes in each water regime, calibrating and validating the model performance against observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large number of parameters, such as those in the input table files (GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL) and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. A parameter calibration tool specifically for automated calibration and uncertainty estimation of the WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool using the parallel version of the model-independent parameter estimation and uncertainty analysis tool PEST, enabling it to run on HPC systems with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest. The sensitivity and uncertainties are analyzed using the customized PEST tool we developed.

  16. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: replicated data decomposition, spatial decomposition, and force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods, such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination, are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
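
    Of the three decompositions named, replicated data is the simplest to sketch: every rank holds all particle positions, computes forces for its share of the pairs, and a global reduction combines the partial forces. The following is a minimal sketch assuming Lennard-Jones interactions and the mpi4py bindings; it is illustrative only and not taken from the reviewed work.

        # Run with e.g.: mpiexec -n 4 python lj_replicated.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n = 256
        rng = np.random.default_rng(42)      # same seed: positions replicated
        pos = rng.uniform(0.0, 10.0, size=(n, 3))

        # All unique pairs, dealt out round-robin to the ranks.
        pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
        local = np.zeros((n, 3))
        for i, j in pairs[rank::size]:
            r = pos[i] - pos[j]
            r2 = np.dot(r, r)
            inv6 = 1.0 / r2**3
            f = 24.0 * (2.0 * inv6**2 - inv6) / r2 * r   # LJ, eps = sigma = 1
            local[i] += f
            local[j] -= f

        # Replicated-data step: sum the partial force arrays on every rank.
        forces = np.zeros_like(local)
        comm.Allreduce(local, forces, op=MPI.SUM)
        if rank == 0:
            print("net force (should be ~0):", forces.sum(axis=0))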

  17. Range Scheduling Aid (RSA)

    Science.gov (United States)

    Logan, J. R.; Pulvermacher, M. K.

    1991-01-01

    Range Scheduling Aid (RSA) is presented in the form of viewgraphs. The following subject areas are covered: satellite control network; current and new approaches to range scheduling; MITRE tasking; RSA features; RSA display; constraint-based analytic capability; RSA architecture; and RSA benefits.

  18. Parallel Simulation of Loosely Timed SystemC/TLM Programs: Challenges Raised by an Industrial Case Study

    Directory of Open Access Journals (Sweden)

    Denis Becker

    2016-05-01

    Full Text Available Transaction-level models of systems-on-chip in SystemC are commonly used in industry to provide an early simulation environment. The SystemC standard imposes coroutine semantics for the scheduling of simulated processes, to ensure determinism and reproducibility of simulations. Because of this, however, sequential implementations have long been the only option available, and even now the reference implementation is sequential. With the increasing size and complexity of models, and the multiplication of computation cores on recent machines, the parallelization of SystemC simulations is a major research concern. There have been several proposals for SystemC parallelization, but most of them are limited to cycle-accurate models. In this paper we focus on loosely timed models, which are commonly used in industry. We present an industrial context and show that, unfortunately, most of the existing approaches to SystemC parallelization fundamentally cannot apply in this context. We support this claim with a set of measurements performed on a platform used in production at STMicroelectronics. This paper surveys existing techniques, presents a visualization and profiling tool and identifies unsolved challenges in the parallelization of SystemC models at transaction level.

  19. On the Lyapunov stability of a plane parallel convective flow of a binary mixture

    Directory of Open Access Journals (Sweden)

    Giuseppe Mulone

    1991-05-01

    Full Text Available The nonlinear stability of plane parallel convective flows of a binary fluid mixture in the Oberbeck-Boussinesq scheme is studied in the stress-free boundary case. Nonlinear stability conditions independent of Reynolds number are proved.

  20. The Parallels between Admissions to Independent Boarding Schools and Admissions to Selective Universities

    Science.gov (United States)

    Hillman, Nicholas

    2014-01-01

    In England, as in many other countries, selective universities have been under pressure to show there are no financial barriers for high-potential students from less-advantaged backgrounds. For much of the twentieth century, there was a similarly lively debate about how to open up Britain's prestigious independent boarding schools to a wider…

  1. Immunization Schedules for Infants and Children

    Science.gov (United States)

    Recommended immunization schedule for infants and children (birth through 6 years), based on ACIP vaccination recommendations (2018). Parents with any questions should talk to their doctor.

  2. The R package "sperrorest" : Parallelized spatial error estimation and variable importance assessment for geospatial machine learning

    Science.gov (United States)

    Schratz, Patrick; Herrmann, Tobias; Brenning, Alexander

    2017-04-01

    Computational and statistical prediction methods such as the support vector machine have gained popularity in remote-sensing applications in recent years and are often compared to more traditional approaches like maximum-likelihood classification. However, the accuracy assessment of such predictive models in a spatial context needs to account for the presence of spatial autocorrelation in geospatial data by using spatial cross-validation and bootstrap strategies instead of their more widely used non-spatial equivalents. The R package sperrorest by A. Brenning [IEEE International Geoscience and Remote Sensing Symposium, 1, 374 (2012)] provides a generic interface for performing (spatial) cross-validation of any statistical or machine-learning technique available in R. Since spatial statistical models as well as flexible machine-learning algorithms can be computationally expensive, parallel computing strategies are required to perform cross-validation efficiently. The most recent major release of sperrorest therefore comes with two new features (aside from improved documentation). The first is the parallelized version of sperrorest(), parsperrorest(). This function features two parallel modes to greatly speed up cross-validation runs. Both parallel modes are platform independent and provide progress information. par.mode = 1 relies on the pbapply package and, depending on the platform, calls parallel::mclapply() or parallel::parApply() in the background; forking is used on Unix systems, while Windows systems use a cluster approach for parallel execution. par.mode = 2 uses the foreach package to perform parallelization, a different approach to cluster parallelization than that of the parallel package. In summary, the robustness of parsperrorest() is increased with the implementation of two independent parallel modes. A new way of partitioning the data in sperrorest is provided by partition.factor.cv(). This function gives the user the

  3. A Formal Product-Line Engineering Approach for Schedulers

    NARCIS (Netherlands)

    Orhan, Güner; Aksit, Mehmet; Rensink, Arend; Jololian, Leon; Robbins, David E.; Fernandes, Steven L.

    2017-01-01

    Scheduling techniques have been applied to a large category of software systems, such as processor scheduling in operating systems, car scheduling in elevator systems, facility scheduling at airports, antenna scheduling in radar systems, and scheduling of events, control signals and data in

  4. Robust and Flexible Scheduling with Evolutionary Computation

    DEFF Research Database (Denmark)

    Jensen, Mikkel T.

    Over the last ten years, there have been numerous applications of evolutionary algorithms to a variety of scheduling problems. Like most other research on heuristic scheduling, the primary aim of the research has been on deterministic formulations of the problems. This is in contrast to real-world scheduling problems, which are usually not deterministic. Usually, at the time the schedule is made, some information about the problem and processing environment is available, but this information is uncertain and likely to change during schedule execution. Changes frequently encountered in scheduling environments include machine breakdowns, uncertain processing times, workers getting sick, materials being delayed and the appearance of new jobs. These possible environmental changes mean that a schedule which was optimal for the information available at the time of scheduling can end up being highly...
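
    The central idea, evolving schedules that remain good when the environment changes, can be made concrete in a few lines: candidate schedules are job permutations, and fitness is the makespan averaged over sampled disturbances (here, noisy processing times) rather than the deterministic makespan. This toy sketch is not the dissertation's algorithm; the instance, the noise model, and the evolutionary parameters are all assumed.

        import random

        random.seed(1)
        n_jobs, n_machines = 8, 2
        base = [[random.randint(2, 9) for _ in range(n_machines)]
                for _ in range(n_jobs)]

        def flowshop_makespan(perm, times):
            """Permutation-flowshop makespan via the standard recurrence."""
            done = [0.0] * n_machines
            for j in perm:
                done[0] += times[j][0]
                for m in range(1, n_machines):
                    done[m] = max(done[m], done[m - 1]) + times[j][m]
            return done[-1]

        def robust_fitness(perm, samples=20, noise=0.3):
            """Expected makespan under multiplicative processing-time noise."""
            total = 0.0
            for _ in range(samples):
                noisy = [[t * random.uniform(1.0, 1.0 + noise) for t in row]
                         for row in base]
                total += flowshop_makespan(perm, noisy)
            return total / samples

        def mutate(perm):
            """Swap mutation on a permutation."""
            a, b = random.sample(range(n_jobs), 2)
            child = perm[:]
            child[a], child[b] = child[b], child[a]
            return child

        # (mu + lambda)-style evolution on permutations.
        pop = [random.sample(range(n_jobs), n_jobs) for _ in range(20)]
        for gen in range(50):
            pop += [mutate(random.choice(pop)) for _ in range(20)]
            pop.sort(key=robust_fitness)
            pop = pop[:20]
        print("best:", pop[0], "expected makespan:", robust_fitness(pop[0]))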

  5. Enhancing parallelism of tile bidiagonal transformation on multicore architectures using tree reduction

    KAUST Repository

    Ltaief, Hatem

    2012-01-01

    The objective of this paper is to enhance the parallelism of the tile bidiagonal transformation using tree reduction on multicore architectures. First introduced by Ltaief et al. [LAPACK Working Note #247, 2011], the bidiagonal transformation using tile algorithms with a two-stage approach has shown very promising results on square matrices. However, for tall and skinny matrices, the inherent problem of processing the panel in a domino-like fashion generates unnecessary sequential tasks. By using tree reduction, the panel is split horizontally, which creates another dimension of parallelism and engenders many concurrent tasks to be dynamically scheduled on the available cores. The results reported in this paper are very encouraging. The new tile bidiagonal transformation, targeting tall and skinny matrices, outperforms the state-of-the-art numerical linear algebra libraries LAPACK V3.2 and Intel MKL ver. 10.3 by up to a 29-fold speedup, and the standard two-stage PLASMA BRD by up to a 20-fold speedup, on an eight-socket hexa-core AMD Opteron multicore shared-memory system. © 2012 Springer-Verlag.

  6. F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming

    Science.gov (United States)

    DiNucci, David C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Parallel programming is still based upon antiquated, sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture-dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level, graphical, architecture-independent parallel language called Software Cabling is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).

  7. The development of KMRR schedule and progress control system (KSPCS) for the master schedule of KMRR project

    International Nuclear Information System (INIS)

    Choi, Chang Woong; Lee, Tae Joon; Kim, Joon Yun; Cho, Yun Ho; Hah, Jong Hyun

    1993-07-01

    This report describes the development of a computerized schedule and progress control system for the master schedule of the KMRR project, using ARTEMIS 7000/386 CM (Ver. 7.4.2) and based on project management theory (PERT/CPM, PDM, and S-curves). The system has been used efficiently for the KMRR master schedule and will be utilized for the detailed scheduling of the KMRR project. (Author) 23 refs., 26 figs., 52 tabs

  8. The development of KMRR schedule and progress control system (KSPCS) for the master schedule of KMRR project

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Chang Woong; Lee, Tae Joon; Kim, Joon Yun; Cho, Yun Ho; Hah, Jong Hyun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1993-07-01

    This report describes the development of a computerized schedule and progress control system for the master schedule of the KMRR project, using ARTEMIS 7000/386 CM (Ver. 7.4.2) and based on project management theory (PERT/CPM, PDM, and S-curves). The system has been used efficiently for the KMRR master schedule and will be utilized for the detailed scheduling of the KMRR project. (Author) 23 refs., 26 figs., 52 tabs.

  9. 78 FR 21818 - Schedules of Controlled Substances: Placement of Methylone Into Schedule I

    Science.gov (United States)

    2013-04-12

    ... methamphetamine, and MDMA, Schedule I and II substances. These effects included elevated body temperature ... of reuptake of monoamines, and in vivo studies (microdialysis, locomotor activity, body temperature) ... Yet another commenter claimed that Schedule I placement would "cripple efforts at learning," make it ...

  10. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message-passing overhead. Performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
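
    The barrel-sort structure (each processor owns one contiguous key range, keys are routed to their owners, then each range is sorted locally and the results concatenated) can be sketched schematically. This Python rendering is not the iPSC/860 implementation; operating-system processes stand in for the medium-scale parallel nodes.

        from multiprocessing import Pool
        import random

        def sort_bucket(bucket):
            return sorted(bucket)      # local sort on one "processor"

        def barrel_sort(keys, p, key_max):
            # Each of the p processors owns one contiguous key range (barrel).
            width = (key_max + p) // p
            buckets = [[] for _ in range(p)]
            for k in keys:             # routing phase: send keys to owners
                buckets[min(k // width, p - 1)].append(k)
            with Pool(p) as pool:      # local sorting phase, in parallel
                sorted_buckets = pool.map(sort_bucket, buckets)
            return [k for b in sorted_buckets for k in b]

        if __name__ == "__main__":
            data = [random.randrange(10**6) for _ in range(100_000)]
            assert barrel_sort(data, p=8, key_max=10**6) == sorted(data)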

  11. Load balancing in highly parallel processing of Monte Carlo code for particle transport

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Takemiya, Hiroshi; Kawasaki, Takuji

    1998-01-01

    In parallel processing of Monte Carlo (MC) codes for neutron, photon and electron transport problems, particle histories are assigned to processors, making use of the independence of the calculation for each particle. Although the main part of an MC code is easily parallelized this way, it is necessary, and in practice difficult, to optimize the load balancing in order to attain a high speedup ratio in highly parallel processing. In fact, the speedup ratio for 128 processors remained at only about one hundred on the test bed used for the performance evaluation. Through parallel processing of the MCNP code, which is widely used in the nuclear field, it is shown that static load balancing makes it difficult to attain high performance, especially in neutron transport problems, and that a load-balancing method which dynamically changes the number of assigned particles, minimizing the sum of the computational and communication costs, overcomes the difficulty, resulting in roughly a fifteen percent reduction in execution time. (author)
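
    The contrast between static and dynamic history assignment can be sketched directly: rather than giving each processor a fixed block of histories up front, histories are handed out in small batches so that fast workers automatically draw more work. A minimal Python sketch with a dummy history whose cost varies to mimic uneven particle tracking; the batch size and worker count are arbitrary choices, not values from the paper.

        from multiprocessing import Pool
        import random

        def track_history(seed):
            """Dummy particle history; cost varies from particle to particle."""
            rng = random.Random(seed)
            tally = 0.0
            for _ in range(rng.randint(10, 10_000)):   # uneven work
                tally += rng.random()
            return tally

        if __name__ == "__main__":
            n_histories = 2_000
            with Pool(processes=8) as pool:
                # chunksize=1 approximates dynamic load balancing: a worker
                # fetches a new history as soon as it finishes the last one.
                tallies = pool.imap_unordered(track_history,
                                              range(n_histories), chunksize=1)
                total = sum(tallies)
            print("mean tally per history:", total / n_histories)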

  12. Energy-saving scheme based on downstream packet scheduling in ethernet passive optical networks

    Science.gov (United States)

    Zhang, Lincong; Liu, Yejun; Guo, Lei; Gong, Xiaoxue

    2013-03-01

    With increasing network sizes, the energy consumption of Passive Optical Networks (PONs) has grown significantly, so it is important to design effective energy-saving schemes for PONs. Generally, energy-saving schemes have focused on putting low-loaded Optical Network Units (ONUs) to sleep, which tends to introduce large packet delays. Further, the traditional ONU sleep modes cannot put the transmitter and receiver to sleep independently, even when neither is required to transmit or receive packets; clearly, this wastes energy. Thus, in this paper, we propose an Energy-Saving scheme based on downstream Packet Scheduling (ESPS) in Ethernet PON (EPON). First, we design an algorithm and a rule for downstream packet scheduling at the inter- and intra-ONU levels, respectively, to reduce the downstream packet delay. We then propose a hybrid sleep mode that contains not only an ONU deep sleep mode but also independent sleep modes for the transmitter and the receiver, ensuring that the energy consumed by the ONUs is minimal. To realize the hybrid sleep mode, a modified GATE control message is designed that carries 10 time points for the sleep processes. In ESPS, the 10 time points are calculated from the allocated bandwidths in both the upstream and the downstream. The simulation results show that ESPS outperforms the traditional Upstream Centric Scheduling (UCS) scheme in terms of energy consumption and average delay for both real-time and non-real-time downstream packets. The simulation results also show that the average energy consumption of each ONU in larger networks is less than that in smaller ones; hence, our ESPS is better suited to larger networks.

  13. A Procedure for scheduling and setting processing priority of MC requests

    CERN Document Server

    Balcar, Stepan

    2013-01-01

    My project consists of designing and programming the basis of an open system to help schedule the Monte Carlo production requests needed by CMS physicists for data analysis within the CMS collaboration. A primary requirement was to create a web interface that would be portable and independent of the control logic of the system. Another goal of the project was to build a scheduler for Monte Carlo production planning and to design and program the interfaces between the various logical blocks of the system. Introduction: Many research groups at CERN specializing in different areas of particle physics work with CMS. They are mostly scientists working at universities or research institutes in their countries. Their research consists of constructing models of elementary particles and subsequently verifying the behavior of these models experimentally. All these groups create MC production requests which are to be executed using computing resources located at CERN and other institutes. T...

  14. Decentralized Ground Staff Scheduling

    DEFF Research Database (Denmark)

    Sørensen, M. D.; Clausen, Jens

    2002-01-01

    Decentralized ground staff scheduling is investigated. The airport terminal is divided into zones, where each zone consists of a set of stands geographically next to each other. Staff are assigned to work in only one zone, and the staff scheduling is planned decentrally for each zone. The advantage of this approach is that the staff work in a smaller area of the terminal and thus spend less time walking between stands. When planning decentrally, the allocation of stands to flights influences the staff scheduling, since the workload in a zone depends on which flights are allocated to stands in the zone. Hence solving the problem depends not only on the actual stand allocation but also on the number of zones and their layout. A mathematical model of the problem is proposed, which integrates the stand allocation and the staff scheduling. A heuristic solution method is developed and applied to a real case from British Airways, London...

  15. Schedule control in Ling Ao nuclear power project

    International Nuclear Information System (INIS)

    Xie Ahai

    2007-01-01

    Ling Ao Nuclear Power Station (LANP) is the first nuclear power station built through self-reliance in China, with a capacity of 2 x 990 MWe. The results of quality control, schedule control and cost control are satisfactory. The commercial operation dates of Unit 1 and Unit 2 were 28 May 2002 and 8 January 2003 respectively, 48 days and 66 days ahead of the project schedule. This paper presents the practices of the self-reliant schedule control system at LANP. The paper includes 10 sections: the schedule control system; targets of schedule control; schedule control at the early stage of the project; construction schedule; scheduling practice; Point curves; schedule control of design and procurement; a good practice of construction schedule control on site; commissioning and startup schedule; and schedule control culture. Three figures are attached. The main contents of the self-reliant schedule control system are as follows: drawing up reasonable schedules and targets; setting up management mechanisms and procedures; organizing a strong project management team; establishing a close monitoring system; and providing timely progress reports and statistics. Five kinds of schedule control targets are introduced: bar-chart schedules; milestones; Point curves; interface management; the hydraulic test schedule of auxiliary piping loops; and the EMR/EMC/EESR issuance schedules. Six levels of bar-chart schedules were adopted at LANP, but bar-chart schedules were not satisfactory for the complicated erection conditions on site, even with six levels, so a kind of Point curve was developed, whose advantages are explained. The scheduling method of three elements (activity, duration, logic) adopted at LANP is introduced. The duration of each piping activity in the LANP level-2 project schedule was calculated based on the relevant working Point quantities. The analysis and adjustment of Point curves are illustrated, i.e. the balance of monthly quantities; the possible production at peak load

  16. Multiuser switched diversity scheduling schemes

    KAUST Repository

    Shaqfeh, Mohammad; Alnuweiri, Hussein M.; Alouini, Mohamed-Slim

    2012-01-01

    Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that a slight reduction in the prospective multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched-diversity systems and compare it with the rate region of full-feedback multiuser diversity systems. We also propose a novel proportional-fair multiuser switched-based scheduling scheme and demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz of the ultimate network capacity of full-feedback systems in Rayleigh fading conditions. © 2012 IEEE.

  17. Multiuser switched diversity scheduling schemes

    KAUST Repository

    Shaqfeh, Mohammad

    2012-09-01

    Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that a slight reduction in the prospective multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched-diversity systems and compare it with the rate region of full-feedback multiuser diversity systems. We also propose a novel proportional-fair multiuser switched-based scheduling scheme and demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz of the ultimate network capacity of full-feedback systems in Rayleigh fading conditions. © 2012 IEEE.

  18. Modularized Parallel Neutron Instrument Simulation on the TeraGrid

    International Nuclear Information System (INIS)

    Chen, Meili; Cobb, John W.; Hagen, Mark E.; Miller, Stephen D.; Lynch, Vickie E.

    2007-01-01

    In order to build a bridge between the TeraGrid (TG), a national-scale cyberinfrastructure resource, and neutron science, the Neutron Science TeraGrid Gateway (NSTG) is focused on introducing productive HPC usage to the neutron science community, primarily the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL). Monte Carlo simulations are used as a powerful tool for instrument design and optimization at SNS. One of the successful efforts of a collaboration team composed of NSTG HPC experts and SNS instrument scientists is the development of a software facility named PSoNI, Parallelizing Simulations of Neutron Instruments. By parallelizing the traditional serial instrument simulation on TeraGrid resources, PSoNI quickly computes full instrument simulations at sufficient statistical levels for instrument design. With SNS successfully commissioned, three of the five commissioned instruments in the SNS target station will be available for initial users by the end of 2007. Advanced instrument studies, proposal feasibility evaluation, and experiment planning are on the immediate schedule of SNS, which pose further requirements on fast instrument simulation, such as flexibility and high runtime efficiency. PSoNI has been redesigned to meet these new challenges, and a preliminary version has been developed on TeraGrid. This paper explores the motivation and goals of the new design and the improved software structure. Further, it describes the new features realized, as seen from MPI-parallelized McStas running high-resolution design simulations of the SEQUOIA and BSS instruments at SNS. A discussion of future work, targeting fast simulation for automated experiment adjustment and comparison of models to data in analysis, is also presented

  19. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far do not match the effort invested. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  20. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  1. Optimizing the Steel Plate Storage Yard Crane Scheduling Problem Using a Two Stage Planning/Scheduling Approach

    DEFF Research Database (Denmark)

    Hansen, Anders Dohn; Clausen, Jens

    This paper presents the Steel Plate Storage Yard Crane Scheduling Problem. The task is to generate a schedule for two gantry cranes sharing tracks. The schedule must comply with a number of constraints and at the same time be cost efficient. We propose some ideas for a two stage planning...

  2. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    Science.gov (United States)

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
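
    For readers unfamiliar with the DHT underlying these algorithms: it can be obtained from the FFT, since the Hartley kernel cas(x) = cos(x) + sin(x) gives DHT(x) = Re(FFT(x)) - Im(FFT(x)), and a row-column pass yields the separable 2-D variant. The sketch below illustrates only this identity, not the paper's multirate convolver banks.

        import numpy as np

        def dht(x):
            """1-D discrete Hartley transform via the FFT identity
            DHT(x)_k = Re(FFT(x))_k - Im(FFT(x))_k."""
            X = np.fft.fft(x)
            return X.real - X.imag

        def dht2_separable(img):
            """Separable (row-column) 2-D DHT: 1-D DHT on columns, then rows."""
            return np.apply_along_axis(dht, 1, np.apply_along_axis(dht, 0, img))

        # Self-check of the 1-D identity against the cas-kernel definition.
        rng = np.random.default_rng(0)
        x = rng.standard_normal(8)
        n = np.arange(8)
        ang = 2 * np.pi * np.outer(n, n) / 8
        cas = np.cos(ang) + np.sin(ang)
        assert np.allclose(cas @ x, dht(x))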

  3. The triangle scheduling problem

    NARCIS (Netherlands)

    Dürr, Christoph; Hanzálek, Zdeněk; Konrad, Christian; Seddik, Yasmina; Sitters, R.A.; Vásquez, Óscar C.; Woeginger, Gerhard

    2017-01-01

    This paper introduces a novel scheduling problem, where jobs occupy a triangular shape on the time line. This problem is motivated by scheduling jobs with different criticality levels. A measure is introduced, namely the binary tree ratio. It is shown that the Greedy algorithm solves the problem to

  4. An inherently parallel method for solving discretized diffusion equations

    International Nuclear Information System (INIS)

    Eccleston, B.R.; Palmer, T.S.

    1999-01-01

    A Monte Carlo approach to solving linear systems of equations is being investigated in the context of the solution of discretized diffusion equations. While the technique was originally devised decades ago, changes in computer architectures (namely, massively parallel machines) have driven the authors to revisit this technique. There are a number of potential advantages to this approach: (1) Analog Monte Carlo techniques are inherently parallel; this is not necessarily true of today's more advanced linear equation solvers (multigrid, conjugate gradient, etc.); (2) Some forms of this technique are adaptive in that they allow the user to specify locations in the problem where resolution is of particular importance and to concentrate the work at those locations; and (3) These techniques permit the solution of very large systems of equations in that matrix elements need not be stored. The user could trade calculational speed for storage if elements of the matrix are calculated on the fly. The goal of this study is to compare the parallel performance of Monte Carlo linear solvers to that of a more traditional parallelized linear solver. The authors observe the linear speedup that they expect from the Monte Carlo algorithm, given that there is no domain decomposition to cause significant communication overhead. Overall, PETSc outperforms the Monte Carlo solver for the test problem. The PETSc parallel performance improves with larger numbers of unknowns for a given number of processors. Parallel performance of the Monte Carlo technique is independent of the size of the matrix and the number of processes. The authors are investigating modifications to the scheme to accommodate matrix problems with positive off-diagonal elements, and are currently coding an on-the-fly version of the algorithm to investigate the solution of very large linear systems
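
    The Monte Carlo linear solver in question is typically the von Neumann-Ulam scheme: split the system so that x = Hx + f with a convergent H, then estimate a component of x by random walks whose weights track the traversed entries of H. A minimal sketch for a strongly diagonally dominant tridiagonal system (diffusion with absorption); the splitting, the termination probability, and the instance are illustrative assumptions, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(3)

        n = 20
        A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)

        # Jacobi splitting: x = H x + f with H = I - D^{-1} A, f = D^{-1} b.
        d = np.diag(A)
        H = np.eye(n) - A / d[:, None]
        f = b / d

        def mc_component(i, walks=20_000, p_stop=0.5):
            """Estimate x_i by forward random walks: score f at every visited
            state, reweighting by the traversed entries of H."""
            est = 0.0
            for _ in range(walks):
                s, w = i, 1.0
                est += w * f[s]
                while rng.random() > p_stop:          # continue the walk
                    nz = np.flatnonzero(H[s])         # reachable states
                    t = rng.choice(nz)                # uniform transition
                    w *= H[s, t] * len(nz) / (1.0 - p_stop)
                    s = t
                    est += w * f[s]
            return est / walks

        i = n // 2
        print("MC estimate:", mc_component(i))
        print("exact      :", np.linalg.solve(A, b)[i])

    Although the sketch stores H for brevity, each walk only ever touches one row of the matrix at a time, which is what makes the "on the fly" variant mentioned in the abstract possible.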

  5. A System for Automatically Generating Scheduling Heuristics

    Science.gov (United States)

    Morris, Robert

    1996-01-01

    The goal of this research is to improve the performance of automated schedulers by designing and implementing an algorithm that automatically generates heuristics for selecting a schedule. The particular application chosen to apply this method solves the problem of scheduling telescope observations and is called the Associate Principal Astronomer. The input to the APA scheduler is a set of observation requests submitted by one or more astronomers. Each observation request specifies an observation program as well as the scheduling constraints and preferences associated with the program. The scheduler employs greedy heuristic search to synthesize a schedule that satisfies all hard constraints of the domain and achieves a good score with respect to the soft constraints, expressed as an objective function established by an astronomer-user.
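
    Greedy heuristic search of this kind is easy to sketch: at each time slot, among the requests whose hard constraints are satisfied, take the one scoring highest on the soft-constraint objective. The request fields and the scoring function below are hypothetical, not those of the APA.

        # Hard constraints: earliest/latest slot. Soft: preferred slot, priority.
        requests = [
            {"id": "R1", "earliest": 0, "latest": 5, "preferred": 2, "priority": 3.0},
            {"id": "R2", "earliest": 1, "latest": 3, "preferred": 1, "priority": 5.0},
            {"id": "R3", "earliest": 0, "latest": 9, "preferred": 8, "priority": 1.0},
            {"id": "R4", "earliest": 2, "latest": 6, "preferred": 2, "priority": 4.0},
        ]

        def score(req, slot):
            """Soft-constraint objective: priority, discounted by distance
            from the preferred slot."""
            return req["priority"] - 0.5 * abs(slot - req["preferred"])

        def greedy_schedule(requests, n_slots):
            schedule, pending = {}, list(requests)
            for slot in range(n_slots):
                feasible = [r for r in pending
                            if r["earliest"] <= slot <= r["latest"]]
                if feasible:
                    best = max(feasible, key=lambda r: score(r, slot))
                    schedule[slot] = best["id"]
                    pending.remove(best)
            return schedule

        print(greedy_schedule(requests, n_slots=10))
        # {0: 'R1', 1: 'R2', 2: 'R4', 3: 'R3'}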

  6. Parallel computation with molecular-motor-propelled agents in nanofabricated networks.

    Science.gov (United States)

    Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V

    2016-03-08

    The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
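
    The network encoding can be mimicked in software: agents move through the device layer by layer, and at each layer a junction either adds the next set element to the running sum or skips it, so the exits reached are exactly the subset sums. A tiny Python sketch for the benchmark instance {2, 5, 9} mentioned above; the set-based breadth-first sweep stands in for the physical, massively parallel agents.

        def subset_sums(elements):
            """Sweep the subset-sum network layer by layer: at each layer an
            agent either skips the element (same sum) or adds it."""
            frontier = {0}                  # all agents enter with sum 0
            for e in elements:
                frontier |= {s + e for s in frontier}
            return frontier

        print(sorted(subset_sums([2, 5, 9])))
        # [0, 2, 5, 7, 9, 11, 14, 16] -> the reachable exits of the device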

  7. Applying dynamic priority scheduling scheme to static systems of pinwheel task model in power-aware scheduling.

    Science.gov (United States)

    Seol, Ye-In; Kim, Young-Kuk

    2014-01-01

    Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, which is a static and predictable task model applicable to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling in the pinwheel task model, but in this paper we show that the results on power-aware scheduling under dynamic priority scheduling can be applied to the pinwheel task model. This method is more effective at saving energy than adopting the previous static-priority scheduling methods and, since the system remains static, it is tractable and applicable to small embedded or ubiquitous computing. We also introduce a novel power-aware scheduling algorithm which exploits all slack under preemptive earliest-deadline-first (EDF) scheduling, which is optimal on uniprocessor systems. The dynamic priority method presented in this paper can be applied directly to static systems of the pinwheel task model. The simulation results show that the proposed algorithm, with algorithmic complexity of O(n), reduces energy consumption by 10-80% over the existing algorithms.
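
    The mechanism DVS exploits can be illustrated with the textbook static baseline: under preemptive EDF a periodic task set is schedulable whenever total utilization is at most 1, so the processor may run at a normalized speed equal to the utilization, and with dynamic power scaling roughly as speed cubed the savings follow. A minimal sketch under those standard assumptions; it is not the paper's O(n) slack-exploiting algorithm.

        # Periodic tasks: (worst-case execution time at full speed, period).
        tasks = [(1.0, 8.0), (2.0, 10.0), (1.0, 20.0)]

        util = sum(c / p for c, p in tasks)   # EDF-schedulable iff util <= 1
        speed = util                          # lowest constant feasible speed
        assert speed <= 1.0

        # Dynamic power ~ speed**3; busy time stretches by 1/speed, so energy
        # scales as speed**3 * (busy / speed) = speed**2 * busy.
        busy_full = util                      # busy fraction at full speed
        energy_full = 1.0**3 * busy_full
        energy_dvs = speed**3 * (busy_full / speed)

        print(f"utilization     : {util:.3f}")
        print(f"energy at f_max : {energy_full:.3f}")
        print(f"energy with DVS : {energy_dvs:.3f} "
              f"({100 * (1 - energy_dvs / energy_full):.1f}% saved)")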

  8. Parallelized implicit propagators for the finite-difference Schrödinger equation

    Science.gov (United States)

    Parker, Jonathan; Taylor, K. T.

    1995-08-01

    We describe the application of block Gauss-Seidel and block Jacobi iterative methods to the design of implicit propagators for finite-difference models of the time-dependent Schrödinger equation. The block-wise iterative methods discussed here are mixed direct-iterative methods for solving simultaneous equations, in the sense that direct methods (e.g. LU decomposition) are used to invert certain block sub-matrices, and iterative methods are used to complete the solution. We describe parallel variants of the basic algorithm that are well suited to the medium- to coarse-grained parallelism of work-station clusters, and MIMD supercomputers, and we show that under a wide range of conditions, fine-grained parallelism of the computation can be achieved. Numerical tests are conducted on a typical one-electron atom Hamiltonian. The methods converge robustly to machine precision (15 significant figures), in some cases in as few as 6 or 7 iterations. The rate of convergence is nearly independent of the finite-difference grid-point separations.
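
    The block Jacobi step itself is easy to state generically: partition the unknowns into blocks, factor (or, as here, invert) each diagonal block once as the direct part of the method, and then iterate, updating each block from the off-diagonal coupling to the previous iterate. The sketch below solves a strongly diagonally dominant random system; it is not the paper's propagator, and the instance and tolerance are arbitrary.

        import numpy as np

        rng = np.random.default_rng(7)

        nb, bs = 8, 4                       # number of blocks, block size
        n = nb * bs
        A = 0.1 * rng.standard_normal((n, n))
        A += np.diag(5.0 + rng.random(n))   # strong diagonal dominance
        b = rng.standard_normal(n)

        # Direct part of the mixed direct-iterative method: invert each
        # diagonal block once, up front.
        blocks = [slice(k * bs, (k + 1) * bs) for k in range(nb)]
        diag_inv = [np.linalg.inv(A[s, s]) for s in blocks]

        x = np.zeros(n)
        for it in range(100):               # block Jacobi iteration
            x_new = np.empty_like(x)
            for k, s in enumerate(blocks):
                # Remove this block's own contribution from A @ x.
                r = b[s] - A[s, :] @ x + A[s, s] @ x[s]
                x_new[s] = diag_inv[k] @ r
            done = np.linalg.norm(x_new - x) < 1e-12 * np.linalg.norm(x_new)
            x = x_new
            if done:
                break
        print("iterations:", it + 1, " residual:", np.linalg.norm(A @ x - b))

    Because every block update uses only the previous iterate, the inner loop over blocks can run fully in parallel; block Gauss-Seidel differs only in consuming the freshest available block values, which trades some parallelism for faster convergence.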

  9. Artificial intelligence approaches to astronomical observation scheduling

    Science.gov (United States)

    Johnston, Mark D.; Miller, Glenn

    1988-01-01

    Automated scheduling will play an increasing role in future ground- and space-based observatory operations. Due to the complexity of the problem, artificial intelligence technology currently offers the greatest potential for the development of scheduling tools with sufficient power and flexibility to handle realistic scheduling situations. Summarized here are the main features of the observatory scheduling problem, how artificial intelligence (AI) techniques can be applied, and recent progress in AI scheduling for Hubble Space Telescope.

  10. Morphology Independent Learning in Modular Robots

    DEFF Research Database (Denmark)

    Christensen, David Johan; Bordignon, Mirko; Schultz, Ulrik Pagh

    2009-01-01

    Hand-coding locomotion controllers for modular robots is difficult due to their polymorphic nature. Instead, we propose to use a simple and distributed reinforcement learning strategy. ATRON modules with identical controllers can be assembled in any configuration. To optimize the robot's locomotion speed, its modules independently and in parallel adjust their behavior based on a single global reward signal. In simulation, we study the learning strategy's performance on different robot configurations. On the physical platform, we perform learning experiments with ATRON robots learning to move as fast...

  11. Locality-Driven Parallel Static Analysis for Power Delivery Networks

    KAUST Repository

    Zeng, Zhiyu

    2011-06-01

    Large VLSI on-chip Power Delivery Networks (PDNs) are challenging to analyze due to the sheer network complexity. In this article, a novel parallel partitioning-based PDN analysis approach is presented. We use the boundary circuit responses of each partition to divide the full grid simulation problem into a set of independent subgrid simulation problems. Instead of solving exact boundary circuit responses, a more efficient scheme is proposed to provide near-exact approximation to the boundary circuit responses by exploiting the spatial locality of the flip-chip-type power grids. This scheme is also used in a block-based iterative error reduction process to achieve fast convergence. Detailed computational cost analysis and performance modeling is carried out to determine the optimal (or near-optimal) number of partitions for parallel implementation. Through the analysis of several large power grids, the proposed approach is shown to have excellent parallel efficiency, fast convergence, and favorable scalability. Our approach can solve a 16-million-node power grid in 18 seconds on an IBM p5-575 processing node with 16 Power5+ processors, which is 18.8X faster than a state-of-the-art direct solver. © 2011 ACM.

  12. Planning and scheduling - A schedule's performance

    International Nuclear Information System (INIS)

    Whitman, N.M.

    1993-01-01

    Planning and scheduling is a process whose time has come to PSI Energy. With an awareness of the challenges ahead, individuals must look for ways to enhance corporate competitiveness. Working toward this goal means that each individual has to dedicate themselves to this more competitive corporate environment. Being competitive may be defined as the ability of each employee to add value to the corporation's economic well-being. The timely and successful implementation of projects greatly enhances competitiveness. Those projects that do not do well often suffer from lack of proper execution, not from lack of talent or strategic vision. Projects are consumers of resources such as cash and people. They produce a return when completed and will generate a better return when properly completed utilizing proven project management techniques. Completing projects on time, within budget, and meeting customer expectations is the way a corporation builds its future. This paper offers suggestions on implementing planning and scheduling and provides a review of results in the form of management reports

  13. Alternative Work Schedules: Definitions

    Science.gov (United States)

    Journal of the College and University Personnel Association, 1977

    1977-01-01

    The term "alternative work schedules" encompasses any variation of the requirement that all permanent employees in an organization or one shift of employees adhere to the same five-day, seven-to-eight-hour schedule. This article defines staggered hours, flexible working hours (flexitour and gliding time), compressed work week, the task system, and…

  14. Planning and Scheduling of Airline Operations

    Directory of Open Access Journals (Sweden)

    İlkay ORHAN

    2010-02-01

    Full Text Available The Turkish civil aviation sector grew by 53% between the years 2002-2008, owing to countrywide economic development and the removal of certain restrictions in the aviation field. Successful international companies in the sector use advanced computer-supported solution methods for their planning and scheduling problems, and these methods have been providing significant competitive advantages to those companies. There are four major scheduling and planning problems in the airline sector: flight scheduling, aircraft scheduling, crew scheduling and disruption management. These scheduling and planning problems, faced by all airline companies in the sector, are examined in detail. Studies reveal that companies using the advanced methods can achieve significant cost reductions. Even then, however, the time required to solve large-scale problems may not satisfy the decision-quality requirements of decision makers. In such cases, using modern decision methods integrated with advanced technologies offers companies an opportunity for significant cost advantages.

  15. Development of Watch Schedule Using Rules Approach

    Science.gov (United States)

    Jurkevicius, Darius; Vasilecas, Olegas

    The software for schedule creation and optimization solves a difficult, important and practical problem. The proposed solution is an online employee portal where administrator users can create and manage watch schedules and employee requests. Each employee can log in with his/her own account and see his/her assignments, manage requests, etc. Employees designated as administrators can perform the employee scheduling online, manage requests, etc. This scheduling software allows users not only to see the initial and optimized watch schedule in a simple and understandable form, but also to create special rules and criteria and input their business rules. Using these rules, the system automatically generates the watch schedule.

  16. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  17. Empirical study of parallel LRU simulation algorithms

    Science.gov (United States)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD and are implemented on the MasPar MP-2 architecture. The other two algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The other two SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
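
    The quantity at the heart of all five algorithms is the LRU stack distance: for each reference, its depth in the LRU stack at the moment it is referenced, which yields the hit count for every cache size at once (a fully associative LRU cache of size C hits exactly the references with distance at most C). Below is a compact serial Python sketch of the computation the parallel algorithms distribute; the trace is made up.

        def lru_stack_distances(trace):
            """Depth of each referenced address in the LRU stack
            (1 = most recently used; infinity on first use)."""
            stack, dists = [], []
            for addr in trace:
                if addr in stack:
                    d = len(stack) - stack.index(addr)
                    stack.remove(addr)
                else:
                    d = float("inf")          # cold miss
                stack.append(addr)            # most recent on top (list end)
                dists.append(d)
            return dists

        trace = ["a", "b", "c", "a", "b", "b", "d", "a"]
        print(lru_stack_distances(trace))
        # [inf, inf, inf, 3, 3, 1, inf, 3]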

  18. Technology for planning and scheduling under complex constraints

    Science.gov (United States)

    Alguire, Karen M.; Pedro Gomes, Carla O.

    1997-02-01

    Within the context of law enforcement, several problems fall into the category of planning and scheduling under constraints. Examples include resource and personnel scheduling, and court scheduling. In the case of court scheduling, a schedule must be generated considering available resources, e.g., court rooms and personnel. Additionally, there are constraints on individual court cases, e.g., temporal and spatial, and between different cases, e.g., precedence. Finally, there are overall objectives that the schedule should satisfy, such as timely processing of cases and optimal use of court facilities. Manually generating a schedule that satisfies all of the constraints is a very time-consuming task. As the number of court cases and constraints increases, this becomes increasingly hard to handle without the assistance of automatic scheduling techniques. This paper describes artificial intelligence (AI) technology that has been used to develop several high-performance scheduling applications, including a military transportation scheduler, a military in-theater airlift scheduler, and a nuclear power plant outage scheduler. We discuss possible law enforcement applications where we feel the same technology could provide long-term benefits to law enforcement agencies and their operations personnel.

  19. From sequential to parallel programming with patterns

    CERN Document Server

    CERN. Geneva

    2018-01-01

    To increase both performance and efficiency, our programming models need to adapt to better exploit modern processors. The classic idioms and patterns for programming, such as loops, branches or recursion, are the pillars of almost every code and are well known among all programmers. These patterns all have in common that they are sequential in nature. Embracing parallel programming patterns, which allow us to program for multi- and many-core hardware in a natural way, greatly simplifies the task of designing a program that scales and performs on modern hardware, independently of the programming language used, and in a generic way.

  20. GASPRNG: GPU accelerated scalable parallel random number generator library

    Science.gov (United States)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Computer: workstation with NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070). Operating system: Linux with CUDA version 4.0 or later; should also run on MacOS, Windows, or UNIX. Has the code been vectorized or parallelized?: Yes, parallelized using MPI directives. RAM: 512 MB ~ 732 MB (main memory on host CPU, depending on the data type of random numbers) / 512 MB (GPU global memory). Classification: 4.13, 6.5. Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations can consume limitless random numbers as long as computing resources are available. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generation of independent streams of random numbers using graphical processing units (GPUs). Solution method: Multiple copies of random number generators on GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generator library that allows a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs. Running time: The tests provided take a few minutes to run.

  1. NRC comprehensive records disposition schedule

    International Nuclear Information System (INIS)

    1983-05-01

    Effective January 1, 1982, NRC will institute records retention and disposal practices in accordance with the approved Comprehensive Records Disposition Schedule (CRDS). CRDS is comprised of NRC Schedules (NRCS) 1 to 4 which apply to the agency's program or substantive records and General Records Schedules (GRS) 1 to 24 which apply to housekeeping or facilitative records. NRCS-I applies to records common to all or most NRC offices; NRCS-II applies to program records as found in the various offices of the Commission, Atomic Safety and Licensing Board Panel, and the Atomic Safety and Licensing Appeal Panel; NRCS-III applies to records accumulated by the Advisory Committee on Reactor Safeguards; and NRCS-IV applies to records accumulated in the various NRC offices under the Executive Director for Operations. The schedules are assembled functionally/organizationally to facilitate their use. Preceding the records descriptions and disposition instructions for both NRCS and GRS, there are brief statements on the organizational units which accumulate the records in each functional area, and other information regarding the schedules' applicability

  2. A Parallel Processing Algorithm for Remote Sensing Classification

    Science.gov (United States)

    Gualtieri, J. Anthony

    2005-01-01

    A current thread in parallel computation is the use of cluster computers created by networking a few to thousands of commodity general-purpose workstation-level computers using the Linux operating system. For example, on the Medusa cluster at NASA/GSFC, this provides supercomputing performance, 130 Gflops (Linpack Benchmark), at moderate cost, $370K. However, to be useful for scientific computing in the area of Earth science, issues of ease of programming, access to existing scientific libraries, and portability of existing code need to be considered. In this paper, I address these issues in the context of tools for rendering earth science remote sensing data into useful products. In particular, I focus on a problem that can be decomposed into a set of independent tasks, which on a serial computer would be performed sequentially, but with a cluster computer can be performed in parallel, giving an obvious speedup. To make the ideas concrete, I consider the problem of classifying hyperspectral imagery where some ground truth is available to train the classifier. In particular I will use the Support Vector Machine (SVM) approach as applied to hyperspectral imagery. The approach will be to introduce notions about parallel computation and then to restrict the development to the SVM problem. Pseudocode (an outline of the computation) will be described and then details specific to the implementation will be given. Then timing results will be reported to show what speedups are possible using parallel computation. The paper will close with a discussion of the results.
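
    The decomposition the paper exploits can be sketched as a task farm: tiles of an image are classified by independent workers. The toy classifier below merely stands in for the paper's trained SVM, and the data are synthetic:

    from multiprocessing import Pool

    import numpy as np

    def classify_tile(tile: np.ndarray) -> np.ndarray:
        # Stand-in for SVM prediction: threshold the mean spectral intensity.
        return (tile.mean(axis=-1) > 0.5).astype(np.uint8)

    def main():
        rng = np.random.default_rng(0)
        image = rng.random((8, 64, 64, 32))   # 8 tiles, 64x64 pixels, 32 bands
        with Pool() as pool:                  # independent tasks in parallel
            labels = pool.map(classify_tile, list(image))
        print(np.concatenate(labels).shape)   # (512, 64) classified map

    if __name__ == "__main__":
        main()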

  3. Scheduling of hybrid types of machines with two-machine flowshop as the first type and a single machine as the second type

    Science.gov (United States)

    Hsiao, Ming-Chih; Su, Ling-Huey

    2018-02-01

    This research addresses the problem of scheduling hybrid machine types, in which one type is a two-machine flowshop and the other is a single machine. A job is either processed on the two-machine flowshop or on the single machine. The objective is to determine a production schedule for all jobs so as to minimize the makespan. The problem is NP-hard, since the two-parallel-machines problem was proved to be NP-hard. Simulated annealing (SA) algorithms are developed to solve the problem. A mixed integer programming (MIP) model is developed and used to evaluate the performance of the two SAs. Computational experiments demonstrate the efficiency of the simulated annealing algorithms; the quality of their solutions is also reported.
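
    A generic SA skeleton of the kind applied here, shown on a deliberately simplified stand-in problem: jobs are assigned to one of two identical machines to minimize makespan. The paper's flowshop/single-machine hybrid needs a more elaborate makespan evaluation, and the job times below are made up:

    import math
    import random

    jobs = [7, 3, 9, 4, 6, 2, 8]  # hypothetical processing times

    def makespan(assign):
        loads = [0, 0]
        for job, m in zip(jobs, assign):
            loads[m] += job
        return max(loads)

    def anneal(temp=10.0, cooling=0.95, steps=2000, seed=1):
        rng = random.Random(seed)
        current = [rng.randrange(2) for _ in jobs]
        best = current[:]
        for _ in range(steps):
            cand = current[:]
            cand[rng.randrange(len(jobs))] ^= 1    # move one job
            delta = makespan(cand) - makespan(current)
            # Accept improvements always, worsenings with Boltzmann probability.
            if delta <= 0 or rng.random() < math.exp(-delta / temp):
                current = cand
                if makespan(current) < makespan(best):
                    best = current[:]
            temp *= cooling
        return best, makespan(best)

    print(anneal())  # typically finds the optimal makespan of 20 for these times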

  4. Data analysis with the DIANA meta-scheduling approach

    International Nuclear Information System (INIS)

    Anjum, A; McClatchey, R; Willers, I

    2008-01-01

    The concepts, design and evaluation of the Data Intensive and Network Aware (DIANA) meta-scheduling approach for solving the challenges of data analysis faced by CERN experiments are discussed in this paper. Our results suggest that data analysis can be made robust by employing fault-tolerant and decentralized meta-scheduling algorithms supported in our DIANA meta-scheduler. The DIANA meta-scheduler supports data-intensive bulk scheduling, is network aware, and follows a policy-centric meta-scheduling approach. In this paper, we demonstrate that a decentralized and dynamic meta-scheduling approach is an effective strategy to cope with increasing numbers of users, jobs and datasets. We present 'quality of service' related statistics for physics analysis through the application of a policy-centric fair-share scheduling model. The DIANA meta-schedulers create a peer-to-peer hierarchy of schedulers to accomplish resource management that changes with evolving loads and is dynamic and adapts to the volatile nature of the resources

  5. Scheduling lessons learned from the Autonomous Power System

    Science.gov (United States)

    Ringer, Mark J.

    1992-01-01

    The Autonomous Power System (APS) project at NASA LeRC is designed to demonstrate the applications of integrated intelligent diagnosis, control, and scheduling techniques to space power distribution systems. The project consists of three elements: the Autonomous Power Expert System (APEX) for Fault Diagnosis, Isolation, and Recovery (FDIR); the Autonomous Intelligent Power Scheduler (AIPS) to efficiently assign activity start times and resources; and power hardware (Brassboard) to emulate a space-based power system. The AIPS scheduler was tested within the APS system. This scheduler is able to efficiently assign available power to the requesting activities and share this information with other software agents within the APS system in order to implement the generated schedule. The AIPS scheduler is also able to cooperatively recover from fault situations by rescheduling the affected loads on the Brassboard in conjunction with the APEX FDIR system. AIPS served as a learning tool and an initial scheduling testbed for the integration of FDIR and automated scheduling systems. Many lessons were learned from the AIPS scheduler and are now being integrated into a new scheduler called SCRAP (Scheduler for Continuous Resource Allocation and Planning). This paper serves three purposes: an overview of the AIPS implementation, lessons learned from the AIPS scheduler, and a brief section on how these lessons are being applied to the new SCRAP scheduler.

  6. Animated computer graphics models of space and earth sciences data generated via the massively parallel processor

    Science.gov (United States)

    Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David

    1987-01-01

    A capability was developed for rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets by implementing computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.

  7. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  8. Optimization of Hierarchically Scheduled Heterogeneous Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Traian; Pop, Paul; Eles, Petru

    2005-01-01

    We present an approach to the analysis and optimization of heterogeneous distributed embedded systems. The systems are heterogeneous not only in terms of hardware components, but also in terms of communication protocols and scheduling policies. When several scheduling policies share a resource, they are organized in a hierarchy. In this paper, we address design problems that are characteristic to such hierarchically scheduled systems: assignment of scheduling policies to tasks, mapping of tasks to hardware components, and the scheduling of the activities. We present algorithms for solving these problems. Our heuristics are able to find schedulable implementations under limited resources, achieving an efficient utilization of the system. The developed algorithms are evaluated using extensive experiments and a real-life example.

  9. Sport Tournament Automated Scheduling System

    OpenAIRE

    Raof R. A. A; Sudin S.; Mahrom N.; Rosli A. N. C

    2018-01-01

    The organizers of sport events often face problems such as wrong calculations of marks and scores, as well as difficulty in creating a good and reliable schedule. Much of the time, issues about the integrity of committee members and about errors made by humans come into the picture. Therefore, the development of a sport tournament automated scheduling system is proposed. The system will be able to automatically generate the tournament schedule as well as automatically calc...

  10. Parallel evolution of a type IV secretion system in radiating lineages of the host-restricted bacterial pathogen Bartonella.

    Science.gov (United States)

    Engel, Philipp; Salzburger, Walter; Liesch, Marius; Chang, Chao-Chin; Maruyama, Soichi; Lanz, Christa; Calteau, Alexandra; Lajus, Aurélie; Médigue, Claudine; Schuster, Stephan C; Dehio, Christoph

    2011-02-10

    Adaptive radiation is the rapid origination of multiple species from a single ancestor as the result of concurrent adaptation to disparate environments. This fundamental evolutionary process is considered to be responsible for the genesis of a great portion of the diversity of life. Bacteria have evolved enormous biological diversity by exploiting an exceptional range of environments, yet diversification of bacteria via adaptive radiation has been documented in a few cases only and the underlying molecular mechanisms are largely unknown. Here we show a compelling example of adaptive radiation in pathogenic bacteria and reveal their genetic basis. Our evolutionary genomic analyses of the α-proteobacterial genus Bartonella uncover two parallel adaptive radiations within these host-restricted mammalian pathogens. We identify a horizontally-acquired protein secretion system, which has evolved to target specific bacterial effector proteins into host cells as the evolutionary key innovation triggering these parallel adaptive radiations. We show that the functional versatility and adaptive potential of the VirB type IV secretion system (T4SS), and thereby translocated Bartonella effector proteins (Beps), evolved in parallel in the two lineages prior to their radiations. Independent chromosomal fixation of the virB operon and consecutive rounds of lineage-specific bep gene duplications followed by their functional diversification characterize these parallel evolutionary trajectories. Whereas most Beps maintained their ancestral domain constitution, strikingly, a novel type of effector protein emerged convergently in both lineages. This resulted in similar arrays of host cell-targeted effector proteins in the two lineages of Bartonella as the basis of their independent radiation. The parallel molecular evolution of the VirB/Bep system displays a striking example of a key innovation involved in independent adaptive processes and the emergence of bacterial pathogens

  11. 40 CFR 141.702 - Sampling schedules.

    Science.gov (United States)

    2010-07-01

    ... serving at least 10,000 people must submit their sampling schedule for the initial round of source water... submitting the sampling schedule that EPA approves. (3) Systems serving fewer than 10,000 people must submit... analytical result for a scheduled sampling date due to equipment failure, loss of or damage to the sample...

  12. DEA Sensitivity Analysis for Parallel Production Systems

    Directory of Open Access Journals (Sweden)

    J. Gerami

    2011-06-01

    Full Text Available In this paper, we introduce systems consisting of several production units, each of which includes several subunits working in parallel. Meanwhile, each subunit works independently. The input and output of each production unit are the sums of the inputs and outputs of its subunits, respectively. We consider each of these subunits as an independent decision-making unit (DMU) and create the production possibility set (PPS) produced by these DMUs, in which the frontier points are considered as efficient DMUs. Then we introduce models for obtaining the efficiency of the production subunits. Using super-efficiency models, we categorize all efficient subunits into different efficiency classes. We then present the sensitivity analysis and stability problem for efficient subunits, including extreme efficient and non-extreme efficient subunits, assuming simultaneous perturbations in all inputs and outputs of subunits such that the efficiency of the subunit under evaluation declines while the efficiencies of other subunits improve.

  13. Avoidance of Timeout from Response-Independent Food: Effects of Delivery Rate and Quality

    Science.gov (United States)

    Richardson, Joseph V.; Baron, Alan

    2008-01-01

    In three experiments, a rat's lever presses could postpone timeouts from food pellets delivered on response-independent schedules. In Experiment 1, the pellets were delivered at variable-time (VT) rates ranging from VT 0.5 to VT 8 min. Experiment 2 replicated the VT 1 min and VT 8 min conditions of Experiment 1 with new subjects. Finally, subjects…

  14. Prevalence of α(+)-Thalassemia in the Scheduled Tribe and Scheduled Caste Populations of Damoh District in Madhya Pradesh, Central India.

    Science.gov (United States)

    Singh, Mendi P S S; Gupta, Rasik B; Yadav, Rajiv; Sharma, Ravendra K; Shanmugam, Rajasubramaniam

    2016-08-01

    This study was carried out to ascertain the allelic frequency of α(+)-thalassemia (α(+)-thal) in the scheduled caste and scheduled tribe populations of the Damoh district of Madhya Pradesh, India. Random blood samples from the scheduled tribe (267) and scheduled caste (168) populations, considering the family as a sampling unit, were analyzed for the presence of the -α(3.7) (rightward) (NG_000006.1: g.34164_37967del3804) and -α(4.2) (leftward) (AF221717) deletions. α(+)-Thal was significantly more prevalent among scheduled tribals (77.9%) than in the scheduled caste population (9.0%). About 58.0% of scheduled tribals carried at least one chromosome with the -α(3.7) deletion and 20.0% carried the -α(4.2) deletion. The frequency of the -α(3.7) allele was 0.487 in the scheduled tribe population, compared to 0.021 in scheduled castes. Allelic frequencies for -α(4.2) were 0.103 and 0.024, respectively, in the above communities. No Hardy-Weinberg equilibrium was observed for the α-thal gene in the scheduled tribe population, indicating the presence of selection pressures in favor of the α-thal mutation and adaptation.

  15. Routine environmental monitoring schedule, calendar year 1995

    International Nuclear Information System (INIS)

    Schmidt, J.W.; Markes, B.M.; McKinney, S.M.

    1994-12-01

    This document provides Bechtel Hanford, Inc. (BHI) and Westinghouse Hanford Company (WHC) a schedule of monitoring and sampling routines for the Operational Environmental Monitoring (OEM) program during calendar year (CY) 1995. Every attempt will be made to consistently follow this schedule; any deviation from this schedule will be documented by an internal memorandum (DSI) explaining the reason for the deviation. The DSI will be issued by the scheduled performing organization and directed to Near-Field Monitoring. The survey frequencies for particular sites are determined by the technical judgment of Near-Field Monitoring and may depend on the site history, radiological status, use and general conditions. Additional surveys may be requested at irregular frequencies if conditions warrant. All radioactive waste sites are scheduled to be surveyed at least annually. Any newly discovered waste sites not documented by this schedule will be included in the revised schedule for CY 1995

  16. A PMBGA to Optimize the Selection of Rules for Job Shop Scheduling Based on the Giffler-Thompson Algorithm

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2012-01-01

    Full Text Available Most existing research on the job shop scheduling problem has been focused on the minimization of makespan (i.e., the completion time of the last job. However, in the fiercely competitive market nowadays, delivery punctuality is more important for maintaining a high service reputation. So in this paper, we aim at solving job shop scheduling problems with the total weighted tardiness objective. Several dispatching rules are adopted in the Giffler-Thompson algorithm for constructing active schedules. It is noticeable that the rule selections for scheduling consecutive operations are not mutually independent but actually interrelated. Under such circumstances, a probabilistic model-building genetic algorithm (PMBGA is proposed to optimize the sequence of selected rules. First, we use Bayesian networks to model the distribution characteristics of high-quality solutions in the population. Then, the new generation of individuals is produced by sampling the established Bayesian network. Finally, some elitist individuals are further improved by a special local search module based on parameter perturbation. The superiority of the proposed approach is verified by extensive computational experiments and comparisons.
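
    The Giffler-Thompson construction that the dispatching rules plug into can be written compactly. A sketch with hypothetical job data and a single fixed rule; the PMBGA's role is to optimize the sequence of rule choices that such a skeleton consumes:

    # Each operation is (machine, duration); a job's operations run in order.
    jobs = [
        [(0, 3), (1, 2)],
        [(1, 4), (0, 1)],
        [(0, 2), (1, 3)],
    ]

    def spt(conflict):
        # One possible dispatching rule: shortest processing time. The PMBGA
        # instead learns which rule to apply at each decision point.
        return min(conflict, key=lambda c: c[3])

    def giffler_thompson(rule=spt):
        nxt = [0] * len(jobs)          # index of each job's next operation
        job_ready = [0] * len(jobs)    # earliest start time per job
        mach_free = {}                 # earliest start time per machine
        schedule = []
        remaining = sum(len(ops) for ops in jobs)
        while remaining:
            # Candidate = (job, machine, earliest start, duration).
            cands = []
            for j, ops in enumerate(jobs):
                if nxt[j] < len(ops):
                    m, d = ops[nxt[j]]
                    cands.append((j, m, max(job_ready[j], mach_free.get(m, 0)), d))
            # Find the earliest-completing candidate, then let the rule pick
            # among the conflicting operations on that machine.
            _, m0, s0, d0 = min(cands, key=lambda c: c[2] + c[3])
            conflict = [c for c in cands if c[1] == m0 and c[2] < s0 + d0]
            j, m, s, d = rule(conflict)
            schedule.append((j, m, s, s + d))
            job_ready[j] = mach_free[m] = s + d
            nxt[j] += 1
            remaining -= 1
        return schedule

    print(giffler_thompson())  # list of (job, machine, start, end) tuples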

  17. Experiences in the parallelization of the discrete ordinates method using OpenMP and MPI

    Energy Technology Data Exchange (ETDEWEB)

    Pautz, A. [TUV Hannover/Sachsen-Anhalt e.V. (Germany); Langenbuch, S. [Gesellschaft fur Anlagen- und Reaktorsicherheit (GRS) mbH (Germany)

    2003-07-01

    The method of Discrete Ordinates is in principle parallelizable to a high degree, since the transport 'mesh sweeps' are mutually independent for all angular directions. However, in the well-known production code Dort such a type of angular domain decomposition has to be done on a spatial line-by-line basis, causing the parallelism in the code to be very fine-grained. The construction of scalar fluxes and moments requires a large effort for inter-thread or inter-process communication. We have implemented two different parallelization approaches in Dort: firstly, we have used a shared-memory model suitable for SMP (Symmetric Multiprocessor) machines based on the standard OpenMP. The second approach uses the well-known Message Passing Interface (MPI) to establish communication between parallel processes running in a distributed-memory environment. We investigate the benefits and drawbacks of both models and show first results on performance and scaling behaviour of the parallel Dort code. (authors)

  18. Experiences in the parallelization of the discrete ordinates method using OpenMP and MPI

    International Nuclear Information System (INIS)

    Pautz, A.; Langenbuch, S.

    2003-01-01

    The method of Discrete Ordinates is in principle parallelizable to a high degree, since the transport 'mesh sweeps' are mutually independent for all angular directions. However, in the well-known production code Dort such a type of angular domain decomposition has to be done on a spatial line-by-line basis, causing the parallelism in the code to be very fine-grained. The construction of scalar fluxes and moments requires a large effort for inter-thread or inter-process communication. We have implemented two different parallelization approaches in Dort: firstly, we have used a shared-memory model suitable for SMP (Symmetric Multiprocessor) machines based on the standard OpenMP. The second approach uses the well-known Message Passing Interface (MPI) to establish communication between parallel processes running in a distributed-memory environment. We investigate the benefits and drawbacks of both models and show first results on performance and scaling behaviour of the parallel Dort code. (authors)

  19. Model-independent partial wave analysis using a massively-parallel fitting framework

    Science.gov (United States)

    Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.

    2017-10-01

    The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D⁺ → h⁺h⁺h⁻. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m²(h⁺h⁻) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
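
    The interpolation idea is easy to sketch: anchor magnitude and phase at a few control points and evaluate cubic splines in between. The control-point values below are invented; GooFit fits such values on the GPU as part of the likelihood:

    import numpy as np
    from scipy.interpolate import CubicSpline

    m2 = np.array([0.4, 0.8, 1.2, 1.6, 2.0])    # control points in m2(h+h-)
    mag = np.array([1.0, 1.8, 2.5, 1.4, 0.6])   # |A| anchored at control points
    phase = np.array([0.2, 0.9, 1.7, 2.4, 2.8]) # arg(A) in radians

    amp_mag = CubicSpline(m2, mag)
    amp_phase = CubicSpline(m2, phase)

    def s_wave(m2_val):
        # Complex S-wave amplitude interpolated between anchored control points.
        return amp_mag(m2_val) * np.exp(1j * amp_phase(m2_val))

    print(s_wave(1.0))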

  20. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  1. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed on parallel computing platforms.

  2. Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP)

    CERN Document Server

    Calafiura, Paolo; The ATLAS collaboration; Seuster, Rolf; Tsulaia, Vakhtang; van Gemmeren, Peter

    2015-01-01

    AthenaMP is a multi-process version of the ATLAS reconstruction and data analysis framework Athena. By leveraging Linux fork and copy-on-write, it allows the sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service) which allows to run AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP in the...

  3. Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP)

    CERN Document Server

    Calafiura, Paolo; Seuster, Rolf; Tsulaia, Vakhtang; van Gemmeren, Peter

    2015-01-01

    AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service) which allows to run AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of Ath...

  4. Interactive Dynamic Mission Scheduling for ASCA

    Science.gov (United States)

    Antunes, A.; Nagase, F.; Isobe, T.

    The Japanese X-ray astronomy satellite ASCA (Advanced Satellite for Cosmology and Astrophysics) mission requires scheduling for each 6-month observation phase, further broken down into weekly schedules at a few minutes' resolution. Two tools, SPIKE and NEEDLE, written in Lisp and C, use artificial intelligence (AI) techniques combined with a graphic user interface for fast creation and alteration of mission schedules. These programs consider viewing and satellite attitude constraints as well as observer-requested criteria and present an optimized set of solutions for review by the planner. Six-month schedules at 1-day resolution are created for an oversubscribed set of targets by the SPIKE software, originally written for HST and presently being adapted for EUVE, XTE and AXAF. The NEEDLE code creates weekly schedules at 1-min resolution using in-house orbital routines and creates output for processing by the command generation software. Schedule creation on both the long- and short-term scale is rapid: less than 1 day for long-term, and one hour for short-term.

  5. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  6. Automated scheduling and planning from theory to practice

    CERN Document Server

    Ozcan, Ender; Urquhart, Neil

    2013-01-01

    Solving scheduling problems has long presented a challenge for computer scientists and operations researchers. The field continues to expand as researchers and practitioners examine ever more challenging problems and develop automated methods capable of solving them. This book provides 11 case studies in automated scheduling, submitted by leading researchers from across the world. Each case study examines a challenging real-world problem by analysing the problem in detail before investigating how the problem may be solved using state-of-the-art techniques. The areas covered include aircraft scheduling, microprocessor instruction scheduling, sports fixture scheduling, exam scheduling, personnel scheduling and production scheduling. Problem-solving methodologies covered include exact as well as (meta)heuristic approaches, such as local search techniques, linear programming, genetic algorithms and ant colony optimisation. The field of automated scheduling has the potential to impact many aspects of our lives...

  7. Characterization of Harmonic Signal Acquisition with Parallel Dipole and Multipole Detectors

    Science.gov (United States)

    Park, Sung-Gun; Anderson, Gordon A.; Bruce, James E.

    2018-04-01

    Fourier transform ion cyclotron resonance mass spectrometry (FTICR-MS) is a powerful instrument for the study of complex biological samples due to its high resolution and mass measurement accuracy. However, the relatively long signal acquisition periods needed to achieve high resolution can serve to limit applications of FTICR-MS. The use of multiple pairs of detector electrodes enables detection of harmonic frequencies present at integer multiples of the fundamental cyclotron frequency, and the obtained resolving power for a given acquisition period increases linearly with the order of harmonic signal. However, harmonic signal detection also increases spectral complexity and presents challenges for interpretation. In the present work, ICR cells with independent dipole and harmonic detection electrodes and preamplifiers are demonstrated. A benefit of this approach is the ability to independently acquire fundamental and multiple harmonic signals in parallel using the same ions under identical conditions, enabling direct comparison of achieved performance as parameters are varied. Spectra from harmonic signals showed generally higher resolving power than spectra acquired with fundamental signals and equal signal duration. In addition, the maximum observed signal to noise (S/N) ratio from harmonic signals exceeded that of fundamental signals by 50 to 100%. Finally, parallel detection of fundamental and harmonic signals enables deconvolution of overlapping harmonic signals since observed fundamental frequencies can be used to unambiguously calculate all possible harmonic frequencies. Thus, the present application of parallel fundamental and harmonic signal acquisition offers a general approach to improve utilization of harmonic signals to yield high-resolution spectra with decreased acquisition time.
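
    The deconvolution step lends itself to a short sketch: every fundamental frequency predicts harmonics at exact integer multiples, so observed harmonic peaks can be matched back to fundamentals. The frequencies below are invented for illustration:

    # Fundamentals as measured on the dipole detector (Hz, hypothetical).
    fundamentals = [120_000.0, 154_300.0, 201_750.0]

    def assign(peak_hz, order, tol_hz=1.0):
        # Match an observed nth-order harmonic peak to the fundamental(s)
        # whose integer multiple falls within tolerance.
        return [f for f in fundamentals if abs(order * f - peak_hz) <= tol_hz]

    print(assign(462_900.0, order=3))   # -> [154300.0]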

  8. Large scale simulations of lattice QCD thermodynamics on Columbia Parallel Supercomputers

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1989-01-01

    The Columbia Parallel Supercomputer project aims at the construction of a parallel processing, multi-gigaflop computer optimized for numerical simulations of lattice QCD. The project has three stages: a 16-node, 1/4GF machine completed in April 1985, a 64-node, 1GF machine completed in August 1987, and a 256-node, 16GF machine now under construction. The machines all share a common architecture: a two-dimensional torus formed from a rectangular array of N1 × N2 independent and identical processors. A processor is capable of operating in a multi-instruction multi-data mode, except for periods of synchronous interprocessor communication with its four nearest neighbors. Here the thermodynamics simulations on the two working machines are reported. (orig./HSI)

  9. Independent Auditor's Approach to the Concept of Fraud in Accounting Standards

    Directory of Open Access Journals (Sweden)

    Handan Bulca

    2015-07-01

    Full Text Available Increased control in the field of standards, quality and safety is envisaged to reduce fraud and error. In parallel with these changes, independent audit became mandatory for companies that meet the conditions set out in the Official Gazette dated 14 March 2014. With the implementation of auditing standards, corporate companies will become more transparent. Users of information will be able to make sound decisions thanks to more reliable information that meets these standards. The study also redefines the concept of the independent auditor and indicates the audit process an independent auditor should follow. The study focuses on the risk of fraud and error from the perspective of Independent Auditing Standard No. 240.

  10. CERCLA document flow: Compressing the schedule, saving costs, and expediting review at the Savannah River Site

    International Nuclear Information System (INIS)

    Hoffman, W.D.

    1991-01-01

    The purpose of this paper is to convey the logic of the CERCLA document flow including Work Plans, Characterization Studies, Risk Assessments, Remedial Investigations, Feasibility Studies, proposed plans, and Records of Decision. The intent is to show how schedules at the Savannah River Site are being formulated to accomplish work using an observational approach where carefully planned tasks can be initiated early and carried out in parallel. This paper will share specific proactive experience in working with the EPA to expedite projects, begin removal actions, take interim actions, speed document flow, and eliminate unnecessary documents from the review cycle

  11. PRACTICAL IMPLICATIONS OF LOCATION-BASED SCHEDULING

    DEFF Research Database (Denmark)

    Andersson, Niclas; Christensen, Knud

    2007-01-01

    The traditional method for planning, scheduling and controlling activities and resources in construction projects is CPM scheduling, which has been the predominant scheduling method since its introduction in the late 1950s. Over the years, CPM has proven to be a very powerful technique. This study considers an alternative, location-based scheduling (LBS). LBS is a scheduling method that rests upon the theories of line-of-balance and which uses the graphic representation of a flowline chart. As such, LBS is adapted for planning and management of workflows and, thus, may provide a solution to the identified shortcomings of CPM. Even...

  12. Environmental surveillance master sampling schedule

    International Nuclear Information System (INIS)

    Bisping, L.E.

    1991-01-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest Laboratory (PNL) for the US Department of Energy (DOE). This document contains the planned schedule for routine sample collection for the Surface Environmental Surveillance Project (SESP) and Ground-Water Monitoring Project. The routine sampling plan for the SESP has been revised this year to reflect changing site operations and priorities. Some sampling previously performed at least annually has been reduced in frequency, and some new sampling to be performed at a less than annual frequency has been added. Therefore, the SESP schedule reflects sampling to be conducted in calendar year 1991 as well as future years. The ground-water sampling schedule is for 1991. This schedule is subject to modification during the year in response to changes in Site operation, program requirements, and the nature of the observed results. Operational limitations such as weather, mechanical failures, sample availability, etc., may also require schedule modifications. Changes will be documented in the respective project files, but this plan will not be reissued. The purpose of these monitoring projects is to evaluate levels of radioactive and nonradioactive pollutants in the Hanford environs

  13. Environmental surveillance master sampling schedule

    Energy Technology Data Exchange (ETDEWEB)

    Bisping, L.E.

    1991-01-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest Laboratory (PNL) for the US Department of Energy (DOE). This document contains the planned schedule for routine sample collection for the Surface Environmental Surveillance Project (SESP) and Ground-Water Monitoring Project. The routine sampling plan for the SESP has been revised this year to reflect changing site operations and priorities. Some sampling previously performed at least annually has been reduced in frequency, and some new sampling to be performed at a less than annual frequency has been added. Therefore, the SESP schedule reflects sampling to be conducted in calendar year 1991 as well as future years. The ground-water sampling schedule is for 1991. This schedule is subject to modification during the year in response to changes in Site operation, program requirements, and the nature of the observed results. Operational limitations such as weather, mechanical failures, sample availability, etc., may also require schedule modifications. Changes will be documented in the respective project files, but this plan will not be reissued. The purpose of these monitoring projects is to evaluate levels of radioactive and nonradioactive pollutants in the Hanford environs.

  14. Parallel performance of the angular versus spatial domain decomposition for discrete ordinates transport methods

    International Nuclear Information System (INIS)

    Fischer, J.W.; Azmy, Y.Y.

    2003-01-01

    A previously reported parallel performance model for Angular Domain Decomposition (ADD) of the Discrete Ordinates method for solving multidimensional neutron transport problems is revisited for further validation. Three communication schemes: native MPI, the bucket algorithm, and the distributed bucket algorithm, are included in the validation exercise that is successfully conducted on a Beowulf cluster. The parallel performance model is comprised of three components: serial, parallel, and communication. The serial component is largely independent of the number of participating processors, P, while the parallel component decreases like 1/P. These two components are independent of the communication scheme, in contrast with the communication component that typically increases with P in a manner highly dependent on the global reduced algorithm. Correct trends for each component and each communication scheme were measured for the Arbitrarily High Order Transport (AHOT) code, thus validating the performance models. Furthermore, extensive experiments illustrate the superiority of the bucket algorithm. The primary question addressed in this research is: for a given problem size, which domain decomposition method, angular or spatial, is best suited to parallelize Discrete Ordinates methods on a specific computational platform? We address this question for three-dimensional applications via parallel performance models that include parameters specifying the problem size and system performance: the above-mentioned ADD, and a previously constructed and validated Spatial Domain Decomposition (SDD) model. We conclude that for large problems the parallel component dwarfs the communication component even on moderately large numbers of processors. The main advantages of SDD are: (a) scalability to higher numbers of processors of the order of the number of computational cells; (b) smaller memory requirement; (c) better performance than ADD on high-end platforms and large number of
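
    The three-component model is simple enough to evaluate directly. A sketch with hypothetical coefficients; the paper's actual models parameterize problem size and platform performance in much more detail, and the communication term depends on the chosen scheme (native MPI, bucket, or distributed bucket):

    import math

    def run_time(P, t_serial=2.0, t_parallel=200.0, c_comm=0.05):
        # Serial part independent of P, parallel part scaling as 1/P, and a
        # communication part that grows with P (illustrative functional form).
        return t_serial + t_parallel / P + c_comm * math.log2(P) * P

    def speedup(P, **kw):
        return run_time(1, **kw) / run_time(P, **kw)

    for P in (1, 2, 4, 8, 16, 32, 64):
        print(f"P={P:3d}  speedup={speedup(P):6.2f}")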

  15. Schedule and staffing of a nuclear power project

    International Nuclear Information System (INIS)

    Polliart, A.J.; Csik, B.

    1977-01-01

    Establishment of construction schedule: a) preliminary construction schedule; b) PERT (Program Evaluation Review Techniques) analytical method; c) identify key milestone target dates; d) interaction by participants and contribution to support revised construction schedule. - Construction schedule control: a) ability to update and modify construction schedule; b) alternate plans to circumvent restraints (problems); c) critical path activity-controls; d) continuous review and report system. - Updating: a) construction site reports to include 1) progress, 2) accomplishments, 3) potential problems and alternate plans; b) progress reports on related support services; c) total assessment of participating groups on schedule; d) information required by management for decisions. - Typical causes for delays in project schedule. (orig.) [de]

  16. Impact of modules on the ACR construction schedule

    International Nuclear Information System (INIS)

    Choy, Ed; Elgohary, Medhat; Fairclough, Neville; Yu, Stephen; Murayama, Kouichi; Miura, Jun; Kawahata, Junichi

    2003-01-01

    The ACR (Advanced CANDU Reactor), developed by Atomic Energy of Canada Ltd. (AECL), is designed with constructability considerations as a major requirement during all project phases from the concept design stage to the detail design stage. For ACR-700, a project schedule of 48 months has been developed for the nth replicated unit with a 36 month construction period duration from First Concrete to Fuel Load. AECL, recognizing the immense benefit of collective experience, is partnering with Hitachi Ltd in the development of the ACR power plant design. AECL has gained valuable experience in implementing new construction methods at the Qinshan (Phase III) twin unit CANDU 6 plant in China, and Hitachi likewise has enjoyed success in modular construction of ABWRs in Japan. Utilizing these experiences, AECL is developing the ACR nuclear steam plant (NSP) and Hitachi is developing the Turbine Building. An overall construction strategy, which builds on the success of these construction methods from the nuclear power plant developments in China and Japan, has been developed for the ACR. The overall construction strategy comprises the 'Open Top' construction technique using a Very Heavy Lift crane, parallel construction activities, with extensive modularization and prefabrication. Modules and prefabrications are major features of the ACR design, resulting in an excess of 80% of Reactor Building internal work being completed as modules or as very streamlined traditional construction. This paper reviews the ACR construction strategy and provides examples of modules and how they impact on the ACR construction schedule. In conclusion, the ACR-700 is designed using the latest, proven construction methods to achieve a 36 month construction period for the nth replicated unit. (author)

  17. Online Scheduling in Manufacturing A Cumulative Delay Approach

    CERN Document Server

    Suwa, Haruhiko

    2013-01-01

    Online scheduling is recognized as the crucial decision-making process of production control at the phase of "being in production" according to the released shop floor schedule. Online scheduling can also be considered one of the key enablers to realize prompt capable-to-promise as well as available-to-promise to customers, along with reducing production lead times under recent globalized competitive markets. Online Scheduling in Manufacturing introduces new approaches to online scheduling based on a concept of cumulative delay. The cumulative delay is regarded as consolidated information of uncertainties under a dynamic environment in manufacturing and can be collected constantly without much effort at any point in time during a schedule execution. In this approach, the cumulative delay of the schedule has the important role of a criterion for making a decision whether or not a schedule revision is carried out. The cumulative delay approach to trigger schedule revisions has the following capabilities for the ...

  18. Limited Preemptive Scheduling in Real-time Systems

    OpenAIRE

    Thekkilakattil, Abhilash

    2016-01-01

    Preemptive and non-preemptive scheduling paradigms typically introduce undesirable side effects when scheduling real-time tasks, mainly in the form of preemption overheads and blocking, that potentially compromise timeliness guarantees. The high preemption overheads in preemptive real-time scheduling may imply high resource utilization, often requiring significant over-provisioning, e.g., pessimistic Worst Case Execution Time (WCET) approximations. Non-preemptive scheduling, on the other hand...

  19. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    Science.gov (United States)

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
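
    The load-prediction idea can be sketched in a device-agnostic way: each batch of work is split between the devices in proportion to their measured throughput, so the faster device (the GPU in the paper) receives more work. The throughput numbers below are hypothetical, and this is not the paper's code:

    # Work units per second, measured and updated online during the run.
    measured = {"cpu": 1.0, "gpu": 5.0}

    def split_batch(n_items):
        total = sum(measured.values())
        shares = {dev: round(n_items * rate / total)
                  for dev, rate in measured.items()}
        # Fix rounding so the shares cover the whole batch exactly.
        shares["cpu"] += n_items - sum(shares.values())
        return shares

    print(split_batch(1600))   # -> {'cpu': 267, 'gpu': 1333}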

  20. Effects of response-independent stimuli on fixed-interval and fixed-ratio performance of rats: a model for stressful disruption of cyclical eating patterns.

    Science.gov (United States)

    Reed, Phil

    2011-03-01

    Binge eating is often associated with stress-induced disruption of typical eating patterns. Three experiments were performed with the aim of developing a potential model for this effect by investigating the effect of presenting response-independent stimuli on rats' lever-pressing for food reinforcement during both fixed-interval (FI) and fixed-ratio (FR) schedules of reinforcement. In Experiment 1, a response-independent brief tone (500-ms, 105-dB, broadband, noisy signal, ranging up to 16 kHz, with spectral peaks at 3 and 500 Hz) disrupted the performance on an FI 60-s schedule. Responding with the response-independent tone was more vigorous than in the absence of the tone. This effect was replicated in Experiment 2 using a within-subject design, but no such effect was noted when a light was employed as a disrupter. In Experiment 3, a 500-ms tone, but not a light, had a similar effect on rats' performance on FR schedules. This tone-induced effect may represent a release from response-inhibition produced by an aversive event. The implications of these results for modeling binge eating are discussed.

  1. Clinch River Breeder Reactor Plant Project: construction schedule

    International Nuclear Information System (INIS)

    Purcell, W.J.; Martin, E.M.; Shivley, J.M.

    1982-01-01

    The construction schedule for the Clinch River Breeder Reactor Plant and its evolution are described. The initial schedule basis, changes necessitated by the evaluation of the overall plant design, and constructability improvements that have been effected to assure adherence to the schedule are presented. The schedule structure and hierarchy are discussed, as are tools used to define, develop, and evaluate the schedule

  2. A backtracking algorithm for the stream AND-parallel execution of logic programs

    Energy Technology Data Exchange (ETDEWEB)

    Somogyi, Z.; Ramamohanarao, K.; Vaghani, J. (Univ. of Melbourne, Parkville (Australia))

    1988-06-01

    The authors present the first backtracking algorithm for stream AND-parallel logic programs. It relies on compile-time knowledge of the data flow graph of each clause to let it figure out efficiently which goals to kill or restart when a goal fails. This crucial information, which they derive from mode declarations, was not available at compile-time in any previous stream AND-parallel system. They show that modes can increase the precision of the backtracking algorithm, though their algorithm allows this precision to be traded off against overhead on a procedure-by-procedure and call-by-call basis. The modes also allow their algorithm to handle efficiently programs that manipulate partially instantiated data structures and an important class of programs with circular dependency graphs. On code that does not need backtracking, the efficiency of their algorithm approaches that of the committed-choice languages; on code that does need backtracking its overhead is comparable to that of the independent AND-parallel backtracking algorithms.

  3. The Protein Maker: an automated system for high-throughput parallel purification

    International Nuclear Information System (INIS)

    Smith, Eric R.; Begley, Darren W.; Anderson, Vanessa; Raymond, Amy C.; Haffner, Taryn E.; Robinson, John I.; Edwards, Thomas E.; Duncan, Natalie; Gerdts, Cory J.; Mixon, Mark B.; Nollert, Peter; Staker, Bart L.; Stewart, Lance J.

    2011-01-01

    The Protein Maker instrument addresses a critical bottleneck in structural genomics by allowing automated purification and buffer testing of multiple protein targets in parallel with a single instrument. Here, the use of this instrument to (i) purify multiple influenza-virus proteins in parallel for crystallization trials and (ii) identify optimal lysis-buffer conditions prior to large-scale protein purification is described. The Protein Maker is an automated purification system developed by Emerald BioSystems for high-throughput parallel purification of proteins and antibodies. This instrument allows multiple load, wash and elution buffers to be used in parallel along independent lines for up to 24 individual samples. To demonstrate its utility, its use in the purification of five recombinant PB2 C-terminal domains from various subtypes of the influenza A virus is described. Three of these constructs crystallized and one diffracted X-rays to sufficient resolution for structure determination and deposition in the Protein Data Bank. Methods for screening lysis buffers for a cytochrome P450 from a pathogenic fungus prior to upscaling expression and purification are also described. The Protein Maker has become a valuable asset within the Seattle Structural Genomics Center for Infectious Disease (SSGCID) and hence is a potentially valuable tool for a variety of high-throughput protein-purification applications

  4. Utilization Bound of Non-preemptive Fixed Priority Schedulers

    Science.gov (United States)

    Park, Moonju; Chae, Jinseok

    It is known that the schedulability of a non-preemptive task set with fixed priority can be determined in pseudo-polynomial time. However, since Rate Monotonic scheduling is not optimal for non-preemptive scheduling, the applicability of existing polynomial time tests that provide sufficient schedulability conditions, such as Liu and Layland's bound, is limited. This letter proposes a new sufficient condition for non-preemptive fixed priority scheduling that can be used for any fixed priority assignment scheme. It is also shown that the proposed schedulability test has a tighter utilization bound than existing test methods.
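
    For orientation, the classic sufficient test that such bounds refine can be written in a few lines: the Liu and Layland utilization bound augmented with a blocking term, since under non-preemptive execution a task can be blocked for up to the longest lower-priority execution time. The task set is hypothetical, and this is the textbook-style condition, not the letter's new bound:

    # Tasks are (execution time C, period T), sorted highest priority first
    # (e.g. Rate Monotonic order).
    tasks = [(1, 5), (2, 10), (3, 20)]

    def sufficient_np(tasks):
        for i in range(len(tasks)):
            util = sum(c / t for c, t in tasks[: i + 1])
            blocking = max((c for c, _ in tasks[i + 1:]), default=0)
            bound = (i + 1) * (2 ** (1 / (i + 1)) - 1)
            if util + blocking / tasks[i][1] > bound:
                return False  # the sufficient test is inconclusive
        return True           # schedulable under this sufficient condition

    print(sufficient_np(tasks))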

  5. Efficient relaxed-Jacobi smoothers for multigrid on parallel computers

    Science.gov (United States)

    Yang, Xiang; Mittal, Rajat

    2017-03-01

    In this Technical Note, we present a family of Jacobi-based multigrid smoothers suitable for the solution of discretized elliptic equations. These smoothers are based on the idea of scheduled-relaxation Jacobi proposed recently by Yang & Mittal (2014) [18] and employ two or three successive relaxed Jacobi iterations with relaxation factors derived so as to maximize the smoothing property of these iterations. The performance of these new smoothers, measured in terms of convergence acceleration and computational workload, is assessed for multi-domain implementations typical of parallelized solvers, and compared to the lexicographic point Gauss-Seidel smoother. The tests include the geometric multigrid method on structured grids as well as the algebraic multigrid method on unstructured grids. The tests demonstrate that unlike Gauss-Seidel, the convergence of these Jacobi-based smoothers is unaffected by domain decomposition, and furthermore, they outperform the lexicographic Gauss-Seidel by factors that increase with domain partition count.
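
    The basic ingredient, successive weighted Jacobi sweeps with alternating over- and under-relaxation, can be sketched on a 1D Poisson problem. The two factors below are illustrative, chosen only so that the paired sweep stays stable; they are not the optimized values derived in the paper:

    import numpy as np

    n = 65
    h = 1.0 / (n - 1)
    b = np.full(n, 1.0)          # right-hand side
    u = np.zeros(n)              # initial guess; u[0] = u[-1] = 0 (Dirichlet)

    def relaxed_jacobi_sweep(u, omega):
        # Weighted Jacobi for -u'' = b discretized as
        # (2u_i - u_{i-1} - u_{i+1}) / h^2 = b_i on interior points.
        new = u.copy()
        new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * b[1:-1]
        )
        return new

    for _ in range(50):
        for omega in (1.8, 0.6):   # one over-relaxed, one under-relaxed sweep
            u = relaxed_jacobi_sweep(u, omega)

    residual = np.zeros(n)
    residual[1:-1] = b[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    print(np.max(np.abs(residual)))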

  6. Integrated Job Scheduling and Network Routing

    DEFF Research Database (Denmark)

    Gamst, Mette; Pisinger, David

    2013-01-01

    We consider an integrated job scheduling and network routing problem which appears in Grid Computing and production planning. The problem is to schedule a number of jobs at a finite set of machines, such that the overall profit of the executed jobs is maximized. Each job demands a number of resources which must be sent to the executing machine through a network with limited capacity. A job cannot start before all of its resources have arrived at the machine. The scheduling problem is formulated as a Mixed Integer Program (MIP) and proved to be NP-hard. An exact solution approach using Dantzig-Wolfe decomposition is proposed. Computational results indicate that the algorithm can be used as an actual scheduling algorithm in the Grid or as a tool for analyzing Grid performance when adding extra machines or jobs. © 2012 Wiley Periodicals, Inc.

  7. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
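
    The serial baseline being improved upon can be pictured with the textbook dynamic program for partitioning a chain of m modules into n contiguous blocks so as to minimize the bottleneck load. The O(n m^2) sketch below illustrates the problem only; it is not Nicol's O(nm log m) algorithm.

    ```python
    def min_bottleneck_partition(loads, n):
        """Partition a chain of module loads into n contiguous blocks,
        minimizing the maximum block sum (the pipeline bottleneck).

        Textbook O(n * m^2) dynamic program, shown as a baseline only.
        """
        m = len(loads)
        prefix = [0]
        for w in loads:
            prefix.append(prefix[-1] + w)
        INF = float("inf")
        # dp[k][i]: best bottleneck for the first i modules on k processors
        dp = [[INF] * (m + 1) for _ in range(n + 1)]
        dp[0][0] = 0
        for k in range(1, n + 1):
            for i in range(1, m + 1):
                for j in range(k - 1, i):
                    block = prefix[i] - prefix[j]
                    dp[k][i] = min(dp[k][i], max(dp[k - 1][j], block))
        return dp[n][m]

    print(min_bottleneck_partition([4, 2, 7, 1, 3, 5], 3))  # -> 8
    ```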

  8. Estimating exponential scheduling preferences

    DEFF Research Database (Denmark)

    Hjorth, Katrine; Börjesson, Maria; Engelson, Leonid

    2015-01-01

    Different assumptions about travelers' scheduling preferences yield different measures of the cost of travel time variability. Only few forms of scheduling preferences provide non-trivial measures which are additive over links in transport networks where link travel times are arbitrarily distributed. We estimate such preferences from data on car drivers' route and mode choice under uncertain travel times. Our analysis exposes some important methodological issues related to complex non-linear scheduling models: One issue is identifying the point in time where the marginal utility of being at the destination becomes larger than the marginal utility of being at the origin. Another issue is that models with the exponential marginal utility formulation suffer from empirical identification problems. Though our results are not decisive, they partly support the constant-affine specification, in which the value of travel time variability ...
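
    In this family of models, trip utility is commonly written in terms of time-varying marginal utility rates at the origin and at the destination; the notation below is one standard way to set this up, assumed here for illustration rather than taken from the paper:

    ```latex
    U(t_d, t_a) \;=\; \int_{0}^{t_d} h(t)\,dt \;+\; \int_{t_a}^{T} w(t)\,dt
    ```

    Here t_d is the departure time, t_a the arrival time, h(t) the marginal utility of time spent at the origin and w(t) that of time spent at the destination. The crossing point discussed above is the t* solving h(t*) = w(t*), and an exponential formulation takes, for example, w(t) = beta * exp(gamma * t).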

  9. The role of scheduled second TACE in early-stage hepatocellular carcinoma with complete response to initial TACE

    Directory of Open Access Journals (Sweden)

    Jung Hee Kim

    2017-03-01

    Background/Aims: We investigated the outcomes of early-stage hepatocellular carcinoma (HCC) patients who showed a complete response (CR) to initial transarterial chemoembolization (TACE), with a focus on the role of scheduled TACE repetition. Methods: A total of 178 patients with early-stage HCC who were initially treated with TACE and showed a CR based on the modified Response Evaluation Criteria in Solid Tumors (mRECIST) on one-month follow-up computed tomography (CT) were analyzed. Among them, 90 patients underwent scheduled repetition of TACE in the absence of viable tumor on CT. Results: During a median follow-up period of 4.6 years (range: 0.4-8.8 years), mortality was observed in 71 patients (39.9%). The overall recurrence-free and local recurrence-free survival rates at 1 year were 44.4% and 56.2%, respectively. In the multivariable model, scheduled repetition of TACE was an independent factor associated with survival (hazard ratio [95% confidence interval]: 0.56 [0.34-0.93], P=0.025). When stratified by Barcelona Clinic Liver Cancer (BCLC) stage, scheduled repetition of TACE was associated with a favorable survival rate in BCLC stage A patients, but not in BCLC stage 0 patients. Conclusions: Scheduled repetition of TACE was associated with better survival for early-stage HCC patients showing a CR after initial TACE, especially in BCLC stage A patients.

  10. Group Elevator Peak Scheduling Based on Robust Optimization Model

    Directory of Open Access Journals (Sweden)

    ZHANG, J.

    2013-08-01

    Scheduling of an Elevator Group Control System (EGCS) is a typical combinatorial optimization problem, and uncertain group scheduling under peak traffic flows has recently become a research focus and challenge. Robust Optimization (RO) is a novel and effective way to deal with such uncertain scheduling problems. In this paper, a peak scheduling method based on an RO model for a multi-elevator system is proposed. The method is immune to the uncertainty of peak traffic flows: optimal scheduling is achieved without knowing the exact number of waiting passengers at each calling floor. Specifically, an energy-saving-oriented multi-objective scheduling price is proposed, and an RO-based uncertain peak scheduling model is built to minimize this price. Because the uncertain RO model cannot be solved directly, it is transformed into a certain model via robust counterparts of the elevator scheduling constraints. Because the solution space of elevator scheduling is enormous, an ant colony algorithm is proposed to solve the certain model in a short time. Based on this algorithm, optimal scheduling solutions are found quickly and the group elevators are scheduled accordingly. Simulation results show that the method effectively improves scheduling performance in the peak pattern, realizing efficient operation of the elevator group.
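
    To see what such a transformation looks like, consider a single capacity constraint with box uncertainty on the number of waiting passengers u_f at each calling floor f; the textbook Soyster-style robust counterpart below is illustrative, not the paper's exact model:

    ```latex
    \sum_f u_f\, x_f \le b,\quad u_f \in [\bar u_f - \delta_f,\; \bar u_f + \delta_f]
    \;\Longrightarrow\;
    \sum_f \bar u_f\, x_f + \sum_f \delta_f\, |x_f| \le b
    ```

    For nonnegative decision variables the absolute values drop out, leaving an ordinary deterministic constraint that holds for every realization of the passenger counts within the box.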

  11. NRC comprehensive records disposition schedule. Revision 3

    International Nuclear Information System (INIS)

    1998-02-01

    Title 44 US Code, "Public Printing and Documents," regulations issued by the General Services Administration (GSA) in 41 CFR Chapter 101, Subchapter B, "Management and Use of Information and Records," and regulations issued by the National Archives and Records Administration (NARA) in 36 CFR Chapter 12, Subchapter B, "Records Management," require each agency to prepare and issue a comprehensive records disposition schedule that contains the NARA-approved records disposition schedules for records unique to the agency and NARA's General Records Schedules for records common to several or all agencies. The approved records disposition schedules specify the appropriate duration of retention and the final disposition for records created or maintained by the NRC. NUREG-0910, Rev. 3, contains the NRC's Comprehensive Records Disposition Schedule and the original authorized approved citation numbers issued by NARA. Rev. 3 incorporates NARA-approved changes and additions to the NRC schedules that have been implemented since the last revision dated March 1992, reflects recent organizational changes implemented at the NRC, and includes the latest version of NARA's General Records Schedule (dated August 1995).

  13. Safety, Quality, Schedule: the motto of LS1

    CERN Multimedia

    2013-01-01

    The LHC’s first long shutdown, LS1, is a marathon that began on 16 February and will take us through to the beginning of 2015. Just as Olympic marathon runners have a motto, Citius, Altius, Fortius, so the athletes of LS1 work to the mantra of Safety, Quality, Schedule. Four months into LS1, they have settled into their rhythm, and things are going to plan.   The first task of LS1 was to bring the LHC up to room temperature - this was achieved in just 10 weeks. In parallel, preliminary tests for electrical quality assurance and leaks revealed essentially the level of wear and tear we’d expect after three years of running. One slightly anxious moment came when we looked at the RF fingers – the devices that ensure electrical contact in the beam pipes as they pass from one magnet to the next. Those of you with long memories will recall that before start-up, some of these got damaged at warm-up. The good news today is that with all eight sectors test...

  14. Cure Schedule for Stycast 2651/Catalyst 9.

    Energy Technology Data Exchange (ETDEWEB)

    Kropka, Jamie Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); McCoy, John D. [New Mexico Inst. of Mining and Technology, Socorro, NM (United States)

    2017-11-01

    The Emerson & Cuming technical data sheet (TDS) for Stycast 2651/Catalyst 9 lists three alternative cure schedules for the material, each of which results in a different state of reaction and different material properties. Here, a cure schedule that attains full reaction of the material is defined. Using this cure schedule eliminates variance in material properties due to changes in the cure state of the material, and it serves as the standard method for preparing material prior to property characterization. The following recommendation uses one of the schedules within the TDS and adds a “post cure” to obtain full reaction.

  15. Job shop scheduling problem with late work criterion

    Science.gov (United States)

    Piroozfard, Hamed; Wong, Kuan Yew

    2015-05-01

    Scheduling is considered a key task in many industries, covering areas such as project scheduling, crew scheduling, flight scheduling, and machine scheduling. In the machine scheduling area, job shop scheduling problems are important and highly complex, being characterized as NP-hard. This paper addresses job shop scheduling problems with a late work criterion and non-preemptive jobs. The late work criterion is a fairly new objective function: unlike classical objective functions, which measure when jobs complete, it is concerned only with the late parts of the jobs. In this work, a simulated annealing algorithm is presented to solve the scheduling problem. An operation-based representation is used to encode solutions, and a neighbourhood search structure is employed to generate new ones. The case studies are Lawrence instances taken from the Operations Research Library. Computational results of this probabilistic meta-heuristic algorithm are compared with those of a conventional genetic algorithm, and conclusions are drawn based on the algorithm and problem.
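
    For concreteness, the late work of a job j with processing time p_j, due date d_j and completion time C_j is the portion of its processing performed after the due date, a standard definition in this literature:

    ```latex
    Y_j \;=\; \min\bigl(p_j,\; \max(0,\, C_j - d_j)\bigr),
    \qquad \text{minimize } Y = \sum_j Y_j
    ```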

  16. Evaluation of Selected Resource Allocation and Scheduling Methods in Heterogeneous Many-Core Processors and Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ciznicki Milosz

    2014-12-01

    Heterogeneous many-core computing resources are increasingly popular among users due to their improved performance over homogeneous systems. Many developers have realized that heterogeneous systems, e.g. a combination of a shared-memory multi-core CPU machine with massively parallel Graphics Processing Units (GPUs), can provide significant performance opportunities to a wide range of applications. However, the best overall performance can only be achieved if application tasks are efficiently assigned to the different types of processor units over time, taking into account their specific resource requirements. Additionally, although available heterogeneous resources are designed as general-purpose units, many of their built-in features accelerate specific application operations. In other words, the same algorithm or application functionality can be implemented as a different task for a CPU or a GPU, and from the perspective of various evaluation criteria, e.g. total execution time or energy consumption, we may observe completely different results. Since tasks can be scheduled and managed in many alternative ways on both many-core CPUs and GPUs, with a huge impact on overall performance, new and improved resource management techniques are needed. In this paper we discuss results achieved during experimental performance studies of selected task scheduling methods in heterogeneous computing systems. Additionally, we present a new architecture for a resource allocation and task scheduling library which provides a generic application programming interface at the operating system level for improving scheduling policies, taking into account the diversity of tasks and the characteristics of heterogeneous computing resources.
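
    A minimal sketch of the kind of decision such a scheduler faces: the same task has different runtime estimates on a CPU core versus a GPU, and a greedy earliest-finish-time rule dispatches accordingly. The heuristic, names and numbers are invented for illustration; this is not the library described in the paper.

    ```python
    import heapq

    def schedule_heterogeneous(tasks, resources):
        """Greedy earliest-finish-time dispatch onto heterogeneous units.

        tasks:     list of dicts mapping unit kind to estimated runtime,
                   e.g. {"cpu": 4.0, "gpu": 1.5}
        resources: list of (kind, count) pairs of available units
        """
        free = []  # (time the unit becomes free, unit id, kind)
        uid = 0
        for kind, count in resources:
            for _ in range(count):
                heapq.heappush(free, (0.0, uid, kind))
                uid += 1
        plan = []
        for i, est in enumerate(tasks):
            # pick the unit giving the earliest finish time for this task
            best = min(free, key=lambda u: u[0] + est.get(u[2], float("inf")))
            free.remove(best)
            heapq.heapify(free)
            start, unit, kind = best
            finish = start + est[kind]
            heapq.heappush(free, (finish, unit, kind))
            plan.append((i, kind, unit, start, finish))
        return plan

    plan = schedule_heterogeneous(
        [{"cpu": 4.0, "gpu": 1.5}, {"cpu": 2.0, "gpu": 3.0}, {"cpu": 6.0, "gpu": 1.0}],
        [("cpu", 2), ("gpu", 1)],
    )
    ```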

  17. Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419

    Energy Technology Data Exchange (ETDEWEB)

    Hulett, David T. [Hulett and Associates, LLC (United States); Nosbisch, Michael R. [Project Time and Cost, Inc. (United States)

    2012-07-01

    Good-quality risk data are usually collected in risk interviews of the project team, management and others knowledgeable in the risks of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method, which rests on the fundamental principle that identifiable risks drive overall cost and schedule risk. A Monte Carlo simulation program is then used to simulate schedule risk, burn-rate risk and time-independent resource risk. The results include the standard histograms and cumulative distributions of possible cost and time outcomes for the project. Moreover, by simulating cost and time simultaneously, the cost-time pairs of results can be collected, yielding the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget, as well as the probabilistic cash flow for comparison with the time-phased project budget. Finally, the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus risk mitigation efforts. If the cost and schedule estimates including contingency reserves are not acceptable to the project stakeholders, the project team should conduct risk mitigation workshops and studies, decide which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives. It is recommended that the contingency reserves of cost and time, calculated at a level that represents an acceptable degree of certainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule; the project contingency reserves of time and cost that are the main results of this analysis apply only if that plan is followed.
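
    A toy version of the simulation step: an invented four-activity CPM network with triangular duration distributions, from which a P-80 completion estimate falls out of the run results. Real risk-driver tools also model risk correlation, burn rates and cost, which this sketch omits.

    ```python
    import random

    # Invented toy network: activity -> (predecessors, (low, high, mode) days)
    ACTS = {
        "design":  ([],                   (10, 20, 12)),
        "procure": (["design"],           (5, 15, 8)),
        "build":   (["design"],           (20, 40, 25)),
        "test":    (["procure", "build"], (5, 10, 6)),
    }
    ORDER = ["design", "procure", "build", "test"]  # topological order

    def simulate_once():
        finish = {}
        for act in ORDER:
            preds, tri = ACTS[act]
            start = max((finish[p] for p in preds), default=0.0)
            finish[act] = start + random.triangular(*tri)
        return finish["test"]

    runs = sorted(simulate_once() for _ in range(10000))
    p80 = runs[int(0.8 * len(runs))]
    print(f"P-80 completion: {p80:.1f} days")
    ```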

  18. MEDICAL STAFF SCHEDULING USING SIMULATED ANNEALING

    Directory of Open Access Journals (Sweden)

    Ladislav Rosocha

    2015-07-01

    Purpose: The efficiency of medical staff is a fundamental feature of healthcare facility quality, so better implementation of their preferences in the scheduling problem may not only raise the work-life balance of doctors and nurses but also result in better patient care. This paper focuses on optimization of medical staff preferences in the scheduling problem. Methodology/Approach: We propose a medical staff scheduling algorithm based on simulated annealing, a well-known method from statistical thermodynamics. We define hard constraints, which are linked to legal and working regulations, and minimize the violations of soft constraints, which are related to the quality of work, psyche, and work-life balance of staff. Findings: On a sample of 60 physicians and nurses from a gynecology department we generated monthly schedules and optimized their preferences in terms of soft constraints. Our results indicate that the final value of the objective function optimized by the proposed algorithm has more than 18 times fewer soft-constraint violations than an initially generated random schedule that satisfied the hard constraints. Research Limitation/Implication: Even though the global optimality of the final outcome is not guaranteed, a desirable solution was obtained in reasonable time. Originality/Value of paper: We show that the designed algorithm is able to successfully generate schedules with respect to hard and soft constraints. Moreover, the presented method is significantly faster than standard schedule generation and is able to reschedule effectively thanks to the local neighbourhood search characteristics of simulated annealing.
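
    The core accept/reject loop the paper builds on can be sketched generically: hard feasibility is preserved by the neighbourhood move, and only soft-constraint violations enter the objective. All parameter values below are illustrative, not the paper's.

    ```python
    import math
    import random

    def anneal(initial, soft_cost, neighbor, t0=10.0, cooling=0.995, steps=20000):
        """Generic simulated annealing over hard-feasible schedules.

        initial   : a schedule satisfying all hard constraints
        soft_cost : weighted count of soft-constraint violations
        neighbor  : returns a hard-feasible neighbor of a schedule
        """
        current = best = initial
        t = t0
        for _ in range(steps):
            cand = neighbor(current)
            delta = soft_cost(cand) - soft_cost(current)
            # always accept improvements; accept worsenings with Boltzmann probability
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = cand
            if soft_cost(current) < soft_cost(best):
                best = current
            t *= cooling  # geometric cooling schedule
        return best
    ```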

  19. Practical principles in appointment scheduling

    NARCIS (Netherlands)

    Kuiper, A.; Mandjes, M.

    2015-01-01

    Appointment schedules aim at achieving a proper balance between the conflicting interests of the service provider and her clients: a primary objective of the service provider is to fully utilize her available time, whereas clients want to avoid excessive waiting times. Setting up schedules that ...
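
    The trade-off can be made concrete with a Lindley-style recursion: for clients who arrive punctually at their slots, it yields each client's waiting time and the provider's total idle time under a candidate schedule (a minimal sketch with invented numbers):

    ```python
    def waits_and_idle(appointments, service_times):
        """Waiting times and provider idle time for punctual clients."""
        waits, idle, free_at = [], 0.0, 0.0
        for arrival, service in zip(appointments, service_times):
            waits.append(max(0.0, free_at - arrival))   # client waits if provider busy
            idle += max(0.0, arrival - free_at)         # provider idles if client not due
            free_at = max(free_at, arrival) + service
        return waits, idle

    waits, idle = waits_and_idle([0, 10, 20, 30], [12, 8, 15, 9])
    print(waits, idle)  # [0.0, 2.0, 0.0, 5.0] 0.0
    ```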

  20. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high-performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.