WorldWideScience

Sample records for beowulf parallel workstation

  1. Implementing parallel elliptic solver on a Beowulf cluster

    Directory of Open Access Journals (Sweden)

    Marcin Paprzycki

    1999-12-01

    Full Text Available In a recent paper [zara], a parallel direct solver for the linear systems arising from elliptic partial differential equations was proposed. The aim of this note is to present an initial evaluation of the performance characteristics of this algorithm on a Beowulf-type cluster. In this context the performance of PVM- and MPI-based implementations is compared.

  2. PVFS 2000: An operational parallel file system for Beowulf

    Science.gov (United States)

    Ligon, Walt

    2004-01-01

    The approach has been to develop Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code. The report describes the architecture of the server and client components. BMI - BMI is the network abstraction layer. It is designed with a common driver and modules for each protocol supported. The interface is non-blocking, and provides mechanisms for optimizations including pinning user buffers. Currently TCP/IP and GM (Myrinet) modules have been implemented. Trove - Trove is the storage abstraction layer. It provides for storing both data spaces and name/value pairs. Trove can also be implemented using different underlying storage mechanisms including native files, raw disk partitions, SQL and other databases. The current implementation uses native files for data spaces and Berkeley DB for name/value pairs.
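
    The layering described above lends itself to a table-of-function-pointers design. The sketch below is illustrative only: the struct and function names are hypothetical and are not the actual PVFS2/Trove API; it merely shows how a storage abstraction layer can dispatch name/value operations to interchangeable backends (native files, a database, or, as here, a toy in-memory store).

      /* Hypothetical sketch, not the PVFS2 API: a storage layer dispatching
       * through a table of function pointers, in the spirit of Trove's
       * pluggable backends. */
      #include <stdio.h>
      #include <string.h>

      struct storage_ops {
          const char *name;
          int (*put)(const char *key, const char *val);      /* store a name/value pair */
          int (*get)(const char *key, char *val, size_t n);   /* fetch a name/value pair */
      };

      /* Trivial in-memory backend standing in for "native files" or "Berkeley DB". */
      static char mem_key[64], mem_val[64];
      static int mem_put(const char *k, const char *v) {
          snprintf(mem_key, sizeof mem_key, "%s", k);
          snprintf(mem_val, sizeof mem_val, "%s", v);
          return 0;
      }
      static int mem_get(const char *k, char *v, size_t n) {
          if (strcmp(k, mem_key) != 0) return -1;
          snprintf(v, n, "%s", mem_val);
          return 0;
      }

      int main(void) {
          struct storage_ops backend = { "memory", mem_put, mem_get };
          char out[64];
          backend.put("handle-42/owner", "pvfs2");
          if (backend.get("handle-42/owner", out, sizeof out) == 0)
              printf("backend '%s' returned: %s\n", backend.name, out);
          return 0;
      }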

  3. Efficient Parallel Engineering Computing on Linux Workstations

    Science.gov (United States)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).
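
    The abstract does not show the module's interface, so the following is only an analogue of the idea using POSIX threads: lightweight workers are spawned dynamically, each takes an interleaved share of a loop, and the caller joins them and combines the partial results.

      /* Illustrative analogue only, not the NASA module's API: spawning
       * lightweight workers with POSIX threads to split a loop across the
       * CPUs of a workstation. */
      #include <pthread.h>
      #include <stdio.h>

      #define NWORKERS 4
      #define N 1000000

      static double partial[NWORKERS];

      static void *worker(void *arg) {
          int id = (int)(long)arg;
          double sum = 0.0;
          for (int i = id; i < N; i += NWORKERS)   /* interleaved work assignment */
              sum += 1.0 / (i + 1.0);
          partial[id] = sum;
          return NULL;
      }

      int main(void) {
          pthread_t tid[NWORKERS];
          for (long id = 0; id < NWORKERS; id++)
              pthread_create(&tid[id], NULL, worker, (void *)id);
          double total = 0.0;
          for (int id = 0; id < NWORKERS; id++) {
              pthread_join(tid[id], NULL);
              total += partial[id];
          }
          printf("harmonic partial sum = %f\n", total);
          return 0;
      }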

  4. Parallel Computation of Unsteady Flows on a Network of Workstations

    Science.gov (United States)

    1997-01-01

    Parallel computation of unsteady flows requires significant computational resources. The utilization of a network of workstations seems an efficient solution, allowing large problems to be treated at a reasonable cost. This approach requires the solution of several problems: 1) the partitioning and distribution of the problem over a network of workstations, 2) efficient communication tools, 3) managing the system efficiently for a given problem. Of course, there is also the question of the efficiency of any given numerical algorithm on such a computing system. The NPARC code was chosen as a sample application. For the explicit version of the NPARC code both two- and three-dimensional problems were studied, and both steady and unsteady problems were investigated. The issues studied as part of the research program were: 1) how to distribute the data between the workstations, 2) how to compute and how to communicate at each node efficiently, 3) how to balance the load distribution. In the following, a summary of these activities is presented. Details of the work have been presented and published as referenced.
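
    A minimal sketch of the data-distribution and communication pattern such an approach typically relies on (a generic illustration, not the NPARC implementation): each workstation owns a slab of the flow field and exchanges one layer of ghost cells with its neighbours every time step.

      /* Generic 1-D domain decomposition sketch: exchange ghost layers with the
       * left and right neighbours; MPI_Sendrecv keeps the exchange deadlock-free. */
      #include <mpi.h>
      #include <stdio.h>

      #define NLOCAL 64                        /* interior cells owned by this rank */

      int main(int argc, char **argv) {
          int rank, size;
          double u[NLOCAL + 2];                /* +2 ghost cells */
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
          int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;
          for (int i = 0; i < NLOCAL + 2; i++) u[i] = rank;

          /* send first interior cell left, receive right ghost from the right */
          MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                       &u[NLOCAL + 1], 1, MPI_DOUBLE, right, 0,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          /* send last interior cell right, receive left ghost from the left */
          MPI_Sendrecv(&u[NLOCAL], 1, MPI_DOUBLE, right, 1,
                       &u[0], 1, MPI_DOUBLE, left, 1,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);

          printf("rank %d ghosts: %.0f %.0f\n", rank, u[0], u[NLOCAL + 1]);
          MPI_Finalize();
          return 0;
      }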

  5. CTEx Beowulf cluster for MCNP performance

    International Nuclear Information System (INIS)

    Gonzaga, Roberto N.; Amorim, Aneuri S. de; Balthar, Mario Cesar V.

    2011-01-01

    This work is an introduction to the CTEx Nuclear Defense Department's Beowulf cluster. Building a Beowulf cluster is a complex learning process that greatly depends upon hardware and software requirements. The feasibility and efficiency of performing MCNP5 calculations with a small, heterogeneous computing cluster built from personal computers (PCs) running Red Hat's Fedora Linux operating system are explored. The performance increases that may be expected with such clusters are estimated for cases that typify general radiation transport calculations. Our results show that the speed increase from additional slave PCs is nearly linear up to 10 processors. The precompiled parallel binary version of MCNP uses the Message-Passing Interface (MPI) protocol. The use of this precompiled parallel version of MCNP5 with the MPI protocol on a small, heterogeneous computing cluster built from Red Hat's Fedora Linux PCs is the subject of this work. (author)
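
    The following is a generic sketch of the pattern MPI enables on such a cluster, not MCNP's internals: every rank tracks an independent batch of particle histories with its own random stream, and the tallies are combined with MPI_Reduce, which is why the speedup stays close to linear while communication remains small.

      /* Generic Monte Carlo tally pattern, not MCNP code: independent histories
       * per rank, one reduction at the end. */
      #include <mpi.h>
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char **argv) {
          int rank, size;
          long histories_per_rank = 100000, absorbed = 0, total = 0;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          srand(12345 + rank);                 /* independent random stream per rank */
          for (long i = 0; i < histories_per_rank; i++)
              if (rand() / (double)RAND_MAX < 0.3)   /* toy "absorption" event */
                  absorbed++;

          MPI_Reduce(&absorbed, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
          if (rank == 0)
              printf("absorbed %ld of %ld histories\n",
                     total, histories_per_rank * size);
          MPI_Finalize();
          return 0;
      }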

  6. The Roots of Beowulf

    Science.gov (United States)

    Fischer, James R.

    2014-01-01

    The first Beowulf Linux commodity cluster was constructed at NASA's Goddard Space Flight Center in 1994 and its origins are a part of the folklore of high-end computing. In fact, the conditions within Goddard that brought the idea into being were shaped by rich historical roots, strategic pressures brought on by the ramp up of the Federal High-Performance Computing and Communications Program, growth of the open software movement, microprocessor performance trends, and the vision of key technologists. This multifaceted story is told here for the first time from the point of view of NASA project management.

  7. Implementation of a cluster Beowulf

    International Nuclear Information System (INIS)

    Victorino Guzman, Jorge Enrique

    2001-01-01

    Climate models are among the simulation systems that place the greatest stress on computational resources and performance, and their high cost of implementation makes them difficult to acquire. An alternative that offers good performance at a reasonable cost is the construction of a Beowulf cluster, which emulates the behaviour of a computer with several processors. In the present article we discuss the hardware requirements for building the Beowulf cluster, the software resources for implementing the CCM3.6 model, and the performance of the Beowulf cluster of the Meteorology Research Group at the National University of Colombia with different numbers of processors

  8. A parallel solution to the cutting stock problem for a cluster of workstations

    Energy Technology Data Exchange (ETDEWEB)

    Nicklas, L.D.; Atkins, R.W.; Setia, S.V.; Wang, P.Y. [George Mason Univ., Fairfax, VA (United States)

    1996-12-31

    This paper describes the design and implementation of a solution to the constrained 2-D cutting stock problem on a cluster of workstations. The constrained 2-D cutting stock problem is an irregular problem with a dynamically modified global data set and irregular amounts and patterns of communication. A replicated data structure is used for the parallel solution since the ratio of reads to writes is known to be large. Mutual exclusion and consistency are maintained using a token-based lazy consistency mechanism, and a randomized protocol for dynamically balancing the distributed work queue is employed. Speedups are reported for three benchmark problems executed on a cluster of workstations interconnected by a 10 Mbps Ethernet.
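
    A hedged sketch of the general idea of token-based mutual exclusion (not the paper's lazy-consistency protocol): a token circulates around the ranks, the holder may update the replicated value, and the update travels with the token so every copy stays consistent.

      /* Token-ring mutual exclusion sketch: the token carries the latest value
       * of a replicated counter standing in for the shared cut-pattern data. */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          int rank, size, shared_counter = 0;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          if (size == 1) {
              shared_counter++;
          } else if (rank == 0) {
              shared_counter++;                  /* rank 0 holds the token first */
              MPI_Send(&shared_counter, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
              MPI_Recv(&shared_counter, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);       /* token returns after one lap */
          } else {
              MPI_Recv(&shared_counter, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);       /* token carries the latest value */
              shared_counter++;
              MPI_Send(&shared_counter, 1, MPI_INT, (rank + 1) % size, 0,
                       MPI_COMM_WORLD);
          }
          if (rank == 0)
              printf("counter after one circulation: %d (one update per rank)\n",
                     shared_counter);
          MPI_Finalize();
          return 0;
      }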

  9. Multi-objective optimization algorithms for mixed model assembly line balancing problem with parallel workstations

    Directory of Open Access Journals (Sweden)

    Masoud Rabbani

    2016-12-01

    Full Text Available This paper deals with the mixed model assembly line (MMAL) balancing problem of type-I. In an MMAL several products are made on one assembly line, and the similarity of these products is high enough that several product types can be assembled simultaneously without any additional setup times. The problem has some particular features such as parallel workstations and precedence constraints in dynamic periods, in which each period also affects the next period. The research intends to minimize the number of workstations and maximize the workload smoothness between workstations. Dynamic periods are used to determine all variables in different periods to achieve efficient solutions. A non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO) are used to solve the problem. The proposed model is validated with GAMS software for small size problems and the performance of the foregoing algorithms is compared with each other based on some comparison metrics. The NSGA-II outperforms MOPSO with respect to some of the comparison metrics used in this paper, but in other metrics MOPSO is better than NSGA-II. Finally, conclusions and future research are provided.
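
    Both NSGA-II and MOPSO rest on the same Pareto-dominance test; the small sketch below shows that test for two minimization objectives (number of workstations and a workload smoothness index). The numbers are made up for illustration.

      /* Pareto-dominance check for two minimization objectives. */
      #include <stdio.h>

      struct solution { double n_stations, smoothness; };

      /* returns 1 if a dominates b: no worse in both objectives, better in at least one */
      static int dominates(struct solution a, struct solution b) {
          int no_worse = a.n_stations <= b.n_stations && a.smoothness <= b.smoothness;
          int better   = a.n_stations <  b.n_stations || a.smoothness <  b.smoothness;
          return no_worse && better;
      }

      int main(void) {
          struct solution pop[3] = { {8, 2.1}, {9, 1.4}, {9, 2.5} };
          for (int i = 0; i < 3; i++)
              for (int j = 0; j < 3; j++)
                  if (i != j && dominates(pop[i], pop[j]))
                      printf("solution %d dominates solution %d\n", i, j);
          return 0;
      }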

  10. Processing large remote sensing image data sets on Beowulf clusters

    Science.gov (United States)

    Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Schmidt, Gail

    2003-01-01

    High-performance computing is often concerned with the speed at which floating-point calculations can be performed. The architectures of many parallel computers and/or their network topologies are based on these investigations. Often, benchmarks resulting from these investigations are compiled with little regard to how a large dataset would move about in these systems. This part of the Beowulf study addresses that concern by looking at specific applications software and system-level modifications. Applications include an implementation of a smoothing filter for time-series data, a parallel implementation of the decision tree algorithm used in the Landcover Characterization project, a parallel Kriging algorithm used to fit point data collected in the field on invasive species to a regular grid, and modifications to the Beowulf project's resampling algorithm to handle larger, higher resolution datasets at a national scale. Systems-level investigations include a feasibility study on Flat Neighborhood Networks and modifications of that concept with Parallel File Systems.
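
    As an illustration of the first application (the study's actual filter is not specified beyond "smoothing filter for time-series data"), the sketch below applies a 3-point moving average to one pixel's time series; because each pixel's series is independent, this kind of kernel distributes naturally across cluster nodes.

      /* 3-point moving average over one pixel's time series. */
      #include <stdio.h>

      #define NT 10   /* number of time steps, e.g. composite periods */

      int main(void) {
          double series[NT] = {5, 7, 6, 40, 6, 5, 7, 6, 5, 6};  /* 40 = noisy spike */
          double smoothed[NT];
          smoothed[0] = series[0];
          smoothed[NT - 1] = series[NT - 1];
          for (int t = 1; t < NT - 1; t++)
              smoothed[t] = (series[t - 1] + series[t] + series[t + 1]) / 3.0;
          for (int t = 0; t < NT; t++)
              printf("%2d: raw %5.1f  smoothed %5.1f\n", t, series[t], smoothed[t]);
          return 0;
      }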

  11. DSN Beowulf Cluster-Based VLBI Correlator

    Science.gov (United States)

    Rogstad, Stephen P.; Jongeling, Andre P.; Finley, Susan G.; White, Leslie A.; Lanyi, Gabor E.; Clark, John E.; Goodhart, Charles E.

    2009-01-01

    The NASA Deep Space Network (DSN) requires a broadband VLBI (very long baseline interferometry) correlator to process data routinely taken as part of the VLBI source Catalogue Maintenance and Enhancement task (CAT M&E) and the Time and Earth Motion Precision Observations task (TEMPO). The data provided by these measurements are a crucial ingredient in the formation of precision deep-space navigation models. In addition, a VLBI correlator is needed to provide support for other VLBI related activities for both internal and external customers. The JPL VLBI Correlator (JVC) was designed, developed, and delivered to the DSN as a successor to the legacy Block II Correlator. The JVC is a full-capability VLBI correlator that uses software processes running on multiple computers to cross-correlate two-antenna broadband noise data. Components of this new system (see Figure 1) consist of Linux PCs integrated into a Beowulf cluster, an existing Mark5 data storage system, a RAID array, an existing software correlator package (SoftC) originally developed for Delta DOR Navigation processing, and various custom-developed software processes and scripts. Parallel processing on the JVC is achieved by assigning slave nodes of the Beowulf cluster to process separate scans in parallel until all scans have been processed. Due to the single-stream sequential playback of the Mark5 data, some ramp-up time is required before all nodes can have access to the required scan data. Core functions of each processing step are accomplished using optimized C programs. The coordination and execution of these programs across the cluster is accomplished using Perl scripts, PostgreSQL commands, and a handful of miscellaneous system utilities. Mark5 data modules are loaded on Mark5 data system playback units, one per station. Data processing is started when the operator scans the Mark5 systems and runs a script that reads various configuration files and then creates an experiment-dependent status database

  12. "Beowulf" : Hollywood seikleb pimedas keskajas / Riho Laurisaar

    Index Scriptorium Estoniae

    Laurisaar, Riho

    2007-01-01

    On the Anglo-Saxon epic "Beowulf", on the basis of which the US adventure film was made, a large part of which was created with the help of computer graphics (screenwriter Neil Gaiman, director Robert Zemeckis, starring Anthony Hopkins, Angelina Jolie, Ray Winstone)

  13. Climate Ocean Modeling on a Beowulf Class System

    Science.gov (United States)

    Cheng, B. N.; Chao, Y.; Wang, P.; Bondarenko, M.

    2000-01-01

    With the growing power and shrinking cost of personal computers, the availability of fast Ethernet interconnections, and public domain software packages, it is now possible to combine them to build desktop parallel computers (named Beowulf or PC clusters) at a fraction of what it would cost to buy systems of comparable power from supercomputer companies. This led us to build and assemble our own system, specifically for climate ocean modeling. In this article, we present our experience with such a system, discuss its network performance, and provide some performance comparison data with both the HP SPP2000 and Cray T3E for an ocean model used in present-day oceanographic research.

  14. An efficient, interactive, and parallel system for biomedical volume analysis on a standard workstation

    International Nuclear Information System (INIS)

    Rebuffel, V.; Gonon, G.

    1992-01-01

    A software package is presented that can be employed for any 3D imaging modality: X-ray tomography, emission tomography, magnetic resonance imaging. This system uses a hierarchical data structure, named Octree, that naturally allows a multi-resolution approach. The well-known problems of such an indeterministic representation, especially neighbor finding, have been solved. Several algorithms for volume processing have been developed, using these techniques and an optimal data storage for the Octree. A parallel implementation was chosen that is compatible with the constraints of the Octree base and the various algorithms. (authors) 4 refs., 3 figs., 1 tab
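
    A schematic sketch of the kind of structure involved (not the paper's data structure): an octree node whose eight children are indexed by the (x, y, z) bit pattern of the sub-octant, which is the property that neighbor-finding algorithms exploit.

      /* Minimal octree node with child indexing by sub-octant coordinates. */
      #include <stdio.h>

      struct octree_node {
          unsigned char value;               /* e.g. mean intensity of this block */
          struct octree_node *child[8];      /* NULL children = homogeneous block */
      };

      /* child index from sub-octant coordinates: bit 0 = x, bit 1 = y, bit 2 = z */
      static int child_index(int x, int y, int z) { return (z << 2) | (y << 1) | x; }

      int main(void) {
          struct octree_node root  = { 0,   { NULL } };
          struct octree_node upper = { 200, { NULL } };
          root.child[child_index(1, 0, 1)] = &upper;   /* refine one sub-octant */
          printf("octant (1,0,1) -> slot %d, value %d\n",
                 child_index(1, 0, 1), root.child[child_index(1, 0, 1)]->value);
          return 0;
      }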

  15. Achieving high performance in numerical computations on RISC workstations and parallel systems

    Energy Technology Data Exchange (ETDEWEB)

    Goedecker, S. [Max-Planck Inst. for Solid State Research, Stuttgart (Germany); Hoisie, A. [Los Alamos National Lab., NM (United States)

    1997-08-20

    The nominal peak speeds of both serial and parallel computers are rising rapidly. At the same time, however, it is becoming increasingly difficult to extract a significant fraction of this high peak speed from modern computer architectures. In this tutorial the authors give scientists and engineers involved in numerically demanding calculations and simulations the necessary basic knowledge to write reasonably efficient programs. The basic principles are rather simple and the possible rewards large. Writing a program by taking into account optimization techniques related to the computer architecture can significantly speed up a program, often by factors of 10-100. As such, optimizing a program can, for instance, be a much better solution than buying a faster computer. If a few basic optimization principles are applied during program development, the additional time needed for obtaining an efficient program is practically negligible. In-depth optimization is usually only needed for a few subroutines or kernels and the effort involved is therefore also acceptable.
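
    A toy illustration of one technique of the kind such a tutorial covers: cache blocking of a matrix multiply. The block size B is an assumed, tunable value, not one taken from the paper.

      /* Cache-blocked matrix multiply: iterate over B x B blocks so the working
       * set stays cache-resident. */
      #include <stdio.h>

      #define N 256
      #define B 32                                   /* block size tuned to the cache */

      static double a[N][N], b[N][N], c[N][N];

      int main(void) {
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++) { a[i][j] = 1.0; b[i][j] = 2.0; }

          for (int ii = 0; ii < N; ii += B)
              for (int kk = 0; kk < N; kk += B)
                  for (int jj = 0; jj < N; jj += B)
                      for (int i = ii; i < ii + B; i++)
                          for (int k = kk; k < kk + B; k++)
                              for (int j = jj; j < jj + B; j++)
                                  c[i][j] += a[i][k] * b[k][j];

          printf("c[0][0] = %f\n", c[0][0]);         /* expect 2*N = 512 */
          return 0;
      }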

  16. Efficient heterogeneous execution of Monte Carlo shielding calculations on a Beowulf cluster

    International Nuclear Information System (INIS)

    Dewar, D.; Hulse, P.; Cooper, A.; Smith, N.

    2005-01-01

    Recent work has been done in using a high-performance 'Beowulf' cluster computer system for the efficient distribution of Monte Carlo shielding calculations. This has enabled the rapid solution of complex shielding problems at low cost and with greater modularity and scalability than traditional platforms. The work has shown that a simple approach to distributing the workload is as efficient as using more traditional techniques such as PVM (Parallel Virtual Machine). In addition, when used in an operational setting this technique is fairer with the use of resources than traditional methods, in that it does not tie up a single computing resource but instead shares the capacity with other tasks. These developments in computing technology have enabled shielding problems to be solved that would have taken an unacceptably long time to run on traditional platforms. This paper discusses the BNFL Beowulf cluster and a number of tests that have recently been run to demonstrate the efficiency of the asynchronous technique in running the MCBEND program. The BNFL Beowulf currently consists of 84 standard PCs running Red Hat Linux. Current performance of the machine has been estimated to be between 40 and 100 Gflop/s. When the whole system is employed on one problem up to four million particles can be tracked per second. There are plans to review its size in line with future business needs. (authors)

  17. The Beowulf manuscript reconsidered: Reading Beowulf in late Anglo-Saxon England

    Directory of Open Access Journals (Sweden)

    L. Viljoen

    2003-08-01

    Full Text Available This article defines a hypothetical late Anglo-Saxon audience: a multi-layered Christian community with competing ideologies, dialects and mythologies. It discusses how that audience might have received the Anglo-Saxon poem Beowulf. The immediate textual context of the poem constitutes an intertextual microcosm for Beowulf. The five texts in the codex provide interesting clues to the common concerns, conflicts and interests of its audience. The organizing principle for the grouping of this disparate mixture of Christian and secular texts with Beowulf was not a sense of canonicity or the collating of monuments with an aesthetic autonomy from cultural conditions or social production. They were part of the so-called “popular culture” and provide one key to the “meanings” that interested the late Anglo-Saxon audience, who would delight in the poet's alliteration, rhythms, word-play, irony and understatement, descriptions, aphorisms and evocation of loss and transience. The poem provided cultural, historical and spiritual data and evoked a debate about pertinent moral issues. The monsters, for instance, are symbolic of problems of identity construction and establish a polarity between “us” and the “Other”, but at the same time question such binary thinking. Finally, the poem works towards an audience identity whose values emerge from the struggle within the poem and therefore also encompass the monstrous, the potentially disruptive, the darkness within, that which the poem attempts to repress.

  18. VMware workstation

    CERN Document Server

    van Vugt, Sander

    2013-01-01

    This book is a practical, step-by-step guide to creating and managing virtual machines using VMware Workstation. VMware Workstation: No Experience Necessary is for developers as well as system administrators who want to efficiently set up a test environment. You should have basic networking knowledge, and prior experience with virtual machines and VMware Player would be beneficial.

  19. Study on High Performance of MPI-Based Parallel FDTD from WorkStation to Super Computer Platform

    Directory of Open Access Journals (Sweden)

    Z. L. He

    2012-01-01

    Full Text Available The parallel FDTD method is applied to analyze the electromagnetic problems of electrically large targets on a supercomputer. It is well known that the more processors are used, the less computing time is consumed. Nevertheless, with the same number of processors, computing efficiency is affected by the scheme of the MPI virtual topology. Therefore, the influence of different virtual topology schemes on the parallel performance of parallel FDTD is studied in detail. General rules are presented on how to obtain the highest efficiency of the parallel FDTD algorithm by optimizing the MPI virtual topology. To show the validity of the presented method, several numerical results are given in the later part. Various comparisons are made and some useful conclusions are summarized.
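
    A minimal sketch of the mechanism being tuned: creating an MPI Cartesian virtual topology. Different dims[] choices (for example 1x8 versus 2x4 with eight ranks) are exactly the topology schemes whose efficiency the paper compares.

      /* Create a 2-D Cartesian virtual topology and report each rank's coordinates. */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          int size, cart_rank, dims[2] = {0, 0}, periods[2] = {0, 0}, coords[2];
          MPI_Comm cart;
          MPI_Init(&argc, &argv);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          MPI_Dims_create(size, 2, dims);       /* let MPI pick a 2-D factorization */
          MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);
          MPI_Comm_rank(cart, &cart_rank);      /* ranks may be reordered in the new topology */
          MPI_Cart_coords(cart, cart_rank, 2, coords);

          if (cart_rank == 0)
              printf("virtual topology: %d x %d\n", dims[0], dims[1]);
          printf("rank %d -> cart coords (%d, %d)\n", cart_rank, coords[0], coords[1]);

          MPI_Comm_free(&cart);
          MPI_Finalize();
          return 0;
      }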

  20. Accelerated 3D-OSEM image reconstruction using a Beowulf PC cluster for pinhole SPECT

    International Nuclear Information System (INIS)

    Zeniya, Tsutomu; Watabe, Hiroshi; Sohlberg, Antti; Iida, Hidehiro

    2007-01-01

    A conventional pinhole single-photon emission computed tomography (SPECT) with a single circular orbit has limitations associated with non-uniform spatial resolution or axial blurring. Recently, we demonstrated that three-dimensional (3D) images with uniform spatial resolution and no blurring can be obtained from complete data acquired using two circular orbits, combined with the 3D ordered subsets expectation maximization (OSEM) reconstruction method. However, a long computation time is required to obtain the reconstruction image, because 3D-OSEM is an iterative method and two-orbit acquisition doubles the size of the projection data. To reduce the long reconstruction time, we parallelized the two-orbit pinhole 3D-OSEM reconstruction process by using a Beowulf personal computer (PC) cluster. The Beowulf PC cluster consists of seven PCs connected to Gbit Ethernet switches. The message passing interface protocol was utilized for parallelizing the reconstruction process. The projection data in a subset are distributed to each PC. The partial image forward- and back-projected in each PC is transferred to all PCs. The current image estimate on each PC is updated after summing the partial images. The performance of parallelization on the PC cluster was evaluated using two independent projection data sets acquired by a pinhole SPECT system with two different circular orbits. Parallelization using the PC cluster improved the reconstruction time with increasing number of PCs. The reconstruction time of 54 min on a single PC was decreased to 10 min when six or seven PCs were used. The speed-up factor was 5.4. The reconstruction image obtained by the PC cluster was virtually identical with that obtained by the single PC. Parallelization of 3D-OSEM reconstruction for pinhole SPECT using the PC cluster can significantly reduce the computation time, while its implementation is simple and inexpensive. (author)
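
    A schematic sketch of the parallelization pattern described, not the actual OSEM code: each PC back-projects its share of the projection data into a partial image, and MPI_Allreduce sums the partial images so every node holds the updated estimate for the next subset.

      /* Sum per-node partial images with MPI_Allreduce. */
      #include <mpi.h>
      #include <stdio.h>

      #define NVOX 64                            /* toy image size */

      int main(int argc, char **argv) {
          int rank, size;
          double partial[NVOX], summed[NVOX];
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          /* stand-in for back-projecting this rank's share of the projections */
          for (int v = 0; v < NVOX; v++)
              partial[v] = 1.0;

          /* combine partial images so every node holds the updated estimate */
          MPI_Allreduce(partial, summed, NVOX, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

          if (rank == 0)
              printf("summed[0] = %.1f from %d ranks\n", summed[0], size);
          MPI_Finalize();
          return 0;
      }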

  1. Analysis of Parallel Algorithms on SMP Node and Cluster of Workstations Using Parallel Programming Models with New Tile-based Method for Large Biological Datasets

    Science.gov (United States)

    Shrimankar, D. D.; Sathe, S. R.

    2016-01-01

    Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes. Programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. However, OpenMP programs cannot be scaled beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes, although such a programming paradigm has an overhead of internode communication. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that significant communication overhead is incurred even in OpenMP loop execution and that it increases with the number of cores participating. We also present a communication model to approximate the overhead from communication in OpenMP loops. Our results are striking and hold for a large variety of input data files. We have developed our own load balancing and cache optimization techniques for the message passing model. Our experimental results show that these techniques give optimum performance of our parallel algorithm for various input parameters, such as sequence size and tile size, on a wide variety of multicore architectures. PMID:27932868
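
    A compact sketch of why the alignment matrix parallelizes at all (this is not the authors' tiled code): cells on the same anti-diagonal are mutually independent, so each anti-diagonal can be filled with an OpenMP parallel loop. Sequences and scores are toy values.

      /* Anti-diagonal (wavefront) fill of a global alignment score matrix. */
      #include <omp.h>
      #include <stdio.h>
      #include <string.h>

      #define MAX(a, b) ((a) > (b) ? (a) : (b))

      int main(void) {
          const char *s1 = "GATTACA", *s2 = "GCATGCA";
          int n = strlen(s1), m = strlen(s2);
          static int H[64][64];                  /* H[i][j]: best score of the prefixes */
          const int match = 2, mismatch = -1, gap = -2;

          for (int i = 0; i <= n; i++) H[i][0] = i * gap;
          for (int j = 0; j <= m; j++) H[0][j] = j * gap;

          for (int d = 2; d <= n + m; d++) {     /* sweep anti-diagonals i + j = d */
              int ilo = d - m < 1 ? 1 : d - m;
              int ihi = d - 1 < n ? d - 1 : n;
              #pragma omp parallel for
              for (int i = ilo; i <= ihi; i++) {
                  int j = d - i;
                  int sub = H[i-1][j-1] + (s1[i-1] == s2[j-1] ? match : mismatch);
                  H[i][j] = MAX(sub, MAX(H[i-1][j] + gap, H[i][j-1] + gap));
              }
          }
          printf("global alignment score: %d\n", H[n][m]);
          return 0;
      }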

  2. Analysis of Parallel Algorithms on SMP Node and Cluster of Workstations Using Parallel Programming Models with New Tile-based Method for Large Biological Datasets.

    Science.gov (United States)

    Shrimankar, D D; Sathe, S R

    2016-01-01

    Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes. Programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. However, OpenMP programs cannot be scaled beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes, although such a programming paradigm has an overhead of internode communication. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that significant communication overhead is incurred even in OpenMP loop execution and that it increases with the number of cores participating. We also present a communication model to approximate the overhead from communication in OpenMP loops. Our results are striking and hold for a large variety of input data files. We have developed our own load balancing and cache optimization techniques for the message passing model. Our experimental results show that these techniques give optimum performance of our parallel algorithm for various input parameters, such as sequence size and tile size, on a wide variety of multicore architectures.

  3. Parallel solution of the time-dependent Ginzburg-Landau equations and other experiences using BlockComm-Chameleon and PCN on the IBM SP, Intel iPSC/860, and clusters of workstations

    International Nuclear Information System (INIS)

    Coskun, E.

    1995-09-01

    Time-dependent Ginzburg-Landau (TDGL) equations are considered for modeling a thin-film, finite-size superconductor placed in a magnetic field. The problem then leads to the use of so-called natural boundary conditions. The computational domain is partitioned into subdomains and bond variables are used in obtaining the corresponding discrete system of equations. An efficient time-differencing method based on the forward Euler method is developed. Finally, a variable-strength magnetic field resulting in a vortex motion in Type II high-Tc superconducting films is introduced. The authors tackled the problem using two different state-of-the-art parallel computing tools: BlockComm/Chameleon and PCN. They had access to two high-performance distributed memory supercomputers: the Intel iPSC/860 and IBM SP1. They also tested the codes using a cluster of Sun Sparc workstations as a parallel computing environment

  4. Comparison of the temperature and humidity in the anesthetic breathing circuit among different anesthetic workstations: Updated guidelines for reporting parallel group randomized trials.

    Science.gov (United States)

    Choi, Yoon Ji; Min, Sam Hong; Park, Jeong Jun; Cho, Jang Eun; Yoon, Seung Zhoo; Yoon, Suk Min

    2017-06-01

    For patients undergoing general anesthesia, adequate warming and humidification of the inspired gases is very important. The aim of this study was to evaluate the differences in the heat and moisture content of the inspired gases with low-flow anesthesia using 4 different anesthesia machines. The patients were divided into 11 groups according to the anesthesia machine used (Ohmeda, Excel; Avance; Dräger, Cato; and Primus) and the fresh gas flow (FGF) rate (0.5, 1, and 4 L/min). The temperature and absolute humidity of the inspired gas in the inspiratory limbs were measured at 5, 10, 15, 30, 45, 60, 75, 90, 105, and 120 minutes in 9 patients scheduled for total thyroidectomy or cervical spine operation in each group. The anesthesia machines Excel, Avance, Cato, and Primus did not show statistically significant changes in the inspired gas temperatures over time within each group with various FGFs. They, however, showed statistically significant changes in the absolute humidity of the inspired gas over time within each group with low-FGF anesthesia (P < .05), but not in the absolute humidity of the inspired gas over time within each group with an FGF of 4 L/min (P > .05). The absolute humidities of the inspired gas for all anesthesia machines were lower than the recommended values. There were statistical differences in the provision of humidity among different anesthesia workstations. The Cato and Primus workstations were superior to Excel and Avance. However, even these were unsatisfactory in humans. Therefore, additional devices that provide inspired gases with adequate heat and humidity are needed for those undergoing general anesthetic procedures.

  5. Engineering workstation: Sensor modeling

    Science.gov (United States)

    Pavel, M; Sweet, B.

    1993-01-01

    The purpose of the engineering workstation is to provide an environment for rapid prototyping and evaluation of fusion and image processing algorithms. Ideally, the algorithms are designed to optimize the extraction of information that is useful to a pilot for all phases of flight operations. Successful design of effective fusion algorithms depends on the ability to characterize both the information available from the sensors and the information useful to a pilot. The workstation is comprised of subsystems for simulation of sensor-generated images, image processing, image enhancement, and fusion algorithms. As such, the workstation can be used to implement and evaluate both short-term solutions and long-term solutions. The short-term solutions are being developed to enhance a pilot's situational awareness by providing information in addition to his direct vision. The long term solutions are aimed at the development of complete synthetic vision systems. One of the important functions of the engineering workstation is to simulate the images that would be generated by the sensors. The simulation system is designed to use the graphics modeling and rendering capabilities of various workstations manufactured by Silicon Graphics Inc. The workstation simulates various aspects of the sensor-generated images arising from phenomenology of the sensors. In addition, the workstation can be used to simulate a variety of impairments due to mechanical limitations of the sensor placement and due to the motion of the airplane. Although the simulation is currently not performed in real-time, sequences of individual frames can be processed, stored, and recorded in a video format. In that way, it is possible to examine the appearance of different dynamic sensor-generated and fused images.

  6. Incontrare Grendel al cinema. Riscrivere il Beowulf in un altro luogo e in un altro tempo

    Directory of Open Access Journals (Sweden)

    Francesco Giusti

    2011-05-01

    Full Text Available As an epic poem, Beowulf is a literary space of encounters, but in comparison to its Classical models, the Iliad and the Odyssey, it does not require the extraneousness of the place where the meeting or the clash happens. The threat is at the door. The Other, the monstrous, is close, very close to the human community and is genetically tied to it. This is a point of particular interest on which twentieth-century attempts to rewrite the Anglo-Saxon poem focus, so as to create new possibilities and patterns for the clash and to investigate the limits of the human. The paper focuses on two recent movies: Beowulf & Grendel (Gunnarsson, Iceland 2005) and Beowulf (Zemeckis, USA 2007). These movies, notwithstanding the differences in technique, genre and intention, clearly show some shared trends in the background, beyond the more general influence of the former on the latter. Both re-read the old poem as the story of the infraction of a boundary and the cultural encounter between the human community, championed by Beowulf, and the Otherness represented by the monstrous Grendel. While the religious aspect, privileged by the medieval narrator, blurs in the movies, they bring to the surface some inner fears which are latent in the poem and are tied to two dangerous spaces of possible intersection and intermingling: the ethno-anthropological and the psycho-genetic aspects of human life and story. These are fears that definitely belong to contemporary audiences more than to the medieval world.

  7. A Contemporary Voice Revisits the Past: Seamus Heaney’s Beowulf

    Directory of Open Access Journals (Sweden)

    Silvia Geremia

    2007-03-01

    Full Text Available Heaney's controversial translation of Beowulf shows characteristics that make it look like an original work: in particular, the presence of Hiberno-English words and some unexpected structural features such as the use of italics, notes and running titles. Some of Heaney's artistic choices have been called into question by Germanic philologists, who reproached him for his lack of fidelity to the original text. Moreover, the insertion of Hiberno-English words, which create an effect of estrangement for Standard English speakers, was considered by some critics not only an aesthetic choice but a provocative act, a linguistic and political claim recalling the ancient antagonism between the Irish and the English. Yet, from the point of view of Heaney's theoretical and cultural background, the innovations in his translation of Beowulf appear consistent with his personal notions of poetry and translation. Therefore, his Beowulf can be considered the result of a necessary interaction between translator and original text and be acclaimed in spite of all the criticism.

  8. NET remote workstation

    International Nuclear Information System (INIS)

    Leinemann, K.

    1990-10-01

    The goal of this NET study was to define the functionality of a remote handling workstation and its hardware and software architecture. The remote handling workstation has to fulfill two basic functions: (1) to provide the man-machine interface (MMI), that means the interface to the control system of the maintenance equipment and to the working environment (telepresence) and (2) to provide high level (task level) supporting functions (software tools) during the maintenance work and in the preparation phase. Concerning the man-machine interface, an important module of the remote handling workstation besides the standard components of man-machine interfacing is a module for graphical scene presentation supplementing viewing by TV. The technique of integrated viewing is well known from JET BOOM and TARM control using the GBsim and KISMET software. For integration of equipment dependent MMI functions the remote handling workstation provides a special software module interface. Task level support of the operator is based on (1) spatial (geometric/kinematic) models, (2) remote handling procedure models, and (3) functional models of the equipment. These models and the related simulation modules are used for planning, programming, execution monitoring, and training. The workstation provides an intelligent handbook guiding the operator through planned procedures illustrated by animated graphical sequences. For unplanned situations decision aids are available. A central point of the architectural design was to guarantee a high flexibility with respect to hardware and software. Therefore the remote handling workstation is designed as an open system based on widely accepted standards allowing the stepwise integration of the various modules starting with the basic MMI and the spatial simulation as standard components. (orig./HP) [de

  9. ANL statement of site strategy for computing workstations

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R. (ed.); Boxberger, L.M.; Amiot, L.W.; Bretscher, M.E.; Engert, D.E.; Moszur, F.M.; Mueller, C.J.; O' Brien, D.E.; Schlesselman, C.G.; Troyer, L.J.

    1991-11-01

    This Statement of Site Strategy describes the procedure at Argonne National Laboratory for defining, acquiring, using, and evaluating scientific and office workstations and related equipment and software in accord with DOE Order 1360.1A (5-30-85), and Laboratory policy. It is Laboratory policy to promote the installation and use of computing workstations to improve productivity and communications for both programmatic and support personnel, to ensure that computing workstations acquisitions meet the expressed need in a cost-effective manner, and to ensure that acquisitions of computing workstations are in accord with Laboratory and DOE policies. The overall computing site strategy at ANL is to develop a hierarchy of integrated computing system resources to address the current and future computing needs of the laboratory. The major system components of this hierarchical strategy are: Supercomputers, Parallel computers, Centralized general purpose computers, Distributed multipurpose minicomputers, and Computing workstations and office automation support systems. Computing workstations include personal computers, scientific and engineering workstations, computer terminals, microcomputers, word processing and office automation electronic workstations, and associated software and peripheral devices costing less than $25,000 per item.

  10. UWGSP6: a diagnostic radiology workstation of the future

    Science.gov (United States)

    Milton, Stuart W.; Han, Sang; Choi, Hyung-Sik; Kim, Yongmin

    1993-06-01

    The Univ. of Washington's Image Computing Systems Lab. (ICSL) has been involved in research into the development of a series of PACS workstations since the middle 1980's. The most recent research, a joint UW-IBM project, attempted to create a diagnostic radiology workstation using an IBM RISC System 6000 (RS6000) computer workstation and the X-Window system. While the results are encouraging, there are inherent limitations in the workstation hardware which prevent it from providing an acceptable level of functionality for diagnostic radiology. Realizing the RS6000 workstation's limitations, a parallel effort was initiated to design a workstation, UWGSP6 (Univ. of Washington Graphics System Processor #6), that provides the required functionality. This paper documents the design of UWGSP6, which not only addresses the requirements for a diagnostic radiology workstation in terms of display resolution, response time, etc., but also includes the processing performance necessary to support key functions needed in the implementation of algorithms for computer-aided diagnosis. The paper includes a description of the workstation architecture, and specifically its image processing subsystem. Verification of the design through hardware simulation is then discussed, and finally, performance of selected algorithms based on detailed simulation is provided.

  11. Communication System Simulation Workstation

    Science.gov (United States)

    1990-01-30

    Communication System Simulation Workstation, Grant # AFOSR-89-0117. Submitted to: Department of the Air Force, Air Force Office of Scientific Research, Bolling Air Force Base, DC. A sub-band decomposition was developed, based on the modulation of a single prototype filter.

  12. Saving the “Undoomed Man” In Beowulf (572b-573)

    Directory of Open Access Journals (Sweden)

    Anderson Salena Sampson

    2015-01-01

    Full Text Available The maxim Wyrd oft nereð // unfӕgne eorl, / þonne his ellen deah “Fate often spares an undoomed man when his courage avails” (Beowulf 572b-573) has been likened to “Fortune favors the brave,” with little attention to the word unfӕgne, which is often translated “undoomed”. This comparison between proverbs emphasizes personal agency and suggests a contrast between the proverb in 572b-573 and the maxim Gӕð a wyrd swa hio scel “Goes always fate as it must” (Beowulf 455b), which depicts an inexorable wyrd. This paper presents the history of this view and argues that linguistic analysis and further attention to Germanic cognates of (un)fӕge reveal a proverb that harmonizes with 455b. (Un)fӕge and its cognates have meanings related to being brave or cowardly, blessed or accursed, and doomed or undoomed. A similar Old Norse proverb also speaks to the significance of the status of unfӕge men. Furthermore, the prenominal position of unfӕgne is argued to represent a characterizing property of the man. The word unfӕgne is essential to the meaning of this proverb as it indicates not the simple absence of being doomed but the presence of a more complex quality. This interpretive point is significant in that it provides more information about the portrayal of wyrd in Beowulf by clarifying a well-known proverb in the text; it also has implications for future translations of these verses.

  13. Virtual interface environment workstations

    Science.gov (United States)

    Fisher, S. S.; Wenzel, E. M.; Coler, C.; Mcgreevy, M. W.

    1988-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed at NASA's Ames Research Center for use as a multipurpose interface environment. This Virtual Interface Environment Workstation (VIEW) system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, research scenarios, and research directions are described.

  14. The Temple Translator's Workstation Project

    National Research Council Canada - National Science Library

    Vanni, Michelle; Zajac, Remi

    1996-01-01

.... The Temple Translator's Workstation is incorporated into a Tipster document management architecture and it allows both translator/analysts and monolingual analysts to use the machine-translation...

  15. Physics analysis workstation

    International Nuclear Information System (INIS)

    Johnstad, H.

    1989-06-01

    The Physics Analysis Workstation (PAW) is a high-level program providing data presentation and statistical or mathematical analysis. PAW has been developed at CERN as an instrument to assist physicists in the analysis and presentation of their data. The program is interfaced to a high-level graphics package, based on basic underlying graphics. 3-D graphics capabilities are being implemented. The major objects in PAW are 1- or 2-dimensional binned event data with a fixed number of entries per event, vectors, functions, graphics pictures, and macros. Command input is handled by an integrated user interface package, which allows for a variety of choices for input, either with typed commands or in a tree-structured, menu-driven mode. 6 refs., 1 fig

  16. VAX Professional Workstation goes graphic

    International Nuclear Information System (INIS)

    Downward, J.G.

    1984-01-01

    The VAX Professional Workstation (VPW) is a collection of programs and procedures designed to provide an integrated workstation environment for the staff at KMS Fusion's research laboratories. During the past year numerous capabilities have been added to VPW, including support for VT125/VT240/4014 graphic workstations, editing windows, and additional desk utilities. Graphics workstation support allows users to create, edit, and modify graph data files, enter the data via a graphic tablet, create simple plots with DATATRIEVE or DECgraph on ReGIS terminals, or elaborate plots with TEKGRAPH on ReGIS or Tektronix terminals. Users may assign and display error bars for the data and interactively plot it in a variety of ways. Users also can create and display viewgraphs. Hard copy output for a large network of office terminals is obtained by multiplexing each terminal's video output into a recently developed video multiplexer front-ending a single-channel video hard-copy unit

  17. Workstations studies and radiation protection

    International Nuclear Information System (INIS)

    Lahaye, T.; Donadille, L.; Rehel, J.L.; Paquet, F.; Beneli, C.; Cordoliani, Y.S.; Vrigneaud, J.M.; Gauron, C.; Petrequin, A.; Frison, D.; Jeannin, B.; Charles, D.; Carballeda, G.; Crouail, P.; Valot, C.

    2006-01-01

    This day on workstation studies for the follow-up of workers was organised by the research and health section. Aimed at company doctors, persons competent in radiation protection, and safety engineers, it presented examples of methodologies and applications in the medical, industrial and research domains, thus contributing to a better understanding and application of regulatory measures. The analysis of the workstation should allow a reduction of exposures and risks and lead to the optimization of the medical follow-up. The agenda of this day included the following subjects: evolution of the regulation concerning the demarcation of regulated zones where measures for worker protection are strengthened; presentation of the I.R.S.N. guide for carrying out a workstation study; implementation of a workstation study: the case of radiology; workstation studies in the research area; is it necessary to impose operational dosimetry in radiodiagnostic services? The experience feedback of a person competent in radiation protection (P.C.R.) in a hospital environment; radiation protection: elaboration of a good-practices guide in the medical field; the activities file in a nuclear power plant: a risk evaluation tool for prevention, with a methodological presentation and examples; isolated workstation study; the experience feedback of a service provider; contribution of ergonomics to the characterization of determinants in ionizing radiation exposure situations; workstation studies for internal contamination in fuel cycle facilities and the consideration of the results in the medical follow-up; R.E.L.I.R.: the necessity of workstation studies; the consideration of the human factor. (N.C.)

  18. A Massively Parallel Code for Polarization Calculations

    Science.gov (United States)

    Akiyama, Shizuka; Höflich, Peter

    2001-03-01

    We present an implementation of our Monte-Carlo radiation transport method for rapidly expanding, NLTE atmospheres for massively parallel computers which utilizes both the distributed and shared memory models. This allows us to take full advantage of the fast communication and low latency inherent to nodes with multiple CPUs, and to stretch the limits of scalability with the number of nodes compared to a version which is based on the shared memory model. Test calculations on a local 20-node Beowulf cluster with dual CPUs showed an improved scalability by about 40%.
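
    A sketch of the hybrid distributed/shared-memory pattern described, not the authors' radiation transport code: OpenMP threads share the work inside each dual-CPU node while MPI combines the per-node Monte Carlo tallies.

      /* Hybrid MPI + OpenMP Monte Carlo tally sketch. */
      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char **argv) {
          int rank, provided;
          long node_escaped = 0, total_escaped = 0;
          const long packets_per_node = 100000;

          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          /* threads on this node process packets and accumulate a shared tally */
          #pragma omp parallel for reduction(+:node_escaped)
          for (long p = 0; p < packets_per_node; p++) {
              unsigned int seed = (unsigned int)(rank * packets_per_node + p);
              double r = rand_r(&seed) / (double)RAND_MAX;   /* toy scattering draw */
              if (r > 0.5)
                  node_escaped++;
          }

          /* combine node tallies across the cluster */
          MPI_Reduce(&node_escaped, &total_escaped, 1, MPI_LONG, MPI_SUM, 0,
                     MPI_COMM_WORLD);
          if (rank == 0)
              printf("escaped packets: %ld\n", total_escaped);
          MPI_Finalize();
          return 0;
      }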

  19. Workstations take over conceptual design

    Science.gov (United States)

    Kidwell, George H.

    1987-01-01

    Workstations provide sufficient computing memory and speed for early evaluations of aircraft design alternatives to identify those worthy of further study. It is recommended that the programming of such machines permit integrated calculations of the configuration and performance analysis of new concepts, along with the capability of changing up to 100 variables at a time and swiftly viewing the results. Computations can be augmented through links to mainframes and supercomputers. Programming, particularly debugging operations, are enhanced by the capability of working with one program line at a time and having available on-screen error indices. Workstation networks permit on-line communication among users and with persons and computers outside the facility. Application of the capabilities is illustrated through a description of NASA-Ames design efforts for an oblique wing for a jet performed on a MicroVAX network.

  20. Diagnostic image workstations for PACS

    International Nuclear Information System (INIS)

    Meyer-Ebrecht, D.; Fasel, B.; Dahm, M.; Kaupp, A.; Schilling, R.

    1990-01-01

    Image workstations will be the 'window' to the complex infrastructure of PACS with its intertwined image modalities (image sources, image data bases and image processing devices) and data processing modalities (patient data bases, departmental and hospital information systems). They will serve for user-to-system dialogues, image display and local processing of data as well as images. Their hardware and software structures have to be optimized towards an efficient throughput and processing of image data. (author). 10 refs

  1. Assessment of a cooperative workstation.

    OpenAIRE

    Beuscart, R. J.; Molenda, S.; Souf, N.; Foucher, C.; Beuscart-Zephir, M. C.

    1996-01-01

    Groupware and new Information Technologies have now made it possible for people in different places to work together in synchronous cooperation. Very often, designers of this new type of software are not provided with a model of the common workspace, which is prejudicial to software development and its acceptance by potential users. The authors take the example of a task of medical co-diagnosis, using a multi-media communication workstation. Synchronous cooperative work is made possible by us...

  2. Development of PSA workstation KIRAP

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong

    1997-01-01

    The Advanced Research Group of the Korea Atomic Energy Research Institute has been developing the Probabilistic Safety Assessment (PSA) workstation KIRAP since 1992. This report describes the recent development activities of the PSA workstation KIRAP. The first is to develop and improve the methodologies for PSA quantification, namely the incorporation of a fault tree modularization technique, the improvement of the cut set generation method, the development of rule-based recovery, and the development of methodologies to solve a fault tree which has logical loops and to handle a fault tree which has several initiators. These methodologies are incorporated in the PSA quantification software KIRAP-CUT. The second is to convert the PSA modeling software, which had been used in the DOS environment since 1987, to Windows. The developed software comprises the fault tree editor KWTREE, the event tree editor CONPAS, and the data manager KWDBMAN for event data and common cause failure (CCF) data. The PSA workstation makes PSA modeling and quantification easier, faster, and more automated. (author). 8 refs.

  3. Development of PSA workstation KIRAP

    International Nuclear Information System (INIS)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong.

    1997-01-01

    The Advanced Research Group of the Korea Atomic Energy Research Institute has been developing the Probabilistic Safety Assessment (PSA) workstation KIRAP since 1992. This report describes the recent development activities of the PSA workstation KIRAP. The first is to develop and improve the methodologies for PSA quantification, namely the incorporation of a fault tree modularization technique, the improvement of the cut set generation method, the development of rule-based recovery, and the development of methodologies to solve a fault tree which has logical loops and to handle a fault tree which has several initiators. These methodologies are incorporated in the PSA quantification software KIRAP-CUT. The second is to convert the PSA modeling software, which had been used in the DOS environment since 1987, to Windows. The developed software comprises the fault tree editor KWTREE, the event tree editor CONPAS, and the data manager KWDBMAN for event data and common cause failure (CCF) data. The PSA workstation makes PSA modeling and quantification easier, faster, and more automated. (author). 8 refs

  4. SCWEB, Scientific Workstation Evaluation Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Raffenetti, R C [Computing Services-Support Services Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439 (United States)

    1988-06-16

    1 - Description of program or function: The SCWEB (Scientific Workstation Evaluation Benchmark) software includes 16 programs which are executed in a well-defined scenario to measure the following performance capabilities of a scientific workstation: implementation of FORTRAN77, processor speed, memory management, disk I/O, monitor (or display) output, scheduling of processing (multiprocessing), and scheduling of print tasks (spooling). 2 - Method of solution: The benchmark programs are: DK1, DK2, and DK3, which do Fourier series fitting based on spline techniques; JC1, which checks the FORTRAN function routines which produce numerical results; JD1 and JD2, which solve dense systems of linear equations in double- and single-precision, respectively; JD3 and JD4, which perform matrix multiplication in single- and double-precision, respectively; RB1, RB2, and RB3, which perform substantial amounts of I/O processing on files other than the input and output files; RR1, which does intense single-precision floating-point multiplication in a tight loop; RR2, which initializes a 512x512 integer matrix in a manner which skips around in the address space rather than initializing each consecutive memory cell in turn; RR3, which writes alternating text buffers to the output file; RR4, which evaluates the timer routines and demonstrates that they conform to the specification; and RR5, which determines whether the workstation is capable of executing a 4-megabyte program.

  5. Nuclear plant analyzer desktop workstation

    International Nuclear Information System (INIS)

    Beelman, R.J.

    1990-01-01

    In 1983 the U.S. Nuclear Regulatory Commission (USNRC) commissioned the Idaho National Engineering Laboratory (INEL) to develop a Nuclear Plant Analyzer (NPA). The NPA was envisioned as a graphical aid to assist reactor safety analysts in comprehending the results of thermal-hydraulic code calculations. The development was to proceed in three distinct phases culminating in a desktop reactor safety workstation. The desktop NPA is now complete. The desktop NPA is a microcomputer based reactor transient simulation, visualization and analysis tool developed at INEL to assist an analyst in evaluating the transient behavior of nuclear power plants by means of graphic displays. The NPA desktop workstation integrates advanced reactor simulation codes with online computer graphics allowing reactor plant transient simulation and graphical presentation of results. The graphics software, written exclusively in ANSI standard C and FORTRAN 77 and implemented over the UNIX/X-windows operating environment, is modular and is designed to interface to the NRC's suite of advanced thermal-hydraulic codes to the extent allowed by that code. Currently, full, interactive, desktop NPA capabilities are realized only with RELAP5

  6. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  7. Office ergonomics: deficiencies in computer workstation design.

    Science.gov (United States)

    Shikdar, Ashraf A; Al-Kindi, Mahmoud A

    2007-01-01

    The objective of this research was to study and identify ergonomic deficiencies in computer workstation design in typical offices. Physical measurements and a questionnaire were used to study 40 workstations. Major ergonomic deficiencies were found in physical design and layout of the workstations, employee postures, work practices, and training. The consequences in terms of user health and other problems were significant. Forty-five percent of the employees used nonadjustable chairs, 48% of computers faced windows, 90% of the employees used computers more than 4 hrs/day, 45% of the employees adopted bent and unsupported back postures, and 20% used office tables for computers. Major problems reported were eyestrain (58%), shoulder pain (45%), back pain (43%), arm pain (35%), wrist pain (30%), and neck pain (30%). These results indicated serious ergonomic deficiencies in office computer workstation design, layout, and usage. Strategies to reduce or eliminate ergonomic deficiencies in computer workstation design were suggested.

  8. Embedding knowledge in a workstation

    Energy Technology Data Exchange (ETDEWEB)

    Barber, G

    1982-01-01

    This paper describes an approach to supporting work in the office. Using and extending ideas from the field of artificial intelligence (AI) it describes office work as a problem solving activity. A knowledge embedding language called OMEGA is used to embed knowledge of the organization into an office worker's workstation in order to support the office worker in his or her problem solving. A particular approach to reasoning about change and contradiction is discussed. This approach uses OMEGA's viewpoint mechanism. OMEGA's viewpoint mechanism is a general contradiction handling facility. Unlike other knowledge representation systems, when a contradiction is reached the reasons for the contradiction can be analyzed by the reduction mechanism without having to resort to a backtracking mechanism. The viewpoint mechanism is the heart of the problem solving support paradigm. This paradigm is an alternative to the classical view of problem solving in AI. Office workers are supported using the problem solving support paradigm. 16 references.

  9. Assessment of a cooperative workstation.

    Science.gov (United States)

    Beuscart, R J; Molenda, S; Souf, N; Foucher, C; Beuscart-Zephir, M C

    1996-01-01

    Groupware and new Information Technologies have now made it possible for people in different places to work together in synchronous cooperation. Very often, designers of this new type of software are not provided with a model of the common workspace, which is prejudicial to software development and its acceptance by potential users. The authors take the example of a task of medical co-diagnosis, using a multi-media communication workstation. Synchronous cooperative work is made possible by using local ETHERNET or public ISDN Networks. A detailed ergonomic task analysis studies the cognitive functioning of the physicians involved, compares their behaviour in the normal and the mediatized situations, and leads to an interpretation of the likely causes for success or failure of CSCW tools.

  10. Control of a pulse height analyzer using an RDX workstation

    International Nuclear Information System (INIS)

    Montelongo, S.; Hunt, D.N.

    1984-12-01

    The Nuclear Chemistry Division of Lawrence Livermore National laboratory is in the midst of upgrading its radiation counting facilities to automate data acquisition and quality control. This upgrade requires control of a pulse height analyzer (PHA) from an interactive LSI-11/23 workstation running RSX-11M. The PHA is a micro-computer based multichannel analyzer system providing data acquisition, storage, display, manipulation and input/output from up to four independent acquisition interfaces. Control of the analyzer includes reading and writing energy spectra, issuing commands, and servicing device interrupts. The analyzer communicates to the host system over a 9600-baud serial line using the Digital Data Communications link level Protocol (DDCMP). We relieved the RSX workstation CPU from the DDCMP overhead by implementing a DEC compatible in-house designed DMA serial line board (the ISL-11) to communicate with the analyzer. An RSX I/O device driver was written to complete the path between the analyzer and the RSX system by providing the link between the communication board and an application task. The I/O driver is written to handle several ISL-11 cards all operating in parallel thus providing support for control of multiple analyzers from a single workstation. The RSX device driver, its design and use by application code controlling the analyzer, and its operating environment will be discussed

  11. Performance assessment of advanced engineering workstations for fuel management applications

    International Nuclear Information System (INIS)

    Turinsky, P.J.

    1989-07-01

    The purpose of this project was to assess the performance of an advanced engineering workstation [AEW] with regard to applications to incore fuel management for LWRs. The attributes of most interest to us that define an AEW are parallel computational hardware and graphics capabilities. The AEWs employed were super microcomputers manufactured by MASSCOMP, Inc. These computers utilize a 32-bit architecture, graphics co-processor, multi-CPUs [up to six] attached to common memory and multi-vector accelerators. 7 refs., 33 figs., 4 tabs

  12. Parallel Evolutionary Optimization for Neuromorphic Network Training

    Energy Technology Data Exchange (ETDEWEB)

    Schuman, Catherine D [ORNL; Disney, Adam [University of Tennessee (UT); Singh, Susheela [North Carolina State University (NCSU), Raleigh; Bruer, Grant [University of Tennessee (UT); Mitchell, John Parker [University of Tennessee (UT); Klibisz, Aleksander [University of Tennessee (UT); Plank, James [University of Tennessee (UT)

    2016-01-01

    One of the key impediments to the success of current neuromorphic computing architectures is the issue of how best to program them. Evolutionary optimization (EO) is one promising programming technique; in particular, its wide applicability makes it especially attractive for neuromorphic architectures, which can have many different characteristics. In this paper, we explore different facets of EO on a spiking neuromorphic computing model called DANNA. We focus on the performance of EO in the design of our DANNA simulator, and on how to structure EO on both multicore and massively parallel computing systems. We evaluate how our parallel methods impact the performance of EO on Titan, the U.S.'s largest open science supercomputer, and BOB, a Beowulf-style cluster of Raspberry Pi's. We also focus on how to improve the EO by evaluating commonality in higher performing neural networks, and present the result of a study that evaluates the EO performed by Titan.
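
    DANNA and its simulator are not reproduced here; the sketch below only illustrates, in C with OpenMP, the generic EO step that parallelizes naturally on both multicore nodes and clusters such as Titan or BOB: evaluating every member of a population independently. The fitness function, population size and genome layout are invented.

        #include <stdio.h>
        #include <stdlib.h>

        #define POP   64
        #define GENES 128

        /* Stand-in fitness: in a real system this would run a network simulation. */
        static double fitness(const double *genome, int n)
        {
            double s = 0.0;
            for (int i = 0; i < n; i++)
                s += genome[i] * genome[i];
            return -s;                       /* reward closeness to the origin */
        }

        int main(void)
        {
            static double pop[POP][GENES];
            double score[POP];

            for (int p = 0; p < POP; p++)
                for (int g = 0; g < GENES; g++)
                    pop[p][g] = (double)rand() / RAND_MAX - 0.5;

            /* Each population member is independent, so the evaluations can be
               farmed out across cores; on a cluster the same loop would be
               split across nodes instead. */
            #pragma omp parallel for schedule(dynamic)
            for (int p = 0; p < POP; p++)
                score[p] = fitness(pop[p], GENES);

            int best = 0;
            for (int p = 1; p < POP; p++)
                if (score[p] > score[best]) best = p;
            printf("best member %d, fitness %g\n", best, score[best]);
            return 0;
        }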

  13. Zoning and workstation analysis in interventional cardiology

    International Nuclear Information System (INIS)

    Degrange, J.P.

    2009-01-01

    As interventional cardiology can induce high doses not only for patients but also for the personnel, the delimitation of regulated areas (or zoning) and workstation analysis (dosimetry) are very important in terms of radioprotection. This paper briefly recalls methods and tools for the different steps to perform zoning and workstation analysis. It outlines the peculiarities of interventional cardiology, presents methods and tools adapted to interventional cardiology, and then discusses the same issues but for workstation analysis. It also outlines specific problems which can be met, and their possible adapted solutions

  14. [PACS-based endoscope image acquisition workstation].

    Science.gov (United States)

    Liu, J B; Zhuang, T G

    2001-01-01

    A practical PACS-based Endoscope Image Acquisition Workstation is introduced here. Using a multimedia video card, the endoscope video is digitized and captured, dynamically or statically, into the computer. The workstation provides a variety of functions such as acquisition and display of the endoscope video, as well as editing, processing, managing, storing, printing, and communicating the related information. Together with other medical image workstations, it can make up the image sources of a hospital PACS. In addition, it can also act as an independent endoscopy diagnostic system.

  15. Differences Between Distributed and Parallel Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brightwell, R.; Maccabe, A.B.; Rissen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  16. EPRI engineering workstation software - Discussion and demonstration

    International Nuclear Information System (INIS)

    Stewart, R.P.; Peterson, C.E.; Agee, L.J.

    1992-01-01

    Computing technology is undergoing significant changes with respect to engineering applications in the electric utility industry. These changes result mainly from the introduction of several UNIX workstations that provide mainframe calculational capability at much lower costs. The workstations are being coupled with microcomputers through local area networks to provide engineering groups with a powerful and versatile analysis capability. PEGASYS, the Professional Engineering Graphic Analysis System, is a software package for use with engineering analysis codes executing in a workstation environment. PEGASYS has a menu-driven, user-friendly interface providing pre-execution support for preparing input, graphical packages for post-execution analysis, and an on-line monitoring capability for engineering codes. The initial application of this software is for use with RETRAN-02 operating on an IBM RS/6000 workstation using X-Windows/UNIX and a personal computer under DOS

  17. ERGONOMICS IN THE COMPUTER WORKSTATION

    African Journals Online (AJOL)

    2010-09-09

    Sep 9, 2010 ... in relation to their work environment and working surroundings. ... prolonged computer usage and application of ergonomics in the workstation. Design: One hundred and ... Occupational Health and Safety Services should.

  18. A Next Generation BioPhotonics Workstation

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Palima, Darwin; Tauro, Sandeep

    2011-01-01

    We are developing a Next Generation BioPhotonics Workstation to be applied in research on regulated microbial cell growth including their underlying physiological mechanisms, in vivo characterization of cell constituents and manufacturing of nanostructures and meta-materials.

  19. Ergonomic Evaluations of Microgravity Workstations

    Science.gov (United States)

    Whitmore, Mihriban; Berman, Andrea H.; Byerly, Diane

    1996-01-01

    Various gloveboxes (GBXs) have been used aboard the Shuttle and ISS. Though the overall technical specifications are similar, each GBX's crew interface is unique. JSC conducted a series of ergonomic evaluations of the various glovebox designs to identify human factors requirements for new designs to provide operator commonality across different designs. We conducted two 0-g evaluations aboard the Shuttle to evaluate the material sciences GBX and the General Purpose Workstation (GPWS), and a KC-135 evaluation to compare combinations of arm hole interfaces and foot restraints (flexible arm holes were better than rigid ports for repetitive fine manipulation tasks). Posture analysis revealed that the smallest and tallest subjects assumed similar postures at all four configurations, suggesting that problematic postures are not necessarily a function of the operator's height but a function of the task characteristics. There was concern that the subjects were using the restrictive nature of the GBX's cuffs as an upper-body restraint to achieve such high forces, which might lead to neck/shoulder discomfort. EMG data revealed more consistent muscle performance at the GBX; the variability in the EMG profiles observed at the GPWS was attributed to the subjects' attempts to provide more stabilization for themselves in the loose, flexible gauntlets. Tests revealed that the GBX should be designed for a 95th-percentile American male to accommodate a neutral working posture. In addition, the foot restraint with knee support appeared beneficial for GBX operations. Crew comments recommended providing two foot-restraint mechanical modes, loose and lock-down, to accommodate a wide range of tasks without egressing the restraint system. Thus far, we have developed preliminary design guidelines for GBXs and foot restraints.

  20. Applications of the parallel computing system using network

    International Nuclear Information System (INIS)

    Ido, Shunji; Hasebe, Hiroki

    1994-01-01

    Parallel programming is applied to multiple processors connected via Ethernet. Data exchanges between tasks located on the processing elements are realized in two ways. One is the socket interface, a standard library on recent UNIX operating systems. The other is network-connecting software named Parallel Virtual Machine (PVM), free software developed by ORNL that allows many workstations connected to a network to be used as a parallel computer. This paper discusses the viability of parallel computing using a network of UNIX workstations and compares it with specialized parallel systems (Transputer and iPSC/860) for a Monte Carlo simulation, which generally shows a high parallelization ratio. (author)
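
    Neither the socket nor the PVM variant is listed in the record; as a hedged illustration of the PVM approach it mentions, a minimal PVM 3 master/worker program in C might look as follows (the executable name "mcpi", the message tag and the toy Monte Carlo pi estimate are assumptions, not the authors' simulation).

        /* Link against PVM 3 (libpvm3); the executable must be installed where
           the PVM daemon can find it (e.g. $HOME/pvm3/bin/$PVM_ARCH). */
        #include <pvm3.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define NWORK 4
        #define TAG   7

        int main(void)
        {
            int mytid = pvm_mytid();

            if (pvm_parent() == PvmNoParent) {          /* master */
                int tids[NWORK];
                double sum = 0.0, part;
                pvm_spawn("mcpi", NULL, PvmTaskDefault, "", NWORK, tids);
                for (int i = 0; i < NWORK; i++) {
                    pvm_recv(-1, TAG);                  /* any worker, our tag */
                    pvm_upkdouble(&part, 1, 1);
                    sum += part;
                }
                printf("pi estimate: %f\n", 4.0 * sum / NWORK);
            } else {                                    /* worker */
                long hits = 0, n = 1000000;
                srand((unsigned)mytid);
                for (long i = 0; i < n; i++) {
                    double x = (double)rand() / RAND_MAX;
                    double y = (double)rand() / RAND_MAX;
                    if (x * x + y * y <= 1.0) hits++;
                }
                double part = (double)hits / n;
                pvm_initsend(PvmDataDefault);
                pvm_pkdouble(&part, 1, 1);
                pvm_send(pvm_parent(), TAG);
            }
            pvm_exit();
            return 0;
        }

    The socket variant of the same exchange would replace the pvm_* calls with connect/send/recv on stream sockets; the appeal of PVM in such comparisons is precisely that it hides those details.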

  1. Integrated telemedicine workstation for intercontinental grand rounds

    Science.gov (United States)

    Willis, Charles E.; Leckie, Robert G.; Brink, Linda; Goeringer, Fred

    1995-04-01

    The Telemedicine Spacebridge to Moscow was a series of intercontinental sessions sponsored jointly by NASA and the Moscow Academy of Medicine. To improve the quality of medical images presented, the MDIS Project developed a workstation for acquisition, storage, and interactive display of radiology and pathology images. The workstation was based on a Macintosh IIfx platform with a laser digitizer for radiographs and video capture capability for microscope images. Images were transmitted via the Russian Lyoutch Satellite which had only a single video channel available and no high speed data channels. Two workstations were configured -- one for use at the Uniformed Services University of Health Sciences in Bethesda, MD. and the other for use at the Hospital of the Interior in Moscow, Russia. The two workstations were used many times during 16 sessions. As clinicians used the systems, we modified the original configuration to improve interactive use. This project demonstrated that numerous acquisition and output devices could be brought together in a single interactive workstation. The video images were satisfactory for remote consultation in a grand rounds format.

  2. "Leodum Lidost on Lofgeornost". The epic poetry of "Beowulf" in new graphic and visual formats

    OpenAIRE

    Bueno Alonso, Jorge Luis

    2007-01-01

    The new formats we have nowadays for the transmission of knowledge are heavily modifying our relationship with the products of our culture. They also give us new possibilities to deal with literary texts, which constitutes a very important step in the transmission of medieval literature through popular culture. In its age, Beowulf entertained the audience of the meadhall. It was the best-seller of the day, the successful potboiler movie of Anglo-Saxon England. In our time its story of men an...

  3. Advanced Satellite Workstation - An integrated workstation environment for operational support of satellite system planning and analysis

    Science.gov (United States)

    Hamilton, Marvin J.; Sutton, Stewart A.

    A prototype integrated environment, the Advanced Satellite Workstation (ASW), which was developed and delivered for evaluation and operator feedback in an operational satellite control center, is described. The current ASW hardware consists of a Sun workstation and a Macintosh II workstation connected via Ethernet, network hardware and software, a laser disk system, an optical storage system, and a telemetry data file interface. The central objective of ASW is to provide an intelligent decision support and training environment for operators/analysts of complex systems such as satellites. Compared to the many recent workstation implementations that incorporate graphical telemetry displays and expert systems, ASW provides a considerably broader look at intelligent, integrated environments for decision support, based on the premise that the central features of such an environment are intelligent data access and integrated toolsets.

  4. The role of the mainframe terminated : mainframe versus workstation

    CERN Document Server

    Williams, D O

    1991-01-01

    I. What mainframes? - The surgeon-general has determined that you shall treat all costs with care (continental effects, discounts assumed, next month's or last month's prices, optimism of the reporter). II. Typical mainframe hardware. III. Typical mainframe software. IV. What workstations? VI. Typical workstation hardware. VII. Typical workstation software. VIII. Titan vs PDP-7s. IX. Historic answer. X. Amdahl's Law....

  5. The concepts and functions of a FEM workstation

    International Nuclear Information System (INIS)

    Brown, R.R.; Gloudeman, J.F.

    1982-01-01

    Recent advances in microprocessor-based computer hardware and associated software provide a basis for the development of a FEM workstation. The key requirements for such a workstation are reviewed and the recent hardware and software developments are discussed that make such a workstation both technically and economically feasible at this time. (orig.)

  6. Optimisation of a parallel ocean general circulation model

    Science.gov (United States)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.
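
    The paper's high-level message-passing layer is not reproduced in the abstract; the fragment below is only a guess, in C with MPI, at what one such routine might look like: a halo exchange for a 1-D domain decomposition that hides the message-passing intricacies behind a single call (the routine name, array layout and use of MPI rather than the authors' own library are assumptions).

        #include <mpi.h>

        /* Exchange one halo column with the east and west neighbours of a 1-D
           domain decomposition.  field is nz rows by (nx_local+2) columns,
           row-major, with halo columns at index 0 and nx_local+1.  At physical
           boundaries west or east may be MPI_PROC_NULL, in which case the
           corresponding transfer is skipped automatically. */
        void halo_exchange(double *field, int nz, int nx_local,
                           int west, int east, MPI_Comm comm)
        {
            MPI_Datatype column;
            MPI_Type_vector(nz, 1, nx_local + 2, MPI_DOUBLE, &column);
            MPI_Type_commit(&column);

            /* send first interior column west, receive east halo from east */
            MPI_Sendrecv(&field[1],            1, column, west, 0,
                         &field[nx_local + 1], 1, column, east, 0,
                         comm, MPI_STATUS_IGNORE);
            /* send last interior column east, receive west halo from west */
            MPI_Sendrecv(&field[nx_local],     1, column, east, 1,
                         &field[0],            1, column, west, 1,
                         comm, MPI_STATUS_IGNORE);

            MPI_Type_free(&column);
        }

    Keeping the exchange behind one routine like this is what makes the parallelism a "modular option": the serial build simply supplies a do-nothing version.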

  7. A nuclear power plant system engineering workstation

    International Nuclear Information System (INIS)

    Mason, J.H.; Crosby, J.W.

    1989-01-01

    System engineers offer an approach for effective technical support for operation and maintenance of nuclear power plants. System engineer groups are being set up by most utilities in the United States. The Institute of Nuclear Power Operations (INPO) and the U.S. Nuclear Regulatory Commission (NRC) have endorsed the concept. The INPO Good Practice and a survey of system engineer programs in the southeastern United States provide descriptions of system engineering programs. The purpose of this paper is to describe a process for developing a design for a department-level information network of workstations for system engineering groups. The process includes the following: (1) application of a formal information engineering methodology; (2) analysis of system engineer functions and activities; (3) use of Electric Power Research Institute (EPRI) Plant Information Network (PIN) data; and (4) application of the Information Engineering Workbench. The resulting design for this system engineer workstation can provide a reference for design of plant-specific systems

  8. Next Generation BioPhotonics Workstation

    DEFF Research Database (Denmark)

    Bañas, Andrew Rafael; Palima, Darwin; Tauro, Sandeep

    We will outline the specs of our Biophotonics Workstation that can generate up to 100 reconfigurable laser-traps making 3D real-time optical manipulation of advanced structures, cells or tiny particles possible with the use of joysticks or gaming devices. Optically actuated nanoneedles may be functionalized or directly used to perforate targeted cells at specific locations or force the complete separation of dividing cells, among other functions that can be very useful for microbiologists or biomedical researchers.

  9. Benchmarking of SIMULATE-3 on engineering workstations

    International Nuclear Information System (INIS)

    Karlson, C.F.; Reed, M.L.; Webb, J.R.; Elzea, J.D.

    1990-01-01

    The nuclear fuel management department of Arizona Public Service Company (APS) has evaluated various computer platforms for a departmental engineering and business work-station local area network (LAN). Historically, centralized mainframe computer systems have been utilized for engineering calculations. Increasing usage and the resulting longer response times on the company mainframe system and the relative cost differential between a mainframe upgrade and workstation technology justified the examination of current workstations. A primary concern was the time necessary to turn around routine reactor physics reload and analysis calculations. Computers ranging from a Definicon 68020 processing board in an AT compatible personal computer up to an IBM 3090 mainframe were benchmarked. The SIMULATE-3 advanced nodal code was selected for benchmarking based on its extensive use in nuclear fuel management. SIMULATE-3 is used at APS for reload scoping, design verification, core follow, and providing predictions of reactor behavior under nominal conditions and planned reactor maneuvering, such as axial shape control during start-up and shutdown

  10. MEDUSA - An overset grid flow solver for network-based parallel computer systems

    Science.gov (United States)

    Smith, Merritt H.; Pallis, Jani M.

    1993-01-01

    Continuing improvement in processing speed has made it feasible to solve the Reynolds-Averaged Navier-Stokes equations for simple three-dimensional flows on advanced workstations. Combining multiple workstations into a network-based heterogeneous parallel computer allows the application of programming principles learned on MIMD (Multiple Instruction Multiple Data) distributed memory parallel computers to the solution of larger problems. An overset-grid flow solution code has been developed which uses a cluster of workstations as a network-based parallel computer. Inter-process communication is provided by the Parallel Virtual Machine (PVM) software. Solution speed equivalent to one-third of a Cray-YMP processor has been achieved from a cluster of nine commonly used engineering workstation processors. Load imbalance and communication overhead are the principal impediments to parallel efficiency in this application.
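
    The abstract names load imbalance and communication overhead as the principal impediments to parallel efficiency. One common way to quantify the imbalance, sketched below in C with MPI rather than the PVM calls MEDUSA actually uses, is to time each node's share of the solve and compare the slowest node with the average; the solver stub is a placeholder.

        #include <mpi.h>
        #include <stdio.h>
        #include <unistd.h>

        /* Placeholder for the per-node flow-solver work on its grid blocks. */
        static void solve_local_blocks(int rank) { usleep(1000 * (rank + 1)); }

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            MPI_Barrier(MPI_COMM_WORLD);
            double t0 = MPI_Wtime();
            solve_local_blocks(rank);
            double t = MPI_Wtime() - t0;

            double tmax, tsum;
            MPI_Reduce(&t, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
            MPI_Reduce(&t, &tsum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0)
                printf("load imbalance = %.2f (max/avg step time)\n",
                       tmax / (tsum / size));

            MPI_Finalize();
            return 0;
        }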

  11. THE BEOWULF CLUSTER OF THE NATIONAL BIOINFORMATICS CENTER: DESIGN, ASSEMBLY AND PRELIMINARY EVALUATION

    Directory of Open Access Journals (Sweden)

    Juan Pedro Febles Rodríguez

    2003-12-01

    Full Text Available

    The use of computer clusters in research fields that require massive computation has grown in recent years, ever since Becker and Sterling built the first Beowulf cluster in 1994. This article describes the design of the cluster, from the selection of its components, through its assembly, to its evaluation. For the first two aspects, the explanation is limited to a description of the cluster's hardware and software architecture. The performance of the cluster is evaluated with several benchmark programs and the results are compared with those of another, similar cluster. Finally, the possible causes of the observed differences are discussed and ...

  12. Iterative solution of general sparse linear systems on clusters of workstations

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Gen-Ching; Saad, Y. [Univ. of Minnesota, Minneapolis, MN (United States)

    1996-12-31

    Solving sparse irregularly structured linear systems on parallel platforms poses several challenges. First, sparsity makes it difficult to exploit data locality, whether in a distributed or shared memory environment. A second, perhaps more serious challenge, is to find efficient ways to precondition the system. Preconditioning techniques which have a large degree of parallelism, such as multicolor SSOR, often have a slower rate of convergence than their sequential counterparts. Finally, a number of other computational kernels such as inner products could erode any gains obtained from parallel speed-ups, and this is especially true on workstation clusters where start-up times may be high. In this paper we discuss these issues and report on our experience with PSPARSLIB, an on-going project for building a library of parallel iterative sparse matrix solvers.
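
    A concrete instance of the inner-product concern: in a distributed Krylov iteration every dot product requires a global reduction, and on a workstation cluster the start-up latency of that reduction can outweigh the arithmetic. A hedged C/MPI sketch follows (assuming each process holds one contiguous slice of the vectors; PSPARSLIB's own kernels are not shown here).

        #include <mpi.h>

        /* Global dot product of two distributed vectors: each process holds a
           contiguous slice of length nlocal.  The MPI_Allreduce is the step
           whose latency hurts on workstation clusters, which is why parallel
           solvers try to fuse or defer such reductions. */
        double ddot_dist(const double *x, const double *y, int nlocal,
                         MPI_Comm comm)
        {
            double local = 0.0, global = 0.0;
            for (int i = 0; i < nlocal; i++)
                local += x[i] * y[i];
            MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, comm);
            return global;
        }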

  13. An open architecture for medical image workstation

    Science.gov (United States)

    Liang, Liang; Hu, Zhiqiang; Wang, Xiangyun

    2005-04-01

    To deal with the difficulties of integrating various medical image viewing and processing technologies with a variety of clinical and departmental information systems, and at the same time to overcome the performance constraints of transferring and processing large-scale, ever-increasing image data in the healthcare enterprise, we design and implement a flexible, usable and high-performance architecture for medical image workstations. This architecture is not developed for radiology only, but for any workstation in any application environment that may need medical image retrieval, viewing, and post-processing. It consists of an infrastructure named Memory PACS and different kinds of image applications built on it. The Memory PACS is in charge of image data caching, pre-fetching and management; it provides image applications with high-speed image data access and very reliable DICOM network I/O. For the image applications, we use dynamic component technology to separate the performance-constrained modules from the flexibility-constrained modules, so that different image viewing or processing technologies can be developed and maintained independently. We also develop a weakly coupled collaboration service through which these image applications can communicate with each other or with third-party applications. We applied this architecture in developing our product line and it works well. In our clinical sites, this architecture is applied not only in the Radiology Department, but also in the Ultrasonic, Surgery, Clinics, and Consultation Center departments. Given that each department concerned has its own requirements and business routines, and that they all have different image processing technologies and image display devices, our workstations are still able to maintain high performance and high usability.

  14. Visual observation of digitalised signals by workstations

    International Nuclear Information System (INIS)

    Navratil, J.; Akiyama, A.; Mimashi, T.

    1994-01-01

    The idea to have on-line information about the behavior of betatron tune, as a first step to the future automatic control of TRISTAN accelerator tune, appeared near the end of 1991. At the same time, other suggestions concerning a rejuvenation of the existing Control System arose and therefore the newly created project ''System for monitoring betatron tune'' (SMBT) started with several goals: - to obtain new on-line information about the beam behavior during the acceleration time, - to test the way of possible extension and replacement of the existing control system of TRISTAN, - to get experience with the workstation and XWindow software

  15. Habitat Demonstration Unit Medical Operations Workstation Upgrades

    Science.gov (United States)

    Trageser, Katherine H.

    2011-01-01

    This paper provides an overview of the design and fabrication associated with upgrades for the Medical Operations Workstation in the Habitat Demonstration Unit. The work spanned a ten week period. The upgrades will be used during the 2011 Desert Research and Technology Studies (Desert RATS) field campaign. Upgrades include a deployable privacy curtain system, a deployable tray table, an easily accessible biological waste container, reorganization and labeling of the medical supplies, and installation of a retractable camera. All of the items were completed within the ten week period.

  16. Videoconferencing using workstations in the ATLAS collaboration

    International Nuclear Information System (INIS)

    Onions, C.; Blokzijl, K. Bos

    1994-01-01

    The ATLAS collaboration consists of about 1000 physicists from close to 100 institutes around the world. This number is expected to grow over the coming years. The authors realized that they needed to do something to allow people to participate in meetings held at CERN without having to travel and hence they started a pilot project in July, 1993 to look into this. Colleagues from Nikhef already had experience of international network meetings (e.g. RIPE) using standard UNIX workstations and public domain software tools using the MBONE, hence they investigated this as a first priority

  17. Studies on radio-diagnosis workstations

    International Nuclear Information System (INIS)

    Niguet, A.

    2008-01-01

    Radio-diagnosis ranges from mammography to interventional radiology; it represents the great majority of medical examinations and is therefore the main source of exposure for the population. The author gives an overview of methods for workstation assessment, mainly based on the dose-area product. She indicates the factors affecting the radiation quantity and evokes the influence of the type of examination. Measurements enable workers to be classified, an adapted dosimetric follow-up to be implemented, working areas to be delimited, collective and individual protections to be put in place, and recommendations to be drafted. Results obtained for a cardiologist are presented

  18. Optimisation of a parallel ocean general circulation model

    OpenAIRE

    M. I. Beare; D. P. Stevens

    1997-01-01

    International audience; This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by...

  19. Design of a tritium decontamination workstation based on plasma cleaning

    International Nuclear Information System (INIS)

    Antoniazzi, A.B.; Shmayda, W.T.; Fishbien, B.F.

    1993-01-01

    A design for a tritium decontamination workstation based on plasma cleaning is presented. The activity of tritiated surfaces is significantly reduced through plasma-surface interactions within the workstation. Such a workstation in a tritium environment can routinely be used to decontaminate tritiated tools and components. The main advantage of such a station is the absence of low-level tritiated liquid waste: the waste products are gaseous tritiated species, which can, with present technology, be separated and contained

  20. A RISC/UNIX workstation second stage trigger

    International Nuclear Information System (INIS)

    Foreman, W.M.; Amann, J.F.; Fu, S.; Kozlowski, T.; Naivar, F.J.; Oothoudt, M.A.; Shelley, F.

    1992-01-01

    Recent advances in Reduced Instruction Set Computer (RISC) workstations have greatly altered the economics of processing power available for experiments. In addition VME interfaces available for many of these workstations make it possible to use them in experiment frontends for filtering and compressing data. Such a second stage trigger has been implemented at LAMPF using a commercially available workstation and VME interface. The implementation is described and measurements of data transfer speeds are presented in this paper

  1. From parallel to distributed computing for reactive scattering calculations

    International Nuclear Information System (INIS)

    Lagana, A.; Gervasi, O.; Baraglia, R.

    1994-01-01

    Some reactive scattering codes have been ported to different innovative computer architectures, ranging from massively parallel machines to clustered workstations. The porting has required a drastic restructuring of the codes to single out computationally decoupled, CPU-intensive subsections. The suitability of different theoretical approaches for parallel and distributed computing restructuring is discussed and the efficiency of the related algorithms evaluated

  2. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

    Full Text Available We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  3. Performance of the coupled thermalhydraulics/neutron kinetics code R/P/C on workstation clusters and multiprocessor systems

    International Nuclear Information System (INIS)

    Hammer, C.; Paffrath, M.; Boeer, R.; Finnemann, H.; Jackson, C.J.

    1996-01-01

    The light water reactor core simulation code PANBOX has been coupled with the transient analysis code RELAP5 for the purpose of performing plant safety analyses with a three-dimensional (3-D) neutron kinetics model. The system has been parallelized to improve the computational efficiency. The paper describes the features of this system with emphasis on performance aspects. Performance results are given for different types of parallelization, i. e. for using an automatic parallelizing compiler, using the portable PVM platform on a workstation cluster, using PVM on a shared memory multiprocessor, and for using machine dependent interfaces. (author)

  4. Binary black holes on a budget: simulations using workstations

    International Nuclear Information System (INIS)

    Marronetti, Pedro; Tichy, Wolfgang; Bruegmann, Bernd; Gonzalez, Jose; Hannam, Mark; Husa, Sascha; Sperhake, Ulrich

    2007-01-01

    Binary black hole simulations have traditionally been computationally very expensive: current simulations are performed on supercomputers involving dozens if not hundreds of processors, thus systematic studies of the parameter space of binary black hole encounters still seem prohibitive with current technology. Here we show how the multi-layered refinement level code BAM can be used on dual-processor workstations to simulate certain binary black hole systems. BAM, based on the moving punctures method, provides grid structures composed of boxes of increasing resolution near the centre of the grid. In the case of binaries, the highest resolution boxes are placed around each black hole and they track them in their orbits until the final merger, when a single set of levels surrounds the black hole remnant. This is particularly useful when simulating spinning black holes, since the gravitational field gradients are larger. We present simulations of binaries with equal-mass black holes with spins parallel to the binary axis and intrinsic magnitude of S/m² = 0.75. Our results compare favourably to those of previous simulations of this particular system. We show that the moving punctures method produces stable simulations at maximum spatial resolutions up to M/160 and for durations of up to the equivalent of 20 orbital periods

  5. Modelling of Energy Expenditure at Welding Workstations: Effect of ...

    African Journals Online (AJOL)

    The welding workstation usually generates intense heat during operations, which may affect the welder's health if not properly controlled, and can also affect the performance of the welder at work. Consequently, effort to control the conditions of the welding workstation is essential, and is therefore pursued in this paper.

  6. The biomechanical and physiological effect of two dynamic workstations

    NARCIS (Netherlands)

    Botter, J.; Burford, E.M.; Commissaris, D.; Könemann, R.; Mastrigt, S.H.V.; Ellegast, R.P.

    2013-01-01

    The aim of this research paper was to investigate the effect, both biomechanically and physiologically, of two dynamic workstations currently available on the commercial market. The dynamic workstations tested, namely the Treadmill Desk by LifeSpan and the LifeBalance Station by RightAngle, were

  7. Imaging workstations for computer-aided primatology: promises and pitfalls.

    Science.gov (United States)

    Vannier, M W; Conroy, G C

    1989-01-01

    In this paper, the application of biomedical imaging workstations to primatology will be explained and evaluated. The technological basis, computer hardware and software aspects, and the various uses of several types of workstations will all be discussed. The types of workstations include: (1) Simple - these display-only workstations, which function as electronic light boxes, have applications as terminals to picture archiving and communication (PAC) systems. (2) Diagnostic reporting - image-processing workstations that include the ability to perform straightforward manipulations of gray scale and raw data values will be considered for operations such as histogram equalization (whether adaptive or global), gradient edge finders, contour generation, and region of interest, as well as other related functions. (3) Manipulation systems - three-dimensional modeling and computer graphics with application to radiation therapy treatment planning, and surgical planning and evaluation will be considered. A technology of prime importance in the function of these workstations lies in communications and networking. The hierarchical organization of an electronic computer network and workstation environment with the interrelationship of simple, diagnostic reporting, and manipulation workstations to a coaxial or fiber optic network will be analyzed.

  8. Post-deployment usability evaluation of a radiology workstation

    NARCIS (Netherlands)

    Jorritsma, Wiard; Cnossen, Fokie; Dierckx, Rudi A.; Oudkerk, Matthijs; Van Ooijen, Peter M. A.

    Objectives: To determine the number, nature and severity of usability issues radiologists encounter while using a commercially available radiology workstation in clinical practice, and to assess how well the results of a pre-deployment usability evaluation of this workstation generalize to clinical

  9. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recently published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  10. Parallelization of ITOUGH2 using PVM

    International Nuclear Information System (INIS)

    Finsterle, Stefan

    1998-01-01

    ITOUGH2 inversions are computationally intensive because the forward problem must be solved many times to evaluate the objective function for different parameter combinations or to numerically calculate sensitivity coefficients. Most of these forward runs are independent of each other and can therefore be performed in parallel. Message passing based on the Parallel Virtual Machine (PVM) system has been implemented into ITOUGH2 to enable parallel processing of ITOUGH2 jobs on a heterogeneous network of Unix workstations. This report describes the PVM system and its implementation into ITOUGH2. Instructions are given for installing PVM, compiling ITOUGH2-PVM for use on a workstation cluster, the preparation of an ITOUGH2 input file under PVM, and the execution of an ITOUGH2-PVM application. Examples are discussed, demonstrating the use of ITOUGH2-PVM
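
    ITOUGH2's PVM implementation is not reproduced in this report summary; the sketch below merely illustrates, in C and using MPI instead of PVM, the self-scheduling master/worker pattern that suits independent forward runs: the master hands out parameter-set indices as workers finish, so slow workstations on a heterogeneous network simply receive fewer runs. The forward-run stub and job count are invented.

        #include <mpi.h>
        #include <stdio.h>

        #define NJOBS    20
        #define TAG_WORK 1
        #define TAG_DONE 2

        /* Placeholder for one forward simulation evaluating the objective
           function for parameter set `job`. */
        static double forward_run(int job) { return (double)job * job; }

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            if (rank == 0) {            /* master: self-scheduling task farm   */
                int next = 0, active = 0, stop = -1;
                double obj;
                MPI_Status st;
                /* assumes at least one worker and no more workers than jobs */
                for (int w = 1; w < size && next < NJOBS; w++) {
                    MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
                    next++; active++;
                }
                while (active > 0) {
                    MPI_Recv(&obj, 1, MPI_DOUBLE, MPI_ANY_SOURCE, TAG_DONE,
                             MPI_COMM_WORLD, &st);
                    active--;
                    printf("objective from worker %d: %g\n", st.MPI_SOURCE, obj);
                    if (next < NJOBS) {
                        MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                                 MPI_COMM_WORLD);
                        next++; active++;
                    } else {
                        MPI_Send(&stop, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                                 MPI_COMM_WORLD);
                    }
                }
            } else {                    /* worker: run jobs until stop signal  */
                int job;
                for (;;) {
                    MPI_Recv(&job, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                    if (job < 0) break;
                    double obj = forward_run(job);
                    MPI_Send(&obj, 1, MPI_DOUBLE, 0, TAG_DONE, MPI_COMM_WORLD);
                }
            }
            MPI_Finalize();
            return 0;
        }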

  11. High-performance mass storage system for workstations

    Science.gov (United States)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when using Input/Output (I/O) intensive applications, the RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even when using standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck process. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload a RISC workstation of I/O related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost, while maintaining high-I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept in the magnetic disk for fast retrieval. The optical disks are used as archive

  12. Implementations of BLAST for parallel computers.

    Science.gov (United States)

    Jülich, A

    1995-02-01

    The BLAST sequence comparison programs have been ported to a variety of parallel computers-the shared memory machine Cray Y-MP 8/864 and the distributed memory architectures Intel iPSC/860 and nCUBE. Additionally, the programs were ported to run on workstation clusters. We explain the parallelization techniques and consider the pros and cons of these methods. The BLAST programs are very well suited for parallelization for a moderate number of processors. We illustrate our results using the program blastp as an example. As input data for blastp, a 799 residue protein query sequence and the protein database PIR were used.

  13. Massively parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Krasheninnikov, S.I.; Craddock, G.G.; Djordjevic, V.

    1996-01-01

    The Fokker-Planck code ALLA, recently developed for workstations, simulates the temporal evolution of 1V, 2V and 1D2V collisional edge plasmas. In this work we present the results of parallelizing the code on the CRI T3D massively parallel platform (the ALLAp version). Simultaneously, we benchmark the 1D2V parallel version against an analytic self-similar solution of the collisional kinetic equation. This test is not trivial, as it demands a very strong spatial temperature and density variation within the simulation domain. (orig.)

  14. [Design and development of the DSA digital subtraction workstation].

    Science.gov (United States)

    Peng, Wen-Xian; Peng, Tian-Zhou; Xia, Shun-Ren; Jin, Guang-Bo

    2008-05-01

    Based on the patient examination criteria and the demands of all related departments, a DSA digital subtraction workstation has been successfully designed; it is introduced in this paper through an analysis of the characteristics of the video source of a DSA system manufactured by GE, which has no DICOM standard interface. The workstation includes an image-capturing gateway and post-processing software. With the developed workstation, all images from this early DSA equipment are transformed into DICOM format and can then be shared among different machines.

  15. A versatile nondestructive evaluation imaging workstation

    Science.gov (United States)

    Chern, E. James; Butler, David W.

    1994-01-01

    Ultrasonic C-scan and eddy current imaging systems are of the pointwise type evaluation systems that rely on a mechanical scanner to physically maneuver a probe relative to the specimen point by point in order to acquire data and generate images. Since the ultrasonic C-scan and eddy current imaging systems are based on the same mechanical scanning mechanisms, the two systems can be combined using the same PC platform with a common mechanical manipulation subsystem and integrated data acquisition software. Based on this concept, we have developed an IBM PC-based combined ultrasonic C-scan and eddy current imaging system. The system is modularized and provides capacity for future hardware and software expansions. Advantages associated with the combined system are: (1) eliminated duplication of the computer and mechanical hardware, (2) unified data acquisition, processing and storage software, (3) reduced setup time for repetitious ultrasonic and eddy current scans, and (4) improved system efficiency. The concept can be adapted to many engineering systems by integrating related PC-based instruments into one multipurpose workstation such as dispensing, machining, packaging, sorting, and other industrial applications.

  16. The advanced software development workstation project

    Science.gov (United States)

    Fridge, Ernest M., III; Pitman, Charles L.

    1991-01-01

    The Advanced Software Development Workstation (ASDW) task is researching and developing the technologies required to support Computer Aided Software Engineering (CASE) with the emphasis on those advanced methods, tools, and processes that will be of benefit to support all NASA programs. Immediate goals are to provide research and prototype tools that will increase productivity, in the near term, in projects such as the Software Support Environment (SSE), the Space Station Control Center (SSCC), and the Flight Analysis and Design System (FADS) which will be used to support the Space Shuttle and Space Station Freedom. Goals also include providing technology for development, evolution, maintenance, and operations. The technologies under research and development in the ASDW project are targeted to provide productivity enhancements during the software life cycle phase of enterprise and information system modeling, requirements generation and analysis, system design and coding, and system use and maintenance. On-line user's guides will assist users in operating the developed information system with knowledge base expert assistance.

  17. Pc-Based Floating Point Imaging Workstation

    Science.gov (United States)

    Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin

    1989-07-01

    The medical, military, scientific and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems and as these systems become available, more and more applications are discovered and more demands are made. From the designer's perspective the challenge to meet these demands forces him to attack the problem of imaging from a different perspective. The computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications but at the same time anticipating new, unthought of uses. Here at the University of Washington Image Processing Systems Lab (IPSL) we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal to provide powerful and flexible, floating point processing capabilities, along with graphics functions in an affordable package suitable for diverse environments and many applications.

  18. Workstation Table Engineering Model Design, Development, Fabrication, and Testing

    Science.gov (United States)

    2012-05-01

    This research effort is focused on providing a workstation table design that will reduce the risk of occupant injuries due to secondary impacts and to compartmentalize the occupants to prevent impacts with other objects and/or passengers seated acros...

  19. Criticality codes migration to workstations at the Hanford site

    International Nuclear Information System (INIS)

    Miller, E.M.

    1993-01-01

    Westinghouse Hanford Company, Hanford Site Operations contractor, Richland, Washington, currently runs criticality codes on the Cray X-MP EA/232 computer but has recommended that US Department of Energy DOE-Richland replace the Cray with more economical workstations

  20. Workstation studies and radiation protection; Etudes de postes et radioprotection

    Energy Technology Data Exchange (ETDEWEB)

    Lahaye, T. [Direction des relations du travail, 75 - Paris (France); Donadille, L.; Rehel, J.L.; Paquet, F. [Institut de Radioprotection et de Surete Nucleaire, 92 - Fontenay-aux-Roses (France); Beneli, C. [Paris-5 Univ., 75 (France); Cordoliani, Y.S. [Societe Francaise de Radioprotection, 92 - Fontenay-aux-Roses (France); Vrigneaud, J.M. [Assistance Publique - Hopitaux de Paris, 75 (France); Gauron, C. [Institut National de Recherche et de Securite, 75 - Paris (France); Petrequin, A.; Frison, D. [Association des Medecins du Travail des Salaries du Nucleaire (France); Jeannin, B. [Electricite de France (EDF), 75 - Paris (France); Charles, D. [Polinorsud (France); Carballeda, G. [cabinet Indigo Ergonomie, 33 - Merignac (France); Crouail, P. [Centre d' Etude sur l' Evaluation de la Protection dans le Domaine Nucleaire, 92 - Fontenay-aux-Roses (France); Valot, C. [IMASSA, 91 - Bretigny-sur-Orge (France)

    2006-07-01

    This day on workstation studies for worker follow-up was organised by the research and health section. Aimed at company doctors, persons competent in radiation protection and safety engineers, it presented examples of methodologies and applications in the medical, industrial and research domains, thereby contributing to a better understanding and application of regulatory measures. Workstation analysis should allow a reduction of exposures and risks and lead to optimization of the medical follow-up. The agenda of the day covered the following subjects: evolution of the regulations on the delimitation of regulated zones where worker protection measures are strengthened; presentation of the I.R.S.N. guide for carrying out a workstation study; implementation of a workstation study: the case of radiology; workstation studies in the research area; should operational dosimetry be imposed in radiodiagnostic services? experience feedback of a person competent in radiation protection (P.C.R.) in a hospital environment; radiation protection: development of a good-practice guide in the medical field; the activities file in nuclear power plants: a risk evaluation tool for prevention, with methodological presentation and examples; isolated workstation study; experience feedback of a service provider; contribution of ergonomics to the characterization of determinants in ionizing radiation exposure situations; workstation studies for internal contamination in fuel cycle facilities and use of the results in the medical follow-up; R.E.L.I.R. and the necessity of workstation studies; consideration of the human factor. (N.C.)

  1. Parallelising a molecular dynamics algorithm on a multi-processor workstation

    Science.gov (United States)

    Müller-Plathe, Florian

    1990-12-01

    The Verlet neighbour-list algorithm is parallelised for a multi-processor Hewlett-Packard/Apollo DN10000 workstation. The implementation makes use of memory shared between the processors. It is a genuine master-slave approach by which most of the computational tasks are kept in the master process and the slaves are only called to do part of the nonbonded forces calculation. The implementation features elements of both fine-grain and coarse-grain parallelism. Apart from three calls to library routines, two of which are standard UNIX calls, and two machine-specific language extensions, the whole code is written in standard Fortran 77. Hence, it may be expected that this parallelisation concept can be transfered in parts or as a whole to other multi-processor shared-memory computers. The parallel code is routinely used in production work.
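
    The Fortran 77 implementation itself is not shown in the abstract; purely to illustrate how the nonbonded-force work over a Verlet neighbour list can be shared among processors on a shared-memory workstation, here is a hedged C/OpenMP sketch (the data layout, the use of a full neighbour list and the pair potential are assumptions, and the master-slave mechanics of the original are replaced by a parallel loop).

        #include <stdio.h>

        #define NATOMS 1000
        #define MAXNB  64

        /* Verlet neighbour list: nb[i][0..nnb[i]-1] holds the neighbours of
           atom i.  The list is assumed "full" (each pair stored twice) so that
           every thread writes only its own atom's force and no reduction or
           locking is needed. */
        static double x[NATOMS][3], f[NATOMS][3];
        static int nb[NATOMS][MAXNB], nnb[NATOMS];

        static void nonbonded_forces(void)
        {
            #pragma omp parallel for schedule(static)
            for (int i = 0; i < NATOMS; i++) {
                double fi[3] = {0.0, 0.0, 0.0};
                for (int k = 0; k < nnb[i]; k++) {
                    int j = nb[i][k];
                    double r2 = 0.0, d[3];
                    for (int c = 0; c < 3; c++) {
                        d[c] = x[i][c] - x[j][c];
                        r2 += d[c] * d[c];
                    }
                    /* Lennard-Jones-like pair force magnitude (placeholder). */
                    double inv2 = 1.0 / r2, inv6 = inv2 * inv2 * inv2;
                    double fmag = 24.0 * inv2 * inv6 * (2.0 * inv6 - 1.0);
                    for (int c = 0; c < 3; c++)
                        fi[c] += fmag * d[c];
                }
                for (int c = 0; c < 3; c++)
                    f[i][c] = fi[c];
            }
        }

        int main(void)
        {
            /* ... fill x, nb, nnb from the simulation ... */
            nonbonded_forces();
            printf("f[0] = (%g, %g, %g)\n", f[0][0], f[0][1], f[0][2]);
            return 0;
        }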

  2. Insulation coordination workstation for AC and DC substations

    International Nuclear Information System (INIS)

    Booth, R.R.; Hileman, A.R.

    1990-01-01

    The Insulation Coordination Workstation was designed to aid the substation design engineer in the insulation coordination process. The workstation utilizes state of the art computer technology to present a set of tools necessary for substation insulation coordination, and to support the decision making process for all aspects of insulation coordination. The workstation is currently being developed for personal computers supporting OS/2 Presentation Manager. Modern Computer-Aided Software Engineering (CASE) technology was utilized to create an easily expandable framework which currently consists of four modules, each accessing a central application database. The heart of the workstation is a library of user-friendly application programs for the calculation of important voltage stresses used for the evaluation of insulation coordination. The Oneline Diagram is a graphic interface for data entry into the EPRI distributed EMTP program, which allows the creation of complex systems on the CRT screen using simple mouse clicks and keyboard entries. Station shielding is graphically represented in the Geographic Viewport using a three-dimensional substation model, and the interactive plotting package allows plotting of EPRI EMTP output results on the CRT screen, printer, or pen plotter. The Insulation Coordination Workstation was designed by Advanced Systems Technology (AST), a division of ABB Power Systems, Inc., and sponsored by the Electric Power Research Institute under RP 2323-5, AC/DC Insulation Coordination Workstation

  3. Radiology workstation for mammography: preliminary observations, eyetracker studies, and design

    Science.gov (United States)

    Beard, David V.; Johnston, Richard E.; Pisano, Etta D.; Hemminger, Bradley M.; Pizer, Stephen M.

    1991-07-01

    For the last four years, the UNC FilmPlane project has focused on constructing a radiology workstation facilitating CT interpretations equivalent to those with film and viewbox. Interpretation of multiple CT studies was originally chosen because handling such large numbers of images was considered to be one of the most difficult tasks that could be performed with a workstation. The authors extend the FilmPlane design to address mammography. The high resolution and contrast demands, coupled with the number of images often cross-compared, make mammography a difficult challenge for the workstation designer. This paper presents the results of preliminary work with workstation interpretation of mammography. Background material is presented to justify why the authors believe electronic mammographic workstations could improve health care delivery. The results of several observation sessions and a preliminary eyetracker study of multiple-study mammography interpretations are described. Finally, tentative conclusions are presented on what a mammographic workstation might look like and how it would have to perform to meet clinical demands effectively.

  4. EPRI root cause advisory workstation 'ERCAWS'

    International Nuclear Information System (INIS)

    Singh, A.; Chiu, C.; Hackman, R.B.

    1993-01-01

    EPRI and its contractor FPI International are developing Personal Computer (PC), Microsoft Windows based software to assist power plant engineers and maintenance personnel to diagnose and correct root causes of power plant equipment failures. The EPRI Root Cause Advisory Workstation (ERCAWS) is easy to use and able to handle knowledge bases and diagnostic tools for an unlimited number of equipment types. Knowledge base data is based on power industry experience and root cause analysis from many sources - Utilities, EPRI, US government, FPI, and International sources. The approach used in the knowledge base handling portion of the software is case-study oriented with the engineer selecting the equipment type and symptom identification using a combination of text, photographs, and animation, displaying dynamic physical phenomena involved. Root causes, means for confirmation, and corrective actions are then suggested in a simple, user friendly format. The first knowledge base being released with ERCAWS is the Valve Diagnostic Advisor module; covering six common valve types and some motor operator and air operator items. More modules are under development with Heat Exchanger, Bolt, and Piping modules currently in the beta testing stage. A wide variety of diagnostic tools are easily incorporated into ERCAWS and accessed through the main screen interface. ERCAWS is designed to fulfill the industry need for user-friendly tools to perform power plant equipment failure root cause analysis, and training for engineering, operations and maintenance personnel on how components can fail and how to reduce failure rates or prevent failure from occurring. In addition, ERCAWS serves as a vehicle to capture lessons learned from industry wide experience. (author)

  5. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    1997-10-01

    Full Text Available This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.
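
    For readers unfamiliar with the message-passing style referred to above, the fragment below is a generic illustration (not taken from the paper) of the kind of operation such high-level routines typically hide: a one-dimensional domain decomposition whose subdomains exchange ghost rows with their neighbours. mpi4py, the array shapes and the routine name halo_exchange are all assumptions made for the sketch.

        # Hedged sketch of ghost-row (halo) exchange for a 1-D domain decomposition.
        # Run with, e.g.:  mpiexec -n 4 python halo_demo.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        nx_local, ny = 50, 100                       # local rows + full width
        field = np.full((nx_local + 2, ny), rank, dtype=np.float64)  # +2 ghost rows

        up   = rank - 1 if rank > 0 else MPI.PROC_NULL
        down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        def halo_exchange(f):
            # send my top real row up, receive my bottom ghost row from below
            comm.Sendrecv(sendbuf=f[1, :],  dest=up,   recvbuf=f[-1, :], source=down)
            # send my bottom real row down, receive my top ghost row from above
            comm.Sendrecv(sendbuf=f[-2, :], dest=down, recvbuf=f[0, :],  source=up)

        halo_exchange(field)
        # a stencil (e.g. a 5-point Laplacian) can now be applied to the interior rows
        interior = field[1:-1, :]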

  7. Temporal fringe pattern analysis with parallel computing

    International Nuclear Information System (INIS)

    Tuck Wah Ng; Kar Tien Ang; Argentini, Gianluca

    2005-01-01

    Temporal fringe pattern analysis is invaluable in transient phenomena studies but necessitates long processing times. Here we describe a parallel computing strategy based on the single-program multiple-data model and hyperthreading processor technology to reduce the execution time. In a two-node cluster workstation configuration we found that execution periods were reduced by 1.6 times when four virtual processors were used. To allow even lower execution times with an increasing number of processors, the time allocated for data transfer, data read, and waiting should be minimized. Parallel computing is found here to present a feasible approach to reduce execution times in temporal fringe pattern analysis
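
    A minimal sketch of the single-program multiple-data idea described above, assuming an MPI-style process group rather than the authors' two-node hyperthreaded configuration; analyse_frame and the round-robin frame split are illustrative placeholders, not the published code.

        # Hedged SPMD sketch: every process runs the same program on its own frames.
        import numpy as np
        from mpi4py import MPI

        def analyse_frame(frame):
            # placeholder for the real per-frame fringe analysis (e.g. phase extraction)
            return float(np.mean(frame))

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_frames = 1000
        my_frames = [i for i in range(n_frames) if i % size == rank]  # round-robin split

        local_results = {i: analyse_frame(np.random.rand(256, 256)) for i in my_frames}

        # gather per-process results on rank 0 and merge them
        all_results = comm.gather(local_results, root=0)
        if rank == 0:
            merged = {k: v for d in all_results for k, v in d.items()}
            print(len(merged), "frames analysed")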

  8. Speed up of MCACE, a Monte Carlo code for evaluation of shielding safety, by parallel computer, (3)

    International Nuclear Information System (INIS)

    Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka; Onodera, Emi; Imawaka, Tsuneyuki; Yoda, Yoshihisa.

    1993-07-01

    Parallel computing of the MCACE code has been studied on two platforms: 1) the Monte-4, a shared-memory vector-parallel computer, and 2) several networked workstations. On the Monte-4, a disk file was allocated to collect all results computed in parallel by 4 CPUs, each executing a copy of the MCACE code. On the networked workstations, two parallel models were evaluated: 1) a host-node model and 2) the model used on the Monte-4, in which no parallelization software was employed, only standard FORTRAN. Measurements of computing time showed that a speed-up of about 3 was achieved using 4 CPUs of the Monte-4. Further, with 4 workstations connected by network, the parallelized computation ran faster than our scalar mainframe computer, FACOM M-780. (author)
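
    The "copy of the code on each CPU" model mentioned above can be illustrated with a toy Monte Carlo, shown below: each process runs the same program with its own random stream and the tallies are summed at the end. The pi-by-darts estimator, mpi4py and all names are assumptions standing in for the MCACE shielding calculation.

        # Hedged sketch of independent Monte Carlo replicas with a final reduction.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_local = 1_000_000
        rng = np.random.default_rng(seed=1234 + rank)      # independent stream per CPU
        x, y = rng.random(n_local), rng.random(n_local)
        hits_local = np.count_nonzero(x * x + y * y < 1.0)

        hits_total = comm.reduce(hits_local, op=MPI.SUM, root=0)
        if rank == 0:
            print("pi ~", 4.0 * hits_total / (n_local * size))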

  9. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  10. Impact of workstations on criticality analyses at ABB combustion engineering

    International Nuclear Information System (INIS)

    Tarko, L.B.; Freeman, R.S.; O'Donnell, P.F.

    1993-01-01

    During 1991, ABB Combustion Engineering (ABB C-E) made the transition from a CDC Cyber 990 mainframe for nuclear criticality safety analyses to Hewlett Packard (HP)/Apollo workstations. The primary motivation for this change was the improved economics of the workstation and maintaining state-of-the-art technology. The Cyber 990 utilized the NOS operating system with a 60-bit word size. The CPU memory size was limited to 131,100 words of directly addressable memory, with an extended 250,000 words available. The Apollo workstation environment at ABB consists of HP/Apollo 9000/400 series desktop units used by most application engineers, networked with HP/Apollo DN10000 platforms that use a 32-bit word size and function as the computer servers and network administrative CPUs, providing a virtual memory system.

  11. The transition of GTDS to the Unix workstation environment

    Science.gov (United States)

    Carter, D.; Metzinger, R.; Proulx, R.; Cefola, P.

    1995-01-01

    Future Flight Dynamics systems should take advantage of the possibilities provided by current and future generations of low-cost, high performance workstation computing environments with Graphical User Interface. The port of the existing mainframe Flight Dynamics systems to the workstation environment offers an economic approach for combining the tremendous engineering heritage that has been encapsulated in these systems with the advantages of the new computing environments. This paper will describe the successful transition of the Draper Laboratory R&D version of GTDS (Goddard Trajectory Determination System) from the IBM Mainframe to the Unix workstation environment. The approach will be a mix of historical timeline notes, descriptions of the technical problems overcome, and descriptions of associated SQA (software quality assurance) issues.

  12. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  13. Real-time on a standard UNIX workstation?

    International Nuclear Information System (INIS)

    Glanzman, T.

    1992-09-01

    This is a report of an ongoing R&D project which is investigating the use of standard UNIX workstations for the real-time data acquisition from a major new experimental initiative, the SLAC B Factory (PEP II). For this work an IBM RS/6000 workstation running the AIX operating system is used. Real-time extensions to the UNIX operating system are explored and performance measured. These extensions comprise a set of AIX-specific and POSIX-compliant system services. Benchmark comparisons are made with embedded processor technologies. Results are presented for a simple prototype on-line system for laboratory-testing of a new prototype drift chamber.

  14. Argo workstation: a key component of operational oceanography

    Science.gov (United States)

    Dong, Mingmei; Xu, Shanshan; Miao, Qingsheng; Yue, Xinyang; Lu, Jiawei; Yang, Yang

    2018-02-01

    Operational oceanography requires data sets of sufficient quantity, quality and availability, and data products that are timely and effective. Without steady and strong operational systems behind it, operational oceanography cannot proceed far. In this paper we describe an integrated platform named Argo Workstation. It operates as a data processing and management system capable of data collection, automatic data quality control, visual data checking, statistical data search and data service. Once set up, the Argo Workstation provides global, high-quality Argo data to users every day in a timely and effective manner. It has not only played a key role in operational oceanography but has also set an example for other operational systems.

  15. Helical computed tomography and the workstation: introduction to a symbiosis

    International Nuclear Information System (INIS)

    Garcia-Santos, J.M.

    1997-01-01

    We give a brief introduction to the possibilities of a helical computed tomography system when it is combined with a powerful workstation. Fast, volumetric acquisition is, basically, the main advantage of this type of computed tomography. Studying the acquired anatomical and radiopathological information on a workstation, using multiplanar and 3D reconstruction, significantly increases our capacity for analysis in each patient. Only clinical and radiological experience will tell us the proper place this symbiosis occupies among our diagnostic tools. (Author) 11 refs.

  16. A Parallel Multigrid Solver for Viscous Flows on Anisotropic Structured Grids

    Science.gov (United States)

    Prieto, Manuel; Montero, Ruben S.; Llorente, Ignacio M.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    This paper presents an efficient parallel multigrid solver for speeding up the computation of a 3-D model that treats the flow of a viscous fluid over a flat plate. The main interest of this simulation lies in exhibiting some basic difficulties that prevent optimal multigrid efficiencies from being achieved. As the computing platform, we have used Coral, a Beowulf-class system based on Intel Pentium processors and equipped with GigaNet cLAN and switched Fast Ethernet networks. Our study not only examines the scalability of the solver but also includes a performance evaluation of Coral where the investigated solver has been used to compare several of its design choices, namely, the interconnection network (GigaNet versus switched Fast-Ethernet) and the node configuration (dual nodes versus single nodes). As a reference, the performance results have been compared with those obtained with the NAS-MG benchmark.
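
    As background to the multigrid ingredient (independently of the parallel and anisotropic aspects studied in the paper), the sketch below shows a plain two-grid V-cycle for a one-dimensional model problem with weighted-Jacobi smoothing; everything in it is a generic textbook illustration rather than the authors' solver.

        # Hedged sketch of a two-grid V-cycle for u'' = f on (0,1), u(0)=u(1)=0.
        import numpy as np

        def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
            for _ in range(sweeps):
                u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] - h * h * f[1:-1])
            return u

        def two_grid(u, f, h):
            u = jacobi(u, f, h)                                            # pre-smooth
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)   # residual
            rc = r[::2].copy()                                             # restrict (injection)
            ec = jacobi(np.zeros_like(rc), rc, 2 * h, sweeps=50)           # approximate coarse solve
            e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolongate
            u += e                                                         # coarse-grid correction
            return jacobi(u, f, h)                                         # post-smooth

        n = 129
        h = 1.0 / (n - 1)
        x = np.linspace(0.0, 1.0, n)
        f = np.sin(np.pi * x)
        u = np.zeros(n)
        for _ in range(10):
            u = two_grid(u, f, h)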

  17. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  18. Fast 2D FWI on a multi and many-cores workstation.

    Science.gov (United States)

    Thierry, Philippe; Donno, Daniela; Noble, Mark

    2014-05-01

    Following the introduction of x86 co-processors (Xeon Phi) and the performance increase of standard 2-socket workstations using the latest 12-core E5-v2 x86-64 CPUs, we present here an MPI + OpenMP implementation of an acoustic 2D FWI (full waveform inversion) code which runs simultaneously on the CPUs and on the co-processors installed in a workstation. The main advantage of running a 2D FWI on a workstation is being able to quickly evaluate new features such as more complicated wave equations, new cost functions, finite-difference stencils or boundary conditions. Since the co-processor is made of 61 in-order x86 cores, each of them having up to 4 threads, this many-core can be seen as a shared-memory SMP (symmetric multiprocessing) machine with its own IP address. Depending on the vendor, a single workstation can handle several co-processors, making the workstation a personal cluster under the desk. The original Fortran 90 CPU version of the 2D FWI code is simply recompiled to get a Xeon Phi x86 binary. This multi- and many-core configuration uses standard compilers and associated MPI as well as math libraries under Linux; therefore, the cost of code development remains constant, while improving computation time. We choose to implement the code in the so-called symmetric mode to fully use the capacity of the workstation, but we also evaluate the scalability of the code in native mode (i.e. running only on the co-processor) thanks to the Linux ssh and NFS capabilities. The usual care in optimization and SIMD vectorization is taken to ensure optimal performance, and to analyze the application performance and bottlenecks on both platforms. The 2D FWI implementation uses finite-difference time-domain forward modeling and a quasi-Newton (L-BFGS) optimization scheme for the model parameter update. Parallelization is achieved through standard MPI distribution of shot gathers and OpenMP domain decomposition within the co-processor. Taking advantage of the 16
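
    The parallelization pattern named above (MPI distribution of shot gathers, with the per-shot work done locally) can be sketched as follows; compute_shot_gradient is a hypothetical placeholder for the finite-difference forward/adjoint kernel, and a plain gradient step stands in for the L-BFGS update used in the paper.

        # Hedged sketch of MPI shot-gather distribution for an FWI gradient step.
        import numpy as np
        from mpi4py import MPI

        def compute_shot_gradient(model, shot_id):
            # placeholder for forward modelling + adjoint back-propagation of one shot
            rng = np.random.default_rng(shot_id)
            residual = rng.standard_normal(model.shape) * 1e-3
            return 0.5 * float(np.sum(residual ** 2)), residual

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        model = np.full((200, 400), 2000.0)          # 2-D velocity model (m/s)
        n_shots = 64
        my_shots = range(rank, n_shots, size)        # round-robin shot distribution

        local_misfit = 0.0
        local_grad = np.zeros_like(model)
        for s in my_shots:
            m, g = compute_shot_gradient(model, s)
            local_misfit += m
            local_grad += g

        grad = np.empty_like(local_grad)
        comm.Allreduce(local_grad, grad, op=MPI.SUM)       # total gradient on every rank
        misfit = comm.allreduce(local_misfit, op=MPI.SUM)

        model -= 1.0e3 * grad                        # one gradient step (L-BFGS in the paper)
        if rank == 0:
            print("total misfit:", misfit)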

  19. A worldwide flock of Condors : load sharing among workstation clusters

    NARCIS (Netherlands)

    Epema, D.H.J.; Livny, M.; Dantzig, van R.; Evers, X.; Pruyne, J.

    1996-01-01

    Condor is a distributed batch system for sharing the workload of compute-intensive jobs in a pool of unix workstations connected by a network. In such a Condor pool, idle machines are spotted by Condor and allocated to queued jobs, thus putting otherwise unutilized capacity to efficient use. When

  20. Post-deployment usability evaluation of a radiology workstation

    NARCIS (Netherlands)

    Jorritsma, Wiard; Cnossen, Fokie; Dierckx, Rudi; Oudkerk, Matthijs; van Ooijen, Peter

    2015-01-01

    Objective To evaluate the usability of a radiology workstation after deployment in a hospital. Significance In radiology, it is difficult to perform valid pre-deployment usability evaluations due to the heterogeneity of the user group, the complexity of the radiological workflow, and the complexity

  1. BioPhotonics Workstation: a university tech transfer challenge

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Bañas, Andrew Rafael; Tauro, Sandeep

    2011-01-01

    Conventional optical trapping or tweezing is often limited in the achievable trapping range because of high numerical aperture and imaging requirements. To circumvent this, we are developing a next generation BioPhotonics Workstation platform that supports extension modules through a long working...

  2. A methodology to emulate and evaluate a productive virtual workstation

    Science.gov (United States)

    Krubsack, David; Haberman, David

    1992-01-01

    The Advanced Display and Computer Augmented Control (ADCACS) Program at ACT is sponsored by NASA Ames to investigate the broad field of technologies which must be combined to design a 'virtual' workstation for the Space Station Freedom. This program is progressing in several areas and has resulted in the definition of requirements for a workstation. A unique combination of technologies at the ACT Laboratory has been networked to effectively create an experimental environment. This experimental environment allows the integration of nonconventional input devices with a high-power graphics engine within the framework of an expert system shell which coordinates the heterogeneous inputs with the 'virtual' presentation. The flexibility of the workstation evolves as experiments are designed and conducted to evaluate the condition descriptions and rule sets of the expert system shell and its effectiveness in driving the graphics engine. Workstation productivity has been defined by the achievable performance in the emulator of the calibrated 'sensitivity' of input devices, the graphics presentation, the possible optical enhancements to achieve a wide-field-of-view color image, and the flexibility of the conditional descriptions in the expert system shell in adapting to prototype problems.

  3. Effect of One Carpet Weaving Workstation on Upper Trapezius Fatigue

    Directory of Open Access Journals (Sweden)

    Neda Mahdavi

    2016-03-01

    Full Text Available Introduction: This study aimed to investigate the effect of carpet weaving at a proposed workstation on Upper Trapezius (UTr) fatigue during a task cycle. Fatigue in the shoulder is one of the most important precursors of upper limb musculoskeletal disorders, and shoulder-region disorders are among the most prevalent musculoskeletal disorders in carpet weavers. Methods: This cross-sectional study included eight females and three males. During an 80-minute cycle of carpet weaving, electromyography (EMG) signals of the right and left UTr were recorded continuously by surface EMG. After the raw signals were processed, RMS and MPF were taken as the EMG amplitude and frequency parameters. A time-series model and the JASA method were used to assess and classify the EMG parameter changes during working time. Results: According to the JASA method, 58%, 16%, 8% and 8% of the participants experienced fatigue, force increase, force decrease and recovery, respectively, in the right UTr. Also, 50%, 25%, 8% and 16% of the participants experienced fatigue, force increase, force decrease and recovery, respectively, in the left UTr. Conclusions: For the major portion of the weavers, fatigue was the dominant status in the left and right UTr at the proposed workstation during a carpet-weaving task cycle. The results of the study provide detailed information for the optimal design of workstations. Further studies should focus on fatigue in various muscles and time periods in order to design an appropriate and ergonomic carpet-weaving workstation.

  4. Initial experience with a nuclear medicine viewing workstation

    Science.gov (United States)

    Witt, Robert M.; Burt, Robert W.

    1992-07-01

    Graphical User Interface (GUI) workstations are now available from commercial vendors. We recently installed a GUI workstation in our nuclear medicine reading room for exclusive use of staff and resident physicians. The system is built upon a Macintosh platform and has been available as a DELTAmanager from MedImage and more recently as an ICON V from Siemens Medical Systems. The workstation provides only display functions and connects to our existing nuclear medicine imaging system via ethernet. The system has some processing capabilities to create oblique, sagittal and coronal views from transverse tomographic views. Hard copy output is via a screen save device and a thermal color printer. The DELTAmanager replaced a MicroDELTA workstation which had both process and view functions. The mouse-activated GUI has made remarkable changes to physicians' use of the nuclear medicine viewing system. Training time to view and review studies has been reduced from hours to about 30 minutes. Generation of oblique views and display of brain and heart tomographic studies has been reduced from about 30 minutes of technician's time to about 5 minutes of physician's time. Overall operator functionality has been increased so that resident physicians with little prior computer experience can access all images on the image server and display pertinent patient images when consulting with other staff.

  5. Users Guide to VSMOKE-GIS for Workstations

    Science.gov (United States)

    Mary F. Harms; Leonidas G. Lavdas

    1997-01-01

    VSMOKE-GIS was developed to help prescribed burners in the national forests of the Southeastern United States visualize smoke dispersion and to plan prescribed burns. Developed for use on workstations, this decision-support system consists of a graphical user interface, written in Arc/Info Arc Macro Language, and is linked to a FORTRAN computer program. VSMOKE-GIS...

  6. Ergonomics in the computer workstation | Karoney | East African ...

    African Journals Online (AJOL)

    Background: Awareness of the effects of long-term computer use and the application of ergonomics in the computer workstation is important for preventing musculoskeletal disorders, eyestrain and psychosocial effects. Objectives: To determine the awareness of physical and psychological effects of prolonged computer usage ...

  7. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly with the Message Passing Interface (MPI), using parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSC library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSC during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSC framework.
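
    A rough sketch of the Newton-Krylov portion of NKS follows, assuming a toy componentwise residual and SciPy's GMRES in place of the PETSc machinery; the Schwarz domain-decomposition preconditioner and the actual CFD residual are deliberately omitted, so this is illustrative only.

        # Hedged sketch of Jacobian-free Newton-Krylov: the inner GMRES solve sees the
        # Jacobian only through finite-difference Jacobian-vector products.
        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def F(u):                                    # toy nonlinear residual, F(u) = 0 sought
            return u ** 3 + 2.0 * u - 1.0 - np.sin(np.linspace(0, 1, u.size))

        def newton_krylov(u0, tol=1e-10, max_newton=20, eps=1e-7):
            u = u0.copy()
            for _ in range(max_newton):
                r = F(u)
                if np.linalg.norm(r) < tol:
                    break
                def jv(v):                           # matrix-free J(u) @ v
                    return (F(u + eps * v) - r) / eps
                J = LinearOperator((u.size, u.size), matvec=jv)
                du, _ = gmres(J, -r)                 # inner Krylov solve (default tolerances)
                u += du
            return u

        u = newton_krylov(np.zeros(50))
        print(np.linalg.norm(F(u)))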

  8. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  9. The Impact of Ergonomically Designed Workstations on Shoulder EMG Activity during Carpet Weaving

    Directory of Open Access Journals (Sweden)

    Majid Motamedzade

    2014-12-01

    Full Text Available Background: The present study aimed to evaluate the biomechanical exposure of the trapezius muscle in female weavers over a prolonged period at workstation A (suggested by previous studies) and workstation B (proposed by the present study). Methods: Electromyography data were collected from nine females during four hours at each ergonomically designed workstation at the Ergonomics Laboratory, Hamadan, Iran. The design criteria for the ergonomically designed workstations were: 1) weaving height (20 and 3 cm above elbow height for workstations A and B, respectively), and 2) seat type (10° and 0° forward-sloping seat for workstations A and B, respectively). Results: The amplitude probability distribution function (APDF) analysis showed that left and right upper trapezius muscle activity was almost similar at each workstation. Trapezius muscle activity at workstation A was significantly greater than at workstation B (P<0.001). Conclusion: In general, use of workstation B leads to significantly reduced muscle activity levels in the upper trapezius as compared to workstation A in weavers. Despite the positive impact of workstation B in reducing trapezius muscle activity, it seems that constrained postures of the upper arm during weaving may be associated with musculoskeletal symptoms.

  10. Parallelization of simulation code for liquid-gas model of lattice-gas fluid

    International Nuclear Information System (INIS)

    Kawai, Wataru; Ebihara, Kenichi; Kume, Etsuo; Watanabe, Tadashi

    2000-03-01

    A simulation code for hydrodynamical phenomena based on the liquid-gas model of the lattice-gas fluid is parallelized using the MPI (Message Passing Interface) library. The parallelized code can be applied to larger simulations than the non-parallelized code. The calculation times of the parallelized code on the VPP500 (a vector-parallel supercomputer with distributed memory units), the AP3000 (a scalar-parallel server with distributed memory units), and a workstation cluster decreased in inverse proportion to the number of processors. (author)

  11. Parallel preconditioned conjugate gradient algorithm applied to neutron diffusion problem

    International Nuclear Information System (INIS)

    Majumdar, A.; Martin, W.R.

    1992-01-01

    Numerical solution of the neutron diffusion problem requires solving a linear system of equations such as Ax = b, where A is an n x n symmetric positive definite (SPD) matrix; x and b are vectors with n components. The preconditioned conjugate gradient (PCG) algorithm is an efficient iterative method for solving such a linear system of equations. In this paper, the authors describe the implementation of a parallel PCG algorithm on a shared memory machine (BBN TC2000) and on a distributed workstation (IBM RS6000) environment created by the parallel virtual machine parallelization software
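
    For reference, the PCG iteration itself is sketched below in serial form, with a simple Jacobi (diagonal) preconditioner chosen for the illustration; in the parallel implementations described above it is the matrix-vector product and the inner products that are distributed across processors. The test matrix and preconditioner are assumptions made for the sketch.

        # Hedged serial sketch of preconditioned conjugate gradients for SPD Ax = b.
        import numpy as np

        def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv_diag * r                     # apply preconditioner M^{-1}
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv_diag * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # small SPD test system (1-D diffusion-like matrix)
        n = 100
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
        print(np.linalg.norm(A @ x - b))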

  12. Evaluation of PC-based diagnostic radiology workstations

    International Nuclear Information System (INIS)

    Pollack, T.; Brueggenwerth, G.; Kaulfuss, K.; Niederlag, W.

    2000-01-01

    Material and Methods: Between February 1999 and September 1999, medical users at the hospital Dresden-Friedrichstadt, Germany, tested 7 types of radiology diagnostic workstations. Two types of test methods were used: in test type 1, ergonomic and handling functions were evaluated objectively against 78 selected user requirements; in test type 2, radiologists and radiographers (3+4) performed 23 workflow steps with a subjective evaluation. Results: Using a progressive rating, no product could fully meet the user requirements. From the summary evaluation of test 1 and test 2, the following compliance ratings were calculated for the different products: Rad Works (66%), Magic View (63%), ID-Report (58%), Impax 3000 (53%), Medical Workstation (52%), Pathspeed (46%) and Autorad (39%). (orig.) [de

  13. ISDN communication: Its workstation technology and application system

    Energy Technology Data Exchange (ETDEWEB)

    Sugimura, T; Ogiwara, Y; Saito, T [Hitachi, Ltd., Tokyo (Japan)

    1991-07-01

    This report describes technology for integrated services digital network (ISDN) which allows workstations to process multimedia data and application systems of advanced group teleworking which use such technology. Hitachi has developed workstations which are more powerful, have more functions, and have larger memory capacities. These factors allowed media which require high-speed processing of large quantities of voice and image data to be integrated into the world of conventional text data processing and communications. In addition, the application of group teleworking system has a large impact through the improvements in the office environment, the changes in the style of office work, and the appearance of new businesses. A prototype of this system was exhibited and demonstrated at TELECOM91. 1 ref., 4 figs., 2 tabs.

  14. An approach to develop a PSA workstation in KAERI

    International Nuclear Information System (INIS)

    Kim, T. W.; Han, S. H.; Park, C. K.

    1995-01-01

    This paper describes three kinds of efforts towards the development of a PSA workstation at KAERI: development of a PSA tool (KIRAP), reliability database development, and living PSA tool development. Korea has 9 nuclear power plants (NPPs) in operation and 9 NPPs under design or construction. For the NPPs recently constructed or designed, probabilistic safety assessments (PSAs) have been performed according to Government requirements. For these PSAs, the MS-DOS version of KIRAP has been used. For the consistent data management and ease of information handling needed in PSA, a PSA workstation, KIRAP-Win, is under development for the Windows environment. For the reliability database on component failure rates, human error rates, and common-cause failure rates, data used in international PSAs or reliability data handbooks are collected and processed for use in the PSAs of new Korean plants. Finally, an effort to develop a living PSA tool at KAERI based on the dynamic PSA concept is described.

  15. Experience with workstations for accelerator control at the CERN SPS

    International Nuclear Information System (INIS)

    Ogle, A.; Ulander, J.; Wilkie, I.

    1990-01-01

    The CERN super proton synchrotron (SPS) control system is currently undergoing a major long-term upgrade. This paper reviews progress on the high-level application software with particular reference to the operator interface. An important feature of the control-system upgrade is the move from consoles with a number of fixed screens and limited multitasking ability to workstations with the potential to display a large number of windows and perform a number of independent tasks simultaneously. This workstation environment thus permits the operator to run tasks in one machine for which he previously had to monopolize two or even three old consoles. However, the environment also allows the operator to cover the screen with a multitude of windows, leading to complete confusion. Initial requests to present some form of 'global status' of the console proved to be naive, and several iterations were necessary before the operators were satisfied. (orig.)

  16. Physics and detector simulation facility Type O workstation specifications

    International Nuclear Information System (INIS)

    Chartrand, G.; Cormell, L.R.; Hahn, R.; Jacobson, D.; Johnstad, H.; Leibold, P.; Marquez, M.; Ramsey, B.; Roberts, L.; Scipioni, B.; Yost, G.P.

    1990-11-01

    This document specifies the requirements for the front-end network of workstations of a distributed computing facility. This facility will be needed to perform the physics and detector simulations for the design of Superconducting Super Collider (SSC) detectors, and other computations in support of physics and detector needs. A detailed description of the computer simulation facility is given in the overall system specification document. This document provides revised subsystem specifications for the network of monitor-less Type 0 workstations. The requirements specified in this document supersede the requirements given. In Section 2 a brief functional description of the facility and its use are provided. The list of detailed specifications (vendor requirements) is given in Section 3 and the qualifying requirements (benchmarks) are described in Section 4

  17. Reading of Beowulf / Ilmar Anvelt

    Index Scriptorium Estoniae

    Anvelt, Ilmar, 1949-

    2013-01-01

    On the reading aloud, in full, of the Anglo-Saxon epic in both Old English and Estonian translation by lecturers and students of the Department of English Philology of the University of Tartu, held as part of the Prima Vista literary festival.

  18. Field analysis: approach to the design of teleoperator workstation

    International Nuclear Information System (INIS)

    Saint-Jean, T.; Lescoat, D.A.

    1986-04-01

    Following a brief review of the theoretical scope, this paper characterizes a methodology for the design of teleoperation workstations. This methodology is illustrated by an example: field analysis of a telemanipulation task in a hot cell. Practical information is given on the operating strategy (which differed from the written procedure), team work organization and the different skills involved. Recommendations are made regarding the writing of procedures, the training of personnel and the organisation of work.

  19. Functionalized 2PP structures for the BioPhotonics Workstation

    DEFF Research Database (Denmark)

    Matsuoka, Tomoyo; Nishi, Masayuki; Sakakura, Masaaki

    2011-01-01

    In its standard version, our BioPhotonics Workstation (BWS) can generate multiple controllable counter-propagating beams to create real-time user-programmable optical traps for stable three-dimensional control and manipulation of a plurality of particles. The combination of the platform with micr...... on the BWS platform by functionalizing them with silica-based sol-gel materials inside which dyes can be entrapped....

  20. Efficient Incremental Garbage Collection for Workstation/Server Database Systems

    OpenAIRE

    Amsaleg , Laurent; Gruber , Olivier; Franklin , Michael

    1994-01-01

    Projet RODIN; We describe an efficient server-based algorithm for garbage collecting object-oriented databases in a workstation/server environment. The algorithm is incremental and runs concurrently with client transactions; however, it does not hold any locks on data and does not require callbacks to clients. It is fault tolerant, but performs very little logging. The algorithm has been designed to be integrated into existing OODB systems, and therefore it works with standard implementation ...

  1. Integrated model for line balancing with workstation inventory management

    OpenAIRE

    Dilip Roy; Debdip khan

    2010-01-01

    In this paper, we address the optimization of an integrated line-balancing process with workstation inventory management. In doing so, we study the interconnection between line balancing and the conversion process. Almost every moderate-to-large manufacturing industry depends on a long, integrated supply chain consisting of inbound logistics, a conversion process and outbound logistics. In this sense our approach addresses a very general problem of integrated line balancing....

  2. Montecarlo Simulations for a Lep Experiment with Unix Workstation Clusters

    Science.gov (United States)

    Bonesini, M.; Calegari, A.; Rossi, P.; Rossi, V.

    Modular systems of RISC-CPU-based computers have been implemented for large-scale production of Monte Carlo simulated events for the DELPHI experiment at CERN. Starting from a pilot system based on DEC 5000 CPUs, a full-size system based on a CONVEX C3820 UNIX supercomputer and a cluster of HP 735 workstations has been put into operation as a joint effort between INFN Milano and CILEA.

  3. Parallel Calculations in LS-DYNA

    Science.gov (United States)

    Vartanovich Mkrtychev, Oleg; Aleksandrovich Reshetov, Andrey

    2017-11-01

    Nowadays, structural mechanics exhibits a trend towards numerical solutions of increasingly extensive and detailed problems, which requires that the capacities of computing systems be enhanced. Such enhancement can be achieved by different means. E.g., in case a computing system is represented by a workstation, its components can be replaced and/or extended (CPU, memory etc.). In essence, such modification eventually entails replacement of the entire workstation, i.e. replacement of certain components necessitates exchange of others (faster CPUs and memory devices require buses with higher throughput etc.). Special consideration must be given to the capabilities of modern video cards. They constitute powerful computing systems capable of running data processing in parallel. Interestingly, the tools originally designed to render high-performance graphics can be applied to solving problems not immediately related to graphics (CUDA, OpenCL, shaders etc.). However, not all software suites utilize video cards' capacities. Another way to increase the capacity of a computing system is to implement a cluster architecture: to add cluster nodes (workstations) and to increase the network communication speed between the nodes. The advantage of this approach is its extensive scalability, by which a quite powerful system can be obtained by combining nodes that are not particularly powerful individually. Moreover, separate nodes may possess different capacities. This paper considers the use of a clustered computing system for solving problems of structural mechanics with LS-DYNA software. To establish a range of dependencies, a mere 2-node cluster has proven sufficient.

  4. ARCIMBOLDO_LITE: single-workstation implementation and use.

    Science.gov (United States)

    Sammito, Massimo; Millán, Claudia; Frieske, Dawid; Rodríguez-Freire, Eloy; Borges, Rafael J; Usón, Isabel

    2015-09-01

    ARCIMBOLDO solves the phase problem at resolutions of around 2 Å or better through massive combination of small fragments and density modification. For complex structures, this imposes a need for a powerful grid where calculations can be distributed, but for structures with up to 200 amino acids in the asymmetric unit a single workstation may suffice. The use and performance of the single-workstation implementation, ARCIMBOLDO_LITE, on a pool of test structures with 40-120 amino acids and resolutions between 0.54 and 2.2 Å is described. Inbuilt polyalanine helices and iron cofactors are used as search fragments. ARCIMBOLDO_BORGES can also run on a single workstation to solve structures in this test set using precomputed libraries of local folds. The results of this study have been incorporated into an automated, resolution- and hardware-dependent parameterization. ARCIMBOLDO has been thoroughly rewritten and three binaries are now available: ARCIMBOLDO_LITE, ARCIMBOLDO_SHREDDER and ARCIMBOLDO_BORGES. The programs and libraries can be downloaded from http://chango.ibmb.csic.es/ARCIMBOLDO_LITE.

  5. Energy-efficiency based classification of the manufacturing workstation

    Science.gov (United States)

    Frumuşanu, G.; Afteni, C.; Badea, N.; Epureanu, A.

    2017-08-01

    EU Directive 92/75/EC established for the first time an energy consumption labelling scheme, further implemented by several other directives. As a consequence, nowadays many products (e.g. home appliances, tyres, light bulbs, houses) carry an EU Energy Label when offered for sale or rent. Several energy consumption models of manufacturing equipment have also been developed. This paper proposes an energy-efficiency-based classification of the manufacturing workstation, aiming to characterize its energetic behaviour. The concept of energy efficiency of the manufacturing workstation is defined. On this basis, a classification methodology has been developed. It covers specific criteria and their evaluation modalities, together with the definition and delimitation of energy efficiency classes. The position of an energy class is defined by the amount of energy needed by the workstation at the midpoint of its operating domain, while its extension is determined by the value of the first coefficient of the Taylor series that approximates the dependence between the energy consumption and the chosen parameter of the working regime. The main domain of interest for this classification appears to be the optimization of the planning and programming of manufacturing activities. A case study regarding the classification of an actual lathe from the energy-efficiency point of view, based on two different approaches (analytical and numerical), is also included.

  6. User interface on networked workstations for MFTF plasma diagnostic instruments

    International Nuclear Information System (INIS)

    Renbarger, V.L.; Balch, T.R.

    1985-01-01

    A network of Sun-2/170 workstations is used to provide an interface to the MFTF-B Plasma Diagnostics System at Lawrence Livermore National Laboratory. The Plasma Diagnostics System (PDS) is responsible for control of MFTF-B plasma diagnostic instrumentation. An EtherNet Local Area Network links the workstations to a central multiprocessing system which furnishes data processing, data storage and control services for PDS. These workstations permit a physicist to command data acquisition, data processing, instrument control, and display of results. The interface is implemented as a metaphorical desktop, which helps the operator form a mental model of how the system works. As on a real desktop, functions are provided by sheets of paper (windows on a CRT screen) called worksheets. The worksheets may be invoked by pop-up menus and may be manipulated with a mouse. These worksheets are actually tasks that communicate with other tasks running in the central computer system. By making entries in the appropriate worksheet, a physicist may specify data acquisition or processing, control a diagnostic, or view a result

  7. A design study investigating augmented reality and photograph annotation in a digitalized grossing workstation

    Directory of Open Access Journals (Sweden)

    Joyce A Chow

    2017-01-01

    Full Text Available Context: Within digital pathology, digitalization of the grossing procedure has been relatively underexplored in comparison to digitalization of pathology slides. Aims: Our investigation focuses on the interaction design of an augmented reality gross pathology workstation and refining the interface so that information and visualizations are easily recorded and displayed in a thoughtful view. Settings and Design: The work in this project occurred in two phases: the first phase focused on implementation of an augmented reality grossing workstation prototype while the second phase focused on the implementation of an incremental prototype in parallel with a deeper design study. Subjects and Methods: Our research institute focused on an experimental and “designerly” approach to create a digital gross pathology prototype as opposed to focusing on developing a system for immediate clinical deployment. Statistical Analysis Used: Evaluation has not been limited to user tests and interviews, but rather key insights were uncovered through design methods such as “rapid ethnography” and “conversation with materials”. Results: We developed an augmented reality enhanced digital grossing station prototype to assist pathology technicians in capturing data during examination. The prototype uses a magnetically tracked scalpel to annotate planned cuts and dimensions onto photographs taken of the work surface. This article focuses on the use of qualitative design methods to evaluate and refine the prototype. Our aims were to build on the strengths of the prototype's technology, improve the ergonomics of the digital/physical workstation by considering numerous alternative design directions, and to consider the effects of digitalization on personnel and the pathology diagnostics information flow from a wider perspective. A proposed interface design allows the pathology technician to place images in relation to its orientation, annotate directly on the

  8. A Design Study Investigating Augmented Reality and Photograph Annotation in a Digitalized Grossing Workstation.

    Science.gov (United States)

    Chow, Joyce A; Törnros, Martin E; Waltersson, Marie; Richard, Helen; Kusoffsky, Madeleine; Lundström, Claes F; Kurti, Arianit

    2017-01-01

    Within digital pathology, digitalization of the grossing procedure has been relatively underexplored in comparison to digitalization of pathology slides. Our investigation focuses on the interaction design of an augmented reality gross pathology workstation and refining the interface so that information and visualizations are easily recorded and displayed in a thoughtful view. The work in this project occurred in two phases: the first phase focused on implementation of an augmented reality grossing workstation prototype while the second phase focused on the implementation of an incremental prototype in parallel with a deeper design study. Our research institute focused on an experimental and "designerly" approach to create a digital gross pathology prototype as opposed to focusing on developing a system for immediate clinical deployment. Evaluation has not been limited to user tests and interviews, but rather key insights were uncovered through design methods such as " rapid ethnography " and " conversation with materials ". We developed an augmented reality enhanced digital grossing station prototype to assist pathology technicians in capturing data during examination. The prototype uses a magnetically tracked scalpel to annotate planned cuts and dimensions onto photographs taken of the work surface. This article focuses on the use of qualitative design methods to evaluate and refine the prototype. Our aims were to build on the strengths of the prototype's technology, improve the ergonomics of the digital/physical workstation by considering numerous alternative design directions, and to consider the effects of digitalization on personnel and the pathology diagnostics information flow from a wider perspective. A proposed interface design allows the pathology technician to place images in relation to its orientation, annotate directly on the image, and create linked information. The augmented reality magnetically tracked scalpel reduces tool switching though

  9. Portable parallel programming in a Fortran environment

    International Nuclear Information System (INIS)

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network based MIMD processors. The parallelism is implemented using standard UNIX (tm) tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at from 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a ''nearly realistic'' lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs

  10. Comparison of computer workstation with film for detecting setup errors

    International Nuclear Information System (INIS)

    Fritsch, D.S.; Boxwala, A.A.; Raghavan, S.; Coffee, C.; Major, S.A.; Muller, K.E.; Chaney, E.L.

    1997-01-01

    Purpose/Objective: Workstations designed for portal image interpretation by radiation oncologists provide image displays and image processing and analysis tools that differ significantly compared with the standard clinical practice of inspecting portal films on a light box. An implied but unproved assumption associated with the clinical implementation of workstation technology is that patient care is improved, or at least not adversely affected. The purpose of this investigation was to conduct observer studies to test the hypothesis that radiation oncologists can detect setup errors using a workstation at least as accurately as when following standard clinical practice. Materials and Methods: A workstation, PortFolio, was designed for radiation oncologists to display and inspect digital portal images for setup errors. PortFolio includes tools to enhance images; align cross-hairs, field edges, and anatomic structures on reference and acquired images; measure distances and angles; and view registered images superimposed on one another. In a well designed and carefully controlled observer study, nine radiation oncologists, including attendings and residents, used PortFolio to detect setup errors in realistic digitally reconstructed portal (DRPR) images computed from the NLM visible human data using a previously described approach † . Compared with actual portal images where absolute truth is ill defined or unknown, the DRPRs contained known translation or rotation errors in the placement of the fields over target regions in the pelvis and head. Twenty DRPRs with randomly induced errors were computed for each site. The induced errors were constrained to a plane at the isocenter of the target volume and perpendicular to the central axis of the treatment beam. Images used in the study were also printed on film. Observers interpreted the film-based images using standard clinical practice. The images were reviewed in eight sessions. During each session five images were

  11. Evaluation of DEC's GIGAswitch for distributed parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H.; Hutchins, J.; Brandt, J.

    1993-10-01

    One of Sandia's research efforts is to reduce the end-to-end communication delay in a parallel-distributed computing environment. GIGAswitch is DEC's implementation of a gigabit local area network based on switched FDDI technology. Using the GIGAswitch, the authors intend to minimize the medium access latency suffered by shared-medium FDDI technology. Experimental results show that the GIGAswitch adds 16.5 microseconds of switching and bridging delay to an end-to-end communication. Although the added latency causes a 1.8% throughput degradation and a 5% line efficiency degradation, the availability of dedicated bandwidth is much more than what is available to a workstation on a shared medium. For example, ten directly connected workstations each would have a dedicated bandwidth of 95 Mbps, but if they were sharing the FDDI bandwidth, each would have 10% of the total bandwidth, i.e., less than 10 Mbps. In addition, they have found that when there is no output port contention, the switch's aggregate bandwidth will scale up to multiples of its port bandwidth. However, with output port contention, the throughput and latency performance suffered significantly. Their mathematical and simulation models indicate that the GIGAswitch line efficiency could be as low as 63% when there are nine input ports contending for the same output port. The data indicate that the delay introduced by contention at the server workstation is 50 times that introduced by the GIGAswitch. The authors conclude that the GIGAswitch meets the performance requirements of today's high-end workstations and that the switched FDDI technology provides an alternative that utilizes existing workstation interfaces while increasing the aggregate bandwidth. However, because the speed of workstations is increasing by a factor of 2 every 1.5 years, the switched FDDI technology is only good as an interim solution.

  12. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    The work in the field of parallel processing has developed through research activities using several numerical Monte Carlo simulations related to basic or applied current problems of nuclear and particle physics. For applications utilizing the GEANT code, development and improvement work was done on the parts simulating low-energy physical phenomena such as radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program of neutron tracking in the range of low energies up to the thermal region has been developed. It is coupled to the GEANT code and permits in a single pass the simulation of a hybrid reactor core receiving a proton burst. Other work in this field refers to simulations for nuclear medicine applications such as the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness), and methods for dosimetric calculations. These calculations are particularly suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other work mentioned in the same field refers to simulation of electron channelling in crystals and simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment.

  13. Compiler and Runtime Support for Programming in Adaptive Parallel Environments

    Science.gov (United States)

    1998-10-15

    no other job is waiting for resources, and use a smaller number of processors when other jobs need resources. Setia et al. [15, 20] have shown that such... [15] Vijay K. Naik, Sanjeev Setia, and Mark Squillante. Performance analysis of job scheduling policies in parallel supercomputing environments. In... on networks of heterogeneous workstations. Technical Report CSE-94-012, Oregon Graduate Institute of Science and Technology, 1994. [20] Sanjeev Setia

  14. An Imaging And Graphics Workstation For Image Sequence Analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  15. Clinical impact and value of workstation single sign-on.

    Science.gov (United States)

    Gellert, George A; Crouch, John F; Gibson, Lynn A; Conklin, George S; Webster, S Luke; Gillean, John A

    2017-05-01

    CHRISTUS Health began implementation of computer workstation single sign-on (SSO) in 2015. SSO technology utilizes a badge reader placed at each workstation where clinicians swipe or "tap" their identification badges. To assess the impact of SSO implementation in reducing clinician time logging in to various clinical software programs, and in financial savings from migrating to a thin client that enabled replacement of traditional hard drive computer workstations. Following implementation of SSO, a total of 65,202 logins were sampled systematically during a 7-day period among 2256 active clinical end users for time saved in 6 facilities when compared to pre-implementation. Dollar values were assigned to the time saved by 3 groups of clinical end users: physicians, nurses and ancillary service providers. The reduction of total clinician login time over the 7-day period showed a net gain of 168.3 h per week of clinician time - 28.1 h (2.3 shifts) per facility per week. Annualized, 1461.2 h of mixed physician and nursing time is liberated per facility per annum (121.8 shifts of 12 h per year). The annual dollar cost savings of this reduction of time expended logging in is $92,146 per hospital per annum and $1,658,745 per annum in the first phase implementation of 18 hospitals. Computer hardware equipment savings due to desktop virtualization increase annual savings to $2,333,745. Qualitative value contributions to clinician satisfaction, reduction in staff turnover, facilitation of adoption of EHR applications, and other benefits of SSO are discussed. SSO had a positive impact on clinician efficiency and productivity in the 6 hospitals evaluated, and is an effective and cost-effective method to liberate clinician time from repetitive and time-consuming logins to clinical software applications. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
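
    The annualization arithmetic reported above can be reproduced in a few lines; in the sketch below the blended hourly rate is a hypothetical value backed out from the reported dollar figures, not a number stated in the study.

        # Reproduces the annualization of SSO time savings described in the abstract.
        # The blended clinician rate is an assumption, not a value from the study.

        HOURS_SAVED_PER_FACILITY_PER_WEEK = 28.1
        WEEKS_PER_YEAR = 52
        SHIFT_HOURS = 12
        ASSUMED_BLENDED_RATE_USD = 63.0   # hypothetical $/clinician-hour

        annual_hours = HOURS_SAVED_PER_FACILITY_PER_WEEK * WEEKS_PER_YEAR   # ~1461.2 h
        annual_shifts = annual_hours / SHIFT_HOURS                          # ~121.8 twelve-hour shifts
        annual_dollars = annual_hours * ASSUMED_BLENDED_RATE_USD            # on the order of $92k

        print(f"{annual_hours:.1f} h/year, {annual_shifts:.1f} shifts/year, ${annual_dollars:,.0f}/year")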

  16. The image-interpretation-workstation of the future: lessons learned

    Science.gov (United States)

    Maier, S.; van de Camp, F.; Hafermann, J.; Wagner, B.; Peinsipp-Byma, E.; Beyerer, J.

    2017-05-01

    In recent years, professionally used workstations have become increasingly complex and multi-monitor systems are more and more common. Novel interaction techniques like gesture recognition were developed but used mostly for entertainment and gaming purposes. These human-computer interfaces are not yet widely used in professional environments where they could greatly improve the user experience. To approach this problem, we combined existing tools in our image-interpretation-workstation of the future, a multi-monitor workplace comprising four screens. Each screen is dedicated to a special task in the image interpreting process: a geo-information system to geo-reference the images and provide a spatial reference for the user, an interactive recognition support tool, an annotation tool and a reporting tool. To further support the complex task of image interpreting, self-developed interaction systems for head-pose estimation and hand tracking were used in addition to more common technologies like touchscreens, face identification and speech recognition. A set of experiments was conducted to evaluate the usability of the different interaction systems. Two typical extensive tasks of image interpreting were devised and approved by military personnel. They were then tested with a current setup of an image interpreting workstation using only keyboard and mouse against our image-interpretation-workstation of the future. To get a more detailed look at the usefulness of the interaction techniques in a multi-monitor setup, the hand tracking, head pose estimation and face recognition were further evaluated using tests inspired by everyday tasks. The results of the evaluation and the discussion are presented in this paper.

  17. Optimizing the pathology workstation "cockpit": Challenges and solutions

    Directory of Open Access Journals (Sweden)

    Elizabeth A Krupinski

    2010-01-01

    The 21st century has brought numerous changes to the clinical reading (i.e., image or virtual pathology slide interpretation) environment of pathologists, and it will continue to change even more dramatically as information and communication technologies (ICTs) become more widespread in the integrated healthcare enterprise. The extent to which these changes impact the practicing pathologist differs as a function of the technology under consideration, but digital "virtual slides" and the viewing of images on computer monitors instead of glass slides through a microscope clearly represent a significant change in the way that pathologists extract information from these images and render diagnostic decisions. One of the major challenges facing pathologists in this new era is how to best optimize the pathology workstation, the reading environment and the new and varied types of information available in order to ensure efficient and accurate processing of this information. Although workstations can be stand-alone units with images imported via external storage devices, this scenario is becoming less common as pathology departments connect to information highways within their hospitals and to external sites. Picture Archiving and Communication Systems are no longer confined to radiology departments but are serving the entire integrated healthcare enterprise, including pathology. In radiology, the workstation is often referred to as the "cockpit" with a "digital dashboard" and the reading room as the "control room." Although pathology has yet to "go digital" to the extent that radiology has, lessons derived from radiology reading "cockpits" can be quite valuable in setting up the digital pathology reading room. In this article, we describe the concept of the digital dashboard and provide some recent examples of informatics-based applications that have been shown to improve the workflow and quality in digital reading environments.

  18. Non-contact methods for NDT of aeronautical structures : An image processing workstation for thermography

    OpenAIRE

    Azzarelli, Luciano; Chimenti, Massimo; Salvetti, Ovidio

    1992-01-01

    The main goals of the Istituto di Elaborazione della Informazione in Task 4, Subtasks 4.3.1 (Image Processing) and 4.3.2 (Workstation Architecture), were the study of thermogram features, the design of the architecture of a customized workstation, and the design of specialized algorithms for thermal image analysis. Thermogram features pertain to data acquisition, data archiving and data processing; following this general study, some basic requirements for the workstation were defined. "Data acqui...

  19. Viewport: An object-oriented approach to integrate workstation software for tile and stack mode display

    OpenAIRE

    Ghosh, Srinka; Andriole, Katherine P.; Avrin, David E.

    1997-01-01

    Diagnostic workstation design has migrated towards display presentation in one of two modes: tiled images or stacked images. It is our impression that the workstation setup or configuration in each of these two modes is rather distinct. We sought to establish a commonality to simplify software design, and to enable a single descriptor method to facilitate folder manager development of “hanging” protocols. All current workstation designs use a combination of “off-screen” and “on-screen” memory...

  20. Supervisory Control Technique For An Assembly Workstation As A Dynamic Discrete Event System

    Directory of Open Access Journals (Sweden)

    Daniela Cristina CERNEGA

    2001-12-01

    This paper proposes a control problem statement in the framework of the supervisory control technique for assembly workstations. A desired behaviour of an assembly workstation is analysed. The behaviour of such a workstation is cyclic and some linguistic properties are established. An algorithm is proposed for computing the supremal controllable language of the desired language of the closed system. Copyright © 2001 IFAC.

  1. High-performance floating-point image computing workstation for medical applications

    Science.gov (United States)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), in multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel selectable region of interest display. A 1280 x 1024 pixel screen with 1: 1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e
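
    As a quick sanity check of the frame-buffer figures quoted above, the short sketch below recomputes the storage sizes from the dimensions given in the record; nothing beyond those dimensions is assumed.

        # Frame-buffer arithmetic from the abstract: 2048 x 2048 x 32 bits = 16 MB,
        # which holds four 2K x 2K x 8-bit gray-scale image frames.

        WIDTH, HEIGHT, BITS_PER_PIXEL = 2048, 2048, 32

        frame_buffer_bytes = WIDTH * HEIGHT * BITS_PER_PIXEL // 8
        gray_frame_bytes = WIDTH * HEIGHT * 8 // 8

        print(frame_buffer_bytes // 2**20, "MB of image storage")           # -> 16 MB
        print(frame_buffer_bytes // gray_frame_bytes, "8-bit gray frames")  # -> 4 frames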

  2. Parallelization of MCNP4 code by using simple FORTRAN algorithms

    International Nuclear Information System (INIS)

    Yazid, P.I.; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka.

    1993-12-01

    Simple FORTRAN algorithms that rely only on open, close, read and write statements, together with disk files and some UNIX commands, have been applied to the parallelization of MCNP4. The code, named MCNPNFS, maintains almost all capabilities of MCNP4 in solving shielding problems. It is able to perform parallel computing on any set of UNIX workstations connected by a network, regardless of heterogeneity in the hardware, provided that all processors produce binary files in the same format. Further, it is confirmed that MCNPNFS can also be executed on the Monte-4 vector-parallel computer. MCNPNFS has been tested intensively by executing 5 photon-neutron benchmark problems, a spent fuel cask problem and 17 sample problems included in the original code package of MCNP4. Three different workstations, connected by a network, have been used to execute MCNPNFS in parallel. By measuring CPU time, the parallel efficiency is determined to be 58% to 99%, and 86% on average. On Monte-4, MCNPNFS has been executed using 4 processors concurrently and has achieved a parallel efficiency of 79% on average. (author)
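
    A minimal sketch of the parallel-efficiency bookkeeping implied above, where efficiency is the serial CPU time divided by the number of processors times the parallel time; the timing values are placeholders, not data from the MCNPNFS benchmarks.

        # Parallel efficiency = T_serial / (N * T_parallel). Timings below are hypothetical.

        def parallel_efficiency(t_serial, t_parallel, n_procs):
            return t_serial / (n_procs * t_parallel)

        runs = [
            ("workstation cluster run", 3600.0, 1500.0, 3),   # assumed seconds, 3 workstations
            ("Monte-4 run", 7200.0, 2300.0, 4),               # assumed seconds, 4 processors
        ]

        for name, ts, tp, n in runs:
            print(f"{name}: efficiency = {parallel_efficiency(ts, tp, n):.0%}")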

  3. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product-unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  4. Evaluating biomechanics of user-selected sitting and standing computer workstation.

    Science.gov (United States)

    Lin, Michael Y; Barbir, Ana; Dennerlein, Jack T

    2017-11-01

    A standing computer workstation has now become a popular modern workplace intervention to reduce sedentary behavior at work. However, users' interactions with a standing computer workstation and its differences from a sitting workstation need to be understood to assist in developing recommendations for use and setup. This study compared the differences in upper extremity posture and muscle activity between user-selected sitting and standing workstation setups. Twenty participants (10 females, 10 males) volunteered for the study. 3-D posture, surface electromyography, and user-reported discomfort were measured while completing simulated tasks with each participant's self-selected workstation setups. The sitting computer workstation was associated with more non-neutral shoulder postures and greater shoulder muscle activity, while the standing computer workstation induced a greater wrist adduction angle and greater extensor carpi radialis muscle activity. The sitting computer workstation was also associated with greater shoulder abduction postural variation (90th-10th percentile), while the standing computer workstation was associated with greater variation for shoulder rotation and wrist extension. Users reported similar overall discomfort levels within the first 10 min of work but had more than twice as much discomfort while standing than sitting after 45 min, with most discomfort reported in the low back for standing and the shoulder for sitting. These different measures provide understanding of users' different interactions with sitting and standing workstations, and alternating between the two configurations in short bouts may be a way of changing the loading pattern on the upper extremity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Workout at work: laboratory test of psychological and performance outcomes of active workstations.

    Science.gov (United States)

    Sliter, Michael; Yuan, Zhenyu

    2015-04-01

    With growing concerns over the obesity epidemic in the United States and other developed countries, many organizations have taken steps to incorporate healthy workplace practices. However, most workers are still sedentary throughout the day--a major contributor to individual weight gain. The current study sought to gather preliminary evidence of the efficacy of active workstations, which are a possible intervention that could increase employees' physical activity while they are working. We conducted an experimental study, in which boredom, task satisfaction, stress, arousal, and performance were evaluated and compared across 4 randomly assigned conditions: seated workstation, standing workstation, cycling workstation, and walking workstation. Additionally, body mass index (BMI) and exercise habits were examined as moderators to determine whether differences in these variables would relate to increased benefits in active conditions. The results (n = 180) showed general support for the benefits of walking workstations, whereby participants in the walking condition had higher satisfaction and arousal and experienced less boredom and stress than those in the passive conditions. Cycling workstations, on the other hand, tended to relate to reduced satisfaction and performance when compared with other conditions. The moderators did not impact these relationships, indicating that walking workstations might have psychological benefits to individuals, regardless of BMI and exercise habits. The results of this study are a preliminary step in understanding the work implications of active workstations. (c) 2015 APA, all rights reserved).

  6. Files for workstations with ionizing radiation risks: variation in the use of gamma densitometers

    International Nuclear Information System (INIS)

    Tournadre, A.

    2008-01-01

    After a brief presentation of the different gamma-densitometers proposed by MLPC to measure roadway density, and having outlined the support role of the provider, the author describes the form and content of workstation files for workstations exhibiting a risk related to ionizing radiation. He gives an analytical overview of the dose calculation: analysis of instrument use phases, exposure durations, dose rates, and the way these dose rates are introduced into the workstation file. He sets out the procedures to be followed by the radiation protection expert within the company. He notes that workstation files are a very useful information feedback tool.
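
    The dose-calculation overview described here amounts to accumulating dose rate times exposure duration over each phase of instrument use; the sketch below illustrates that bookkeeping with made-up phase names, dose rates and durations.

        # Annual dose accumulated over instrument-use phases; all values are illustrative.

        use_phases = [
            # (phase, dose rate in uSv/h, hours per year in that phase)
            ("transport of the gauge", 0.5, 100.0),
            ("measurement on the roadway", 2.0, 250.0),
            ("source storage and handling", 1.0, 20.0),
        ]

        annual_dose_usv = sum(rate * hours for _, rate, hours in use_phases)
        print(f"estimated annual dose: {annual_dose_usv:.0f} uSv")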

  7. (Nearly) portable PIC code for parallel computers

    International Nuclear Information System (INIS)

    Decyk, V.K.

    1993-01-01

    As part of the Numerical Tokamak Project, the author has developed a (nearly) portable, one-dimensional version of the GCPIC algorithm for particle-in-cell codes on parallel computers. This algorithm uses a spatial domain decomposition for the fields, and passes particles from one domain to another as the particles move spatially. With only minor changes, the code has been run in parallel on the Intel Delta, the Cray C-90, the IBM ES/9000 and a cluster of workstations. After a line-by-line translation into cmfortran, the code was also run on the CM-200. Impressive speeds have been achieved, both on the Intel Delta and the Cray C-90, around 30 nanoseconds per particle per time step. In addition, the author was able to isolate the data management modules, so that the physics modules were not changed much from their sequential version, and the data management modules can be used as "black boxes."
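
    The sketch below is a serial toy model of the domain-decomposition and particle-passing step described above: a 1D domain is split into contiguous subdomains, and a particle is handed to the new owner whenever it crosses a boundary. It illustrates only the data-management idea and is not the GCPIC code itself.

        # Toy 1D spatial domain decomposition with particle passing between subdomains.
        import random

        N_DOMAINS, LENGTH = 4, 1.0
        width = LENGTH / N_DOMAINS

        def owner(x):
            """Index of the subdomain that owns position x."""
            return min(int(x / width), N_DOMAINS - 1)

        # Scatter an initial particle population over the subdomains that own them.
        particles = [[] for _ in range(N_DOMAINS)]
        for x in (random.random() * LENGTH for _ in range(1000)):
            particles[owner(x)].append(x)

        def push_and_exchange(particles, dt=0.01):
            """Advance every particle, then hand off any that left its subdomain."""
            outgoing = [[] for _ in range(N_DOMAINS)]
            for d in range(N_DOMAINS):
                kept = []
                for x in particles[d]:
                    x = (x + random.uniform(-1.0, 1.0) * dt) % LENGTH   # periodic boundary
                    (kept if owner(x) == d else outgoing[owner(x)]).append(x)
                particles[d] = kept
            for d in range(N_DOMAINS):            # the "particle passing" step
                particles[d].extend(outgoing[d])

        push_and_exchange(particles)
        print([len(p) for p in particles])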

  8. Internationalization of healthcare applications: a generic approach for PACS workstations.

    Science.gov (United States)

    Hussein, R; Engelmann, U; Schroeter, A; Meinzer, H P

    2004-01-01

    Along with the revolution of information technology and the increasing use of computers world-wide, software providers recognize the emerging need for internationalized, or global, software applications. The importance of internationalization comes from its benefits such as addressing a broader audience, making the software applications more accessible, easier to use, more flexible to support and providing users with more consistent information. In addition, some governmental agencies, e.g., in Spain, accept only fully localized software. Although the healthcare communication standards, namely, Digital Imaging and Communication in Medicine (DICOM) and Health Level Seven (HL7) support wide areas of internationalization, most of the implementers are still protective about supporting the complex languages. This paper describes a generic internationalization approach for Picture Archiving and Communication System (PACS) workstations. The Unicode standard is used to internationalize the application user interface. An encoding converter was developed to encode and decode the data between the rendering module (in Unicode encoding) and the DICOM data (in ISO 8859 encoding). An integration gateway was required to integrate the internationalized PACS components with the different PACS installations. To introduce a pragmatic example, the described approach was applied to the CHILI PACS workstation. The approach has enabled the application to handle the different internationalization aspects transparently, such as supporting complex languages, switching between different languages at runtime, and supporting multilingual clinical reports. In the healthcare enterprises, internationalized applications play an essential role in supporting a seamless flow of information between the heterogeneous multivendor information systems.
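
    A minimal sketch of the kind of encoding converter described above, assuming a Latin-1 (ISO 8859-1) encoded DICOM text field and a Unicode rendering layer; real DICOM specific character set handling (and the CHILI integration) is considerably more involved.

        # Round-trip between an ISO 8859-1 encoded DICOM text value and Unicode GUI text.

        def dicom_to_unicode(raw_bytes: bytes, charset: str = "iso8859-1") -> str:
            """Decode an ISO 8859 encoded DICOM text value for display in a Unicode GUI."""
            return raw_bytes.decode(charset)

        def unicode_to_dicom(text: str, charset: str = "iso8859-1") -> bytes:
            """Encode GUI text back to the ISO 8859 representation stored in the DICOM object."""
            return text.encode(charset, errors="replace")

        patient_name_raw = b"M\xfcller^Jos\xe9"          # a name with accented characters
        display_name = dicom_to_unicode(patient_name_raw)
        assert unicode_to_dicom(display_name) == patient_name_raw
        print(display_name)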

  9. Temporal digital subtraction radiography with a personal computer digital workstation

    International Nuclear Information System (INIS)

    Kircos, L.; Holt, W.; Khademi, J.

    1990-01-01

    Techniques have been developed and implemented on a personal computer (PC)-based digital workstation to accomplish temporal digital subtraction radiography (TDSR). TDSR is useful in recording radiologic change over time. Thus, this technique is useful not only for monitoring chronic disease processes but also for monitoring the temporal course of interventional therapies. A PC-based digital workstation was developed on a PC386 platform with add-in hardware and software. Image acquisition, storage, and processing were accomplished using a 512 x 512 x 8- or 12-bit frame grabber. Software and hardware were developed to accomplish image orientation, registration, gray scale compensation, subtraction, and enhancement. Temporal radiographs of the jaws were made in a fixed and reproducible orientation between the x-ray source and image receptor, enabling TDSR. Temporal changes secondary to chronic periodontal disease, osseointegration of endosseous implants, and wound healing were demonstrated. Use of TDSR for chest imaging was also demonstrated with identification of small, subtle focal masses that were not apparent with routine viewing. The large amount of radiologic information in images of the jaws and chest may obfuscate subtle changes that TDSR seems to identify. TDSR appears to be useful as a tool to record temporal and subtle changes in radiologic images.
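
    The processing chain described above (registration, gray scale compensation, subtraction, enhancement) can be sketched as follows; the shift-based registration and linear gray-scale match are simple stand-ins for the workstation's actual algorithms, which the record does not detail.

        # Toy temporal digital subtraction: register, compensate gray scale, subtract, enhance.
        import numpy as np

        def grayscale_compensate(follow_up, baseline):
            """Linearly match the follow-up image's mean and spread to the baseline."""
            scale = baseline.std() / (follow_up.std() + 1e-9)
            return (follow_up - follow_up.mean()) * scale + baseline.mean()

        def temporal_subtraction(baseline, follow_up, shift=(0, 0)):
            registered = np.roll(follow_up, shift, axis=(0, 1))   # toy registration by pixel shift
            compensated = grayscale_compensate(registered, baseline)
            diff = compensated - baseline
            return (diff - diff.min()) / (np.ptp(diff) + 1e-9)    # contrast-stretch for display

        rng = np.random.default_rng(0)
        baseline = rng.normal(100.0, 10.0, (512, 512))
        follow_up = baseline.copy()
        follow_up[200:260, 200:260] += 15.0                       # simulated temporal change
        print(temporal_subtraction(baseline, follow_up).max())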

  10. Image sequence analysis workstation for multipoint motion analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze motion of objects from video sequences. The system combines the software and hardware environment of a modern graphic-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increase in throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze frame display, and digital image enhancement; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.

  11. Emulating conventional operator interfaces on window-based workstations

    International Nuclear Information System (INIS)

    Carr, G.P.

    1990-01-01

    This paper explores an approach to support the LAMPF and PSR control systems on VAX/VMS workstations using DECwindows and VI Corporation Data Views as the operator interface. The PSR control system was recently turned over to MP division and the two control-system staffs were merged into one group. One of the goals of this new group is to develop a common workstation-based operator console and interface which can be used in a single control room controlling both the linac and proton storage ring. The new console operator interface will need a high-level graphics toolkit for its implementation. During the conversion to the new consoles it will also probably be necessary to write a package to emulate the current operator interfaces at the software level. This paper describes a project to evaluate the appropriateness of VI Corporation's Data Views graphics package for use in the LAMPF control-system environment by using it to write an emulation of the LAMPF touch-panel interface to a large LAMPF control-system application program. A secondary objective of this project was to explore any productivity increases that might be realized by using an object-oriented graphics package and graphics editor. (orig.)

  12. Design and Development of an Integrated Workstation Automation Hub

    Energy Technology Data Exchange (ETDEWEB)

    Weber, Andrew; Ghatikar, Girish; Sartor, Dale; Lanzisera, Steven

    2015-03-30

    Miscellaneous Electronic Loads (MELs) account for one third of all electricity consumption in U.S. commercial buildings, and are drivers of significant energy use in India. Many of the MEL-specific plug-load devices are concentrated at workstations in offices. The use of intelligence and integrated controls and communications at the workstation, in the form of an Office Automation Hub, offers the opportunity to improve both energy efficiency and occupant comfort, along with services for Smart Grid operations. Software and hardware solutions are available from a wide array of vendors for the different components, but an integrated system with interoperable communications is yet to be developed and deployed. In this study, we propose system- and component-level specifications for the Office Automation Hub, their functions, and a prioritized list for the design of a proof-of-concept system. Leveraging the strength of both the U.S. and India technology sectors, this specification serves as a guide for researchers and industry in both countries to support the development, testing, and evaluation of a prototype product. Further evaluation of such integrated technologies for performance and cost is necessary to identify the potential to reduce energy consumption by MELs and to improve occupant comfort.

  13. Biomek Cell Workstation: A Variable System for Automated Cell Cultivation.

    Science.gov (United States)

    Lehmann, R; Severitt, J C; Roddelkopf, T; Junginger, S; Thurow, K

    2016-06-01

    Automated cell cultivation is an important tool for simplifying routine laboratory work. Automated methods are independent of the skill levels and daily constitution of laboratory staff, and offer constant quality and performance. The Biomek Cell Workstation was configured as a flexible and compatible system. The modified Biomek Cell Workstation enables the cultivation of adherent and suspension cells. Until now, no commercially available system has enabled the automated handling of both types of cells in one system. In particular, the automated cultivation of suspension cells in this form has not been published previously. Cell counts and viabilities were nonsignificantly decreased for cells cultivated in AutoFlasks with automated handling. WST-1 proliferation assays comparing manual and automated bioscreening showed a nonsignificantly lower proliferation of automatically disseminated cells, generally with a lower standard error. The disseminated suspension cell lines showed differently pronounced proliferation, in descending order starting with Jurkat cells, followed by SEM and Molt4 cells, with RS4 cells having the lowest proliferation. In this respect, we successfully disseminated and screened suspension cells in an automated way. The automated cultivation and dissemination of a variety of suspension cells can replace the manual method. © 2015 Society for Laboratory Automation and Screening.

  14. Parallel implementations of 2D explicit Euler solvers

    International Nuclear Information System (INIS)

    Giraud, L.; Manzini, G.

    1996-01-01

    In this work we present a subdomain partitioning strategy applied to an explicit high-resolution Euler solver. We describe the design of a portable parallel multi-domain code suitable for parallel environments. We present several implementations on a representative range of MIMD computers that include shared memory multiprocessors, distributed virtual shared memory computers, as well as networks of workstations. Computational results are given to illustrate the efficiency, the scalability, and the limitations of the different approaches. We also discuss the effect of the communication protocol on the optimal domain partitioning strategy for distributed memory computers.
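
    An illustrative sketch of the subdomain partitioning idea for an explicit scheme: each block is advanced independently after exchanging one layer of ghost cells with its neighbours. The simple diffusion stencil is a generic placeholder, not the high-resolution Euler scheme of the paper, and the update runs serially over the blocks.

        # 1D subdomain partitioning with ghost-cell exchange around an explicit update.
        import numpy as np

        def exchange_ghosts(blocks):
            """Pad each block with the neighbouring edge values (ghost cells)."""
            padded = []
            for i, b in enumerate(blocks):
                left = blocks[i - 1][-1] if i > 0 else b[0]
                right = blocks[i + 1][0] if i < len(blocks) - 1 else b[-1]
                padded.append(np.concatenate(([left], b, [right])))
            return padded

        def explicit_step(blocks, nu=0.1):
            """One explicit stencil update of every block using its ghost cells."""
            return [p[1:-1] + nu * (p[2:] - 2.0 * p[1:-1] + p[:-2])
                    for p in exchange_ghosts(blocks)]

        u = np.sin(np.linspace(0.0, np.pi, 64))
        blocks = np.array_split(u, 4)          # the subdomain partitioning
        for _ in range(10):
            blocks = explicit_step(blocks)
        print(np.concatenate(blocks).max())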

  15. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  16. Iteration schemes for parallelizing models of superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)

    1996-12-31

    The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-Tc superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.
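
    A toy sketch of the layer-level parallelism described above: the inter-layer coupling is frozen from the previous iterate so that each layer update becomes an independent task that can be farmed out to workers. Python processes stand in for the PVM tasks on networked workstations, and the update rule is a placeholder rather than the Lawrence-Doniach equations.

        # Independent per-layer updates dispatched to a pool of worker processes.
        from multiprocessing import Pool

        import numpy as np

        def update_layer(args):
            layer, coupling = args
            return 0.5 * layer + 0.25 * coupling          # placeholder relaxation step

        def parallel_iteration(layers, pool):
            # Freeze the coupling terms from the previous iterate so each layer is independent.
            couplings = [layers[max(i - 1, 0)] + layers[min(i + 1, len(layers) - 1)]
                         for i in range(len(layers))]
            return pool.map(update_layer, list(zip(layers, couplings)))

        if __name__ == "__main__":
            layers = [np.random.rand(128) for _ in range(8)]
            with Pool(4) as pool:
                for _ in range(5):
                    layers = parallel_iteration(layers, pool)
            print(sum(layer.mean() for layer in layers))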

  17. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review of the parallel programming package CPS (Cooperative Processes Software) developed and used at Fermilab for offline reconstruction of Terabytes of data requiring the delivery of hundreds of Vax-Years per experiment is given. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing starting with the ACP (Advanced Computer Project) Farms in 1986. The Fermilab UNIX Farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for management, control and monitoring these large systems will be described. Possible future directions for parallel computing in High Energy Physics will be given

  18. 76 FR 10403 - Hewlett Packard (HP), Global Product Development, Engineering Workstation Refresh Team, Working...

    Science.gov (United States)

    2011-02-24

    ...), Global Product Development, Engineering Workstation Refresh Team, Working On-Site at General Motors..., Non-Information Technology Business Development Team and Engineering Application Support Team, working... Hewlett Packard, Global Product Development, Engineering Workstation Refresh Team, working on-site at...

  19. 40 CFR 86.1312-2007 - Filter stabilization and microbalance workstation environmental conditions, microbalance...

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Filter stabilization and microbalance workstation environmental conditions, microbalance specifications, and particulate matter filter handling and... Particulate Exhaust Test Procedures § 86.1312-2007 Filter stabilization and microbalance workstation...

  20. The effect of dynamic workstations on the performance of various computer and office-based tasks

    NARCIS (Netherlands)

    Burford, E.M.; Botter, J.; Commissaris, D.; Könemann, R.; Hiemstra-Van Mastrigt, S.; Ellegast, R.P.

    2013-01-01

    The effect of different workstations, conventional and dynamic, on different types of performance measures for several different office and computer-based tasks was investigated in this research paper. The two dynamic workstations assessed were the Lifespan Treadmill Desk and the RightAngle

  1. Evaluation of a PACS workstation for interpreting body CT studies

    International Nuclear Information System (INIS)

    Franken, E.A.; Berbaum, K.S.; Honda, H.; McGuire, C.; Weis, R.R.; Barloon, T.

    1989-01-01

    This paper reports on a comparison of conventional hard-copy images from 266 body CT studies with those provided by a picture archiving and communication system (PACS) workstation. PACS images were evaluated before and after use of various image processing features. Most cases were depicted equally well, but in about one-fourth of the cases, diagnostic features were shown more clearly on PACS images. When PACS images were viewed first, a change in diagnosis after subsequent hard-copy inspection was infrequent, but when hard-copy images were viewed first, the converse was true. The image processing features of the PACS were critical for its superior performance. The ability of a PACS to provide both image display and manipulation accounts for the superiority of that system.

  2. Advanced software development workstation project: Engineering scripting language. Graphical editor

    Science.gov (United States)

    1992-01-01

    Software development is widely considered to be a bottleneck in the development of complex systems, both in terms of development and in terms of maintenance of deployed systems. Cost of software development and maintenance can also be very high. One approach to reducing costs and relieving this bottleneck is increasing the reuse of software designs and software components. A method for achieving such reuse is a software parts composition system. Such a system consists of a language for modeling software parts and their interfaces, a catalog of existing parts, an editor for combining parts, and a code generator that takes a specification and generates code for that application in the target language. The Advanced Software Development Workstation is intended to be an expert system shell designed to provide the capabilities of a software part composition system.

  3. From LESSEPS to the workstation for reliability engineers

    International Nuclear Information System (INIS)

    Ancelin, C.; Bouissou, M.; Collet, J.; Gallois, M.; Magne, L.; Villatte, N.; Yedid, C.; Mulet-Marquis, D.

    1994-01-01

    Three Mile Island and Chernobyl in the nuclear industry, Challenger in the space industry, Seveso and Bhopal in the chemical industry - all these accidents show how difficult it is to forecast all likely accident scenarios that may occur in complex systems. This was, however, the objective of the probabilistic safety assessment (PSA) performed by EDF at the Paluel nuclear power plant. The full computerization of this study led to the LESSEPS project, aimed at automating three different steps: generation of reliability models based on the use of expert systems, qualitative and quantitative processing of these models using computer codes, and overall management of PSA studies. This paper presents the results obtained and the gradual transformation of this first generation of tools into a workstation aimed at integrating reliability studies at all stages of an industrial process. (author)

  4. A cycling workstation to facilitate physical activity in office settings.

    Science.gov (United States)

    Elmer, Steven J; Martin, James C

    2014-07-01

    Facilitating physical activity during the workday may help desk-bound workers reduce risks associated with sedentary behavior. We 1) evaluated the efficacy of a cycling workstation to increase energy expenditure while performing a typing task and 2) fabricated a power measurement system to determine the accuracy and reliability of an exercise cycle. Ten individuals performed 10 min trials of sitting while typing (SIT type) and pedaling while typing (PED type). Expired gases were recorded and typing performance was assessed. Metabolic cost during PED type was ∼2.5× greater compared to SIT type (255 ± 14 vs. 100 ± 11 kcal h(-1)), indicating that pedaling while typing can facilitate physical activity without compromising typing performance. The exercise cycle's inaccuracy could be misleading to users. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  5. ESCRIME: testing bench for advanced operator workstations in future plants

    International Nuclear Information System (INIS)

    Poujol, A.; Papin, B.

    1994-01-01

    The problem of optimal task allocation between man and computer for the operation of nuclear power plants is of major concern for the design of future plants. As the increased level of automation modifies the tasks currently devoted to the operator in the control room, it is very important to anticipate these consequences at the plant design stage. The improvement of man-machine cooperation is expected to play a major role in minimizing the impact of human errors on plant safety. The CEA has launched a research program concerning the evolution of plant operation in order to optimize the efficiency of the human/computer system for better safety. The objective of this program is to evaluate different modalities of man-machine task sharing, in a representative context. It relies strongly upon the development of a specific testing facility, the ESCRIME workbench, which is presented in this paper. It consists of an EDF 1300 MWe PWR plant simulator connected to an operator workstation. The plant simulator model represents, at a significant level of detail, the instrumentation and control of the plant and the main connected circuits. The operator interface is based on the generalized use of interactive graphic displays, and is intended to be consistent with the tasks to be performed by the operator. The functional architecture of the workstation is modular, so that different cooperation mechanisms can be implemented within the same framework. It is based on a thorough analysis and structuring of plant control tasks, in normal as well as in accident situations. The software architecture design follows the distributed artificial intelligence approach. Cognitive agents cooperate in order to operate the process. The paper presents the basic principles and the functional architecture of the test bed and describes the steps and the present status of the program. (author)

  6. The Impact of Active Workstations on Workplace Productivity and Performance: A Systematic Review.

    Science.gov (United States)

    Ojo, Samson O; Bailey, Daniel P; Chater, Angel M; Hewson, David J

    2018-02-27

    Active workstations have been recommended for reducing sedentary behavior in the workplace. It is important to understand if the use of these workstations has an impact on worker productivity. The aim of this systematic review was to examine the effect of active workstations on workplace productivity and performance. A total of 3303 articles were initially identified by a systematic search, and seven articles met eligibility criteria for inclusion. A quality appraisal was conducted to assess risk of bias, confounding, internal and external validity, and reporting. Most of the studies reported cognitive performance as opposed to productivity. Five studies assessed cognitive performance during use of an active workstation, usually in a single session. Sit-stand desks had no detrimental effect on performance; however, some studies with treadmill and cycling workstations identified potential decreases in performance. Many of the studies lacked the power required to achieve statistical significance. Three studies assessed workplace productivity after prolonged use of an active workstation for between 12 and 52 weeks. These studies reported no significant effect on productivity. Active workstations do not appear to decrease workplace performance.

  7. The Impact of Active Workstations on Workplace Productivity and Performance: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Samson O. Ojo

    2018-02-01

    Active workstations have been recommended for reducing sedentary behavior in the workplace. It is important to understand if the use of these workstations has an impact on worker productivity. The aim of this systematic review was to examine the effect of active workstations on workplace productivity and performance. A total of 3303 articles were initially identified by a systematic search, and seven articles met eligibility criteria for inclusion. A quality appraisal was conducted to assess risk of bias, confounding, internal and external validity, and reporting. Most of the studies reported cognitive performance as opposed to productivity. Five studies assessed cognitive performance during use of an active workstation, usually in a single session. Sit-stand desks had no detrimental effect on performance; however, some studies with treadmill and cycling workstations identified potential decreases in performance. Many of the studies lacked the power required to achieve statistical significance. Three studies assessed workplace productivity after prolonged use of an active workstation for between 12 and 52 weeks. These studies reported no significant effect on productivity. Active workstations do not appear to decrease workplace performance.

  8. The scheme and implementing of workstation configuration for medical imaging information system

    International Nuclear Information System (INIS)

    Tao Yonghao; Miao Jingtao

    2002-01-01

    Objective: To discuss a scheme and implementation of workstation configuration for a medical imaging information system adapted to the practical situation in China. Methods: The workstations were logically divided into PACS workstations and RIS workstations; the former were applied to three kinds of diagnostic practice (small matrix images, large matrix images, and high-resolution gray scale display), while the latter consisted of many different models depending upon usage and function. Results: A dual-screen configuration for the image diagnostic workstation physically integrated the image viewing and reporting procedures. Small matrix images such as CT or MR were handled on 17 in (1 in = 2.54 cm) color monitors, while the conventional X-ray diagnostic procedure was implemented on 21 in color monitors or portrait-format gray scale 2K by 2.5K monitors. All other RIS workstations not involved in image processing were set up with a common PC configuration. Conclusion: The essential principle for designing a workstation scheme for a medical imaging information system is to satisfy the basic requirements of medical image diagnosis while fitting the available investment.

  9. Feedwater heater performance evaluation using the heat exchanger workstation

    International Nuclear Information System (INIS)

    Ranganathan, K.M.; Singh, G.P.; Tsou, J.L.

    1995-01-01

    A Heat Exchanger Workstation (HEW) has been developed to monitor the condition of heat exchanging equipment in power plants. HEW enables engineers to analyze thermal performance and failure events for power plant feedwater heaters. The software provides tools for heat balance calculation and performance analysis. It also contains an expert system that enables performance enhancement. The Operation and Maintenance (O&M) reference module on CD-ROM for HEW will be available by the end of 1995. Future developments of HEW would result in a Condenser Expert System (CONES) and a Balance of Plant Expert System (BOPES). HEW consists of five tightly integrated applications: a Database system for heat exchanger data storage, a Diagrammer system for creating plant heat exchanger schematics and data display, a Performance Analyst system for analyzing and predicting heat exchanger performance, a Performance Advisor expert system for expertise on improving heat exchanger performance, and a Water Calculator system for computing properties of steam and water. In this paper an analysis of a feedwater heater which has been off-line is used to demonstrate how HEW can analyze the performance of the feedwater heater train and provide an economic justification for either replacing or repairing the feedwater heater.

  10. Intranet and Internet metrological workstation with photonic sensors and transmission

    Science.gov (United States)

    Romaniuk, Ryszard S.; Pozniak, Krzysztof T.; Dybko, Artur

    1999-05-01

    We describe in this paper a part of a telemetric network which consists of a workstation with photonic measurement and communication interfaces, structural fiber optic cabling (10/100BaseFX and CAN-FL), and photonic sensors with fiber optic interfaces. The station is equipped with a direct photonic measurement interface and a converter for the most common measuring standards (RS, GPIB), a fiber optic I/O CAN bus, O/E converters, and LAN and modem ports. The station was connected to the Intranet (ipx/spx) and Internet (tcp/ip) with a separate IP number and DNS and WINS names. A virtual measuring environment system program was written specially for such an Intranet and Internet station. The measurement system program communicated with the user via a Graphical User Interface (GUI). The user has direct access to all functions of the measuring station system through the appropriate layers of the GUI: telemetry, transmission, visualization, processing, information, help and steering of the measuring system. We have carried out a series of thorough simulation investigations and tests of the station using the WWW subsystem of the Internet. We logged into the system through the LAN and via modem. The Internet metrological station works continuously under the address http://nms.ipe.pw.edu.pl/nms. The station and the system bear the short name NMS (from Network Measuring System).

  11. Methodological Aspects of Modelling and Simulation of Robotized Workstations

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2018-05-01

    From the point of view of developing application and program products, the key directions that need to be respected in computer support for project activities are quite clearly specified. User interfaces with a high degree of graphical interactive convenience, together with two-dimensional and three-dimensional computer graphics, contribute greatly to streamlining project methodologies and procedures. This is mainly because a large number of the tasks solved in the modern design of robotic systems are inherently graphical. Automation of graphical tasks is therefore a significant development direction for the field. The authors present the results of their research in the area of automation and computer-aided design of robotized systems. A new methodical approach to modelling robotic workstations, consisting of ten steps incorporated into the four phases of the logistics process of creating and implementing a robotic workplace, is presented. The emphasis is placed on the modelling and simulation phase, with verification of the elaborated methodologies on specific projects or elements of a robotized welding plant in automotive production.

  12. Workstation computer systems for in-core fuel management

    International Nuclear Information System (INIS)

    Ciccone, L.; Casadei, A.L.

    1992-01-01

    The advancement of powerful engineering workstations has made it possible to have thermal-hydraulics and accident analysis computer programs operating efficiently with a significant performance/cost ratio compared to large mainframe computers. Today, nuclear utilities are acquiring independent engineering analysis capability for fuel management and safety analyses. Computer systems currently available to utility organizations vary widely, thus requiring that this software be operational on a number of computer platforms. Recognizing these trends, Westinghouse adopted a software development life-cycle process for its software development activities which strictly controls the development, testing and qualification of design computer codes. In addition, software standards to ensure maximum portability were developed and implemented, including adherence to FORTRAN 77 and use of uniform system interface and auxiliary routines. A comprehensive test matrix was developed for each computer program to ensure that the evolution of code versions preserves the licensing basis. In addition, the results of such test matrices establish the Quality Assurance basis and consistency for the same software operating on different computer platforms. (author). 4 figs

  13. Integrated model for line balancing with workstation inventory management

    Directory of Open Access Journals (Sweden)

    Dilip Roy

    2010-06-01

    In this paper, we address the optimization of an integrated line balancing process with workstation inventory management. While doing so, we have studied the interconnection between line balancing and its conversion process. Almost every moderate to large manufacturing industry depends on a long and integrated supply chain, consisting of inbound logistics, a conversion process, and outbound logistics. In this sense, our approach addresses a very general problem of integrated line balancing. Research reported in the literature so far deals mainly with minimizing the cost of the inbound and outbound logistics subsystems. In most cases the conversion process has been ignored. We suggest a generic approach for linking the balancing of the production line in the conversion area with the customers' rate of demand in the market and for configuring the related stock chambers. Thus, the main aim of this paper is to formulate the underlying problem as a mixed nonlinear programming problem and to design the optimal supply chain so that the total inventory cost and the cost of balancing loss of the conversion process are jointly minimized, and the ideal cycle time of the production process is determined along with the ideal sizes of the stock chambers. A numerical example has been added to demonstrate the suitability of our approach.
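
    For illustration, the sketch below computes the basic line-balancing quantities that such a model trades off (takt time, cycle time, balancing loss); the task times, station assignment and demand are made up, and the paper's actual mixed nonlinear program additionally couples these quantities with the stock-chamber (inventory) sizing.

        # Cycle time and balancing loss for a hypothetical four-workstation line.

        station_task_times = {          # minutes of work assigned to each workstation
            "WS1": 4.0,
            "WS2": 3.5,
            "WS3": 4.5,
            "WS4": 3.0,
        }

        available_minutes_per_day = 480.0
        daily_demand_units = 100.0

        takt_time = available_minutes_per_day / daily_demand_units      # demand-driven pace
        cycle_time = max(station_task_times.values())                   # slowest station governs
        total_work = sum(station_task_times.values())
        n_stations = len(station_task_times)
        balancing_loss = 1.0 - total_work / (n_stations * cycle_time)   # idle fraction of the line

        print(f"takt {takt_time:.2f} min, cycle {cycle_time:.2f} min, "
              f"balancing loss {balancing_loss:.1%}")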

  14. Interaction techniques for radiology workstations: impact on users' productivity

    Science.gov (United States)

    Moise, Adrian; Atkins, M. Stella

    2004-04-01

    As radiologists progress from reading images presented on film to modern computer systems with images presented on high-resolution displays, many new problems arise. Although the digital medium has many advantages, the radiologist's job becomes cluttered with many new tasks related to image manipulation. This paper presents our solution for supporting radiologists' interpretation of digital images by automating image presentation during sequential interpretation steps. Our method supports scenario-based interpretation, which groups data temporally, according to the mental paradigm of the physician. We extended current hanging protocols with support for "stages". A stage reflects the presentation of digital information required to complete a single step within a complex task. We demonstrated the benefits of staging in a user study with 20 lay subjects involved in a visual conjunctive search for targets, similar to a radiology task of identifying anatomical abnormalities. We designed a task and a set of stimuli which allowed us to simulate the interpretation workflow from a typical radiology scenario - reading a chest computed radiography exam when a prior study is also available. The simulation was possible by abstracting the radiologist's task and the basic workstation navigation functionality. We introduced "Stages," an interaction technique attuned to the radiologist's interpretation task. Compared to the traditional user interface, Stages generated a 14% reduction in average interpretation time.

  15. An Iterative Approach To Development Of A PACS Display Workstation

    Science.gov (United States)

    O'Malley, Kathleen G.

    1989-05-01

    An iterative prototyping approach has been used in the development of requirements for a new user interface for the display workstation in the CommView system product line. This approach involves many steps, including development of the preliminary concept, validation and ranking of ideas within that concept, prototyping, evaluating, and revising. We describe in this paper the process undertaken to design and evaluate the new user interface. Staff at Abbott Northwestern Hospital, Bowman Gray/Baptist Hospital Medical Center, Duke University Medical Center, Georgetown University Medical Center and Robert Wood Johnson University Hospital participated in various aspects of the study. The subject population included radiologists, residents, technologists and staff physicians from several areas in the hospitals. Subjects participated in in-depth interviews, answered questionnaires, and performed specific tasks, to aid our development process. We feel this method has resulted in a product that will achieve a high level of customer satisfaction, developed in less time than with a traditional approach. Some of the reasons we believe in the value of this approach are: • Users may not be able to describe their needs in terms that designers are expecting, leading to misinterpretation; • Users may not be able to choose between options without seeing them; • Users' needs and choices evolve with experience; • Users' true choices and needs may not seem logical to someone not performing those tasks (i.e., the designers).

  16. A high-speed linear algebra library with automatic parallelism

    Science.gov (United States)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance out of workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely small even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  17. Robotic, MEMS-based Multi Utility Sample Preparation Instrument for ISS Biological Workstation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project will develop a multi-functional, automated sample preparation instrument for biological wet-lab workstations on the ISS. The instrument is based on a...

  18. Sled Tests Using the Hybrid III Rail Safety ATD and Workstation Tables for Passenger Trains

    Science.gov (United States)

    2017-08-01

    The Hybrid III Rail Safety (H3-RS) anthropomorphic test device (ATD) is a crash test dummy developed in the United Kingdom to evaluate abdomen and lower thorax injuries that occur when passengers impact workstation tables during train accidents. The ...

  19. The microcomputer workstation - An alternate hardware architecture for remotely sensed image analysis

    Science.gov (United States)

    Erickson, W. K.; Hofman, L. B.; Donovan, W. E.

    1984-01-01

    Difficulties regarding the digital image analysis of remotely sensed imagery can arise in connection with the extensive calculations required. In the past, an expensive large to medium mainframe computer system was needed for performing these calculations. For image-processing applications smaller minicomputer-based systems are now used by many organizations. The costs for such systems are still in the range from $100K to $300K. Recently, as a result of new developments, the use of low-cost microcomputers for image processing and display systems appeared to have become feasible. These developments are related to the advent of the 16-bit microprocessor and the concept of the microcomputer workstation. Earlier 8-bit microcomputer-based image processing systems are briefly examined, and a computer workstation architecture is discussed. Attention is given to a microcomputer workstation developed by Stanford University, and the design and implementation of a workstation network.

  20. Migration of nuclear criticality safety software from a mainframe to a workstation environment

    International Nuclear Information System (INIS)

    Bowie, L.J.; Robinson, R.C.; Cain, V.R.

    1993-01-01

    The Nuclear Criticality Safety Department (NCSD), Oak Ridge Y-12 Plant, has made the transition from executing the Martin Marietta Energy Systems Nuclear Criticality Safety Software (NCSS) on IBM mainframes to executing it on a Hewlett-Packard (HP) 9000/730 workstation (NCSSHP). NCSSHP contains the following configuration controlled modules and cross-section libraries: BONAMI, CSAS, GEOMCHY, ICE, KENO IV, KENO Va, MODIIFY, NITAWL SCALE, SLTBLIB, XSDRN, UNIXLIB, albedos library, weights library, 16-Group HANSEN-ROACH master library, 27-Group ENDF/B-IV master library, and standard composition library. This paper will discuss the method used to choose the workstation, the hardware setup of the chosen workstation, an overview of Y-12 software quality assurance and configuration control methodology, code validation, difficulties encountered in migrating the codes, and advantages of migrating to a workstation environment.

  1. Design and analysis of wudu’ (ablution) workstation for elderly in Malaysia

    Science.gov (United States)

    Aman, A.; Dawal, S. Z. M.; Rahman, N. I. A.

    2017-06-01

    Wudu’ (ablution) workstations are facilities used by Muslims of all categories. At present, there are a number of design guidelines for praying facilities, but specifications for the wudu’ (ablution) area are still lacking, especially for the elderly. It is therefore timely to develop an ergonomic wudu’ workstation that allows the elderly to perform ablution independently and confidently. This study was conducted to design an ergonomic ablution unit for elderly Muslims in Malaysia. An ablution workstation was designed based on elderly anthropometric dimensions and was then analysed in CATIA V5R21 for posture investigation using RULA. The results of the study identified the significant anthropometric dimensions for designing a wudu’ (ablution) workstation for elderly people. This study can be considered a preliminary study for the development of an ergonomic ablution design for the elderly. This effort will become one of the significant social contributions to our elderly population in developing our nation holistically.

  2. Ergonomics standards and guidelines for computer workstation design and the impact on users' health - a review.

    Science.gov (United States)

    Woo, E H C; White, P; Lai, C W K

    2016-03-01

    This paper presents an overview of global ergonomics standards and guidelines for design of computer workstations, with particular focus on their inconsistency and associated health risk impact. Overall, considerable disagreements were found in the design specifications of computer workstations globally, particularly in relation to the results from previous ergonomics research and the outcomes from current ergonomics standards and guidelines. To cope with the rapid advancement in computer technology, this article provides justifications and suggestions for modifications in the current ergonomics standards and guidelines for the design of computer workstations. Practitioner Summary: A research gap exists in ergonomics standards and guidelines for computer workstations. We explore the validity and generalisability of ergonomics recommendations by comparing previous ergonomics research through to recommendations and outcomes from current ergonomics standards and guidelines.

  3. A real-time data-acquisition and analysis system with distributed UNIX workstations

    International Nuclear Information System (INIS)

    Yamashita, H.; Miyamoto, K.; Maruyama, K.; Hirosawa, H.; Nakayoshi, K.; Emura, T.; Sumi, Y.

    1996-01-01

    A compact data-acquisition system using three RISC/UNIX TM workstations (SUN TM /SPARCstation TM ) with real-time capabilities of monitoring and analysis has been developed for the study of photonuclear reactions with the large-acceptance spectrometer TAGX. One workstation acquires data from memory modules in the front-end electronics (CAMAC and TKO) with a maximum speed of 300 Kbytes/s, where data size times instantaneous rate is 1 Kbyte x 300 Hz. Another workstation, which has real-time capability for run monitoring, gets the data with a buffer manager called NOVA. The third workstation analyzes the data and reconstructs the event. In addition to a general hardware and software description, priority settings and run control by shell scripts are described. This system has recently been used successfully in a two month long experiment. (orig.)

  4. Treadmill workstations: the effects of walking while working on physical activity and work performance.

    Science.gov (United States)

    Ben-Ner, Avner; Hamann, Darla J; Koepp, Gabriel; Manohar, Chimnay U; Levine, James

    2014-01-01

    We conducted a 12-month-long experiment in a financial services company to study how the availability of treadmill workstations affects employees' physical activity and work performance. We enlisted sedentary volunteers, half of whom received treadmill workstations during the first two months of the study and the rest in the seventh month of the study. Participants could operate the treadmills at speeds of 0-2 mph and could use a standard chair-desk arrangement at will. (a) Weekly online performance surveys were administered to participants and their supervisors, as well as to all other sedentary employees and their supervisors. Using within-person statistical analyses, we find that overall work performance, quality and quantity of performance, and interactions with coworkers improved as a result of adoption of treadmill workstations. (b) Participants were outfitted with accelerometers at the start of the study. We find that daily total physical activity increased as a result of the adoption of treadmill workstations.

  5. The specification of Stampi, a message passing library for distributed parallel computing

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Takemiya, Hiroshi; Koide, Hiroshi

    2000-03-01

    At CCSE, the Center for Promotion of Computational Science and Engineering, a new message passing library for heterogeneous and distributed parallel computing has been developed; it is called Stampi. Stampi enables communication between any combination of parallel computers as well as workstations. Currently, a Stampi system is constructed from the Stampi library and Stampi/Java. It provides functions to connect a Stampi application not only with applications on COMPACS, the COMplex Parallel Computer System, but also with applets that run in WWW browsers. This report summarizes the specifications of Stampi and details the development of its system. (author)
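
    Stampi's spawn-and-communicate model follows the MPI-2 dynamic process creation interface referred to above. Purely as an illustration, the following C sketch uses standard MPI-2 calls rather than Stampi's own launch and host-selection mechanisms; the executable name "worker" and the process count are hypothetical placeholders.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            MPI_Comm children;
            int rank, greeting = 42;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            /* Spawn 4 copies of a worker executable; "worker" is a placeholder name. */
            MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL, 0,
                           MPI_COMM_WORLD, &children, MPI_ERRCODES_IGNORE);

            /* Communicate with the spawned processes over the intercommunicator. */
            if (rank == 0)
                MPI_Send(&greeting, 1, MPI_INT, 0, 0, children);

            MPI_Comm_disconnect(&children);
            MPI_Finalize();
            return 0;
        }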

  6. A High-Performance Parallel FDTD Method Enhanced by Using SSE Instruction Set

    Directory of Open Access Journals (Sweden)

    Dau-Chyrh Chang

    2012-01-01

    Full Text Available We introduce a hardware acceleration technique for the parallel finite difference time domain (FDTD) method using the SSE (streaming SIMD (single instruction, multiple data) extensions) instruction set. Applying the SSE instruction set to the parallel FDTD method achieves a significant improvement in simulation performance. Benchmarks of the SSE acceleration on both a multi-CPU workstation and a computer cluster demonstrate the advantages of vector arithmetic logic unit (VALU) acceleration over GPU acceleration. Several engineering applications are employed to demonstrate the performance of the parallel FDTD method enhanced by the SSE instruction set.
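
    The record does not give the authors' kernels, but the general idea of SSE acceleration of an FDTD update can be sketched in C with SSE intrinsics: four single-precision field points are updated per instruction. The 1-D update below is an illustrative stand-in, not the 3-D kernel from the paper.

        #include <stdio.h>
        #include <xmmintrin.h>   /* SSE intrinsics */

        /* 1-D FDTD electric-field update, 4 cells at a time: ez[i] += c*(hy[i]-hy[i-1]). */
        static void update_ez(float *ez, const float *hy, float c, int n)
        {
            __m128 coef = _mm_set1_ps(c);
            int i = 1;
            for (; i + 3 < n; i += 4) {
                __m128 curl = _mm_sub_ps(_mm_loadu_ps(&hy[i]), _mm_loadu_ps(&hy[i - 1]));
                _mm_storeu_ps(&ez[i], _mm_add_ps(_mm_loadu_ps(&ez[i]),
                                                 _mm_mul_ps(coef, curl)));
            }
            for (; i < n; i++)                       /* scalar remainder loop */
                ez[i] += c * (hy[i] - hy[i - 1]);
        }

        int main(void)
        {
            float ez[64] = {0}, hy[64];
            int i;
            for (i = 0; i < 64; i++) hy[i] = (float)i;   /* dummy magnetic field */
            update_ez(ez, hy, 0.5f, 64);
            printf("ez[1] = %f\n", ez[1]);
            return 0;
        }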

  7. Treatment planning in radiosurgery: parallel Monte Carlo simulation software

    Energy Technology Data Exchange (ETDEWEB)

    Scielzo, G [Galliera Hospitals, Genova (Italy). Dept. of Hospital Physics; Grillo Ruggieri, F [Galliera Hospitals, Genova (Italy) Dept. for Radiation Therapy; Modesti, M; Felici, R [Electronic Data System, Rome (Italy); Surridge, M [University of Southampton (United Kingdom). Parallel Application Centre

    1995-12-01

    The main objective of this research was to evaluate the possibility of direct Monte Carlo simulation for accurate dosimetry with short computation time. We made use of: a graphics workstation, a linear accelerator, water, PMMA and anthropomorphic phantoms, for validation purposes; ionometric, film and thermo-luminescent techniques, for dosimetry; and a treatment planning system for comparison. Benchmarking results suggest that short computing times can be obtained with the parallel version of EGS4 that was developed. Parallelism was obtained by assigning simulation incident photons to separate processors, and the development of a parallel random number generator was necessary. Validation consisted of phantom irradiation and comparison of predicted and measured values, with good agreement in PDD and dose profiles. Experiments on anthropomorphic phantoms (with inhomogeneities) were carried out, and these values are being compared with results obtained with the conventional treatment planning system.
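
    The parallelization described, assigning incident photons to separate processors, each with its own random-number stream, can be sketched with MPI as below. This is not the EGS4 code itself; the history routine, seeds and tally are placeholders.

        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Placeholder for one photon history; here it just returns a pseudo "dose". */
        static double simulate_history(void) { return (double)rand() / RAND_MAX; }

        int main(int argc, char **argv)
        {
            int rank, size;
            long total = 1000000, i, local;
            double dose = 0.0, dose_sum = 0.0;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            srand(12345u + 7919u * (unsigned)rank);   /* distinct stream per processor */
            local = total / size + (rank < total % size ? 1 : 0);

            for (i = 0; i < local; i++)               /* each rank runs its own histories */
                dose += simulate_history();

            /* Combine the per-processor tallies into the final estimate. */
            MPI_Reduce(&dose, &dose_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0)
                printf("accumulated tally: %g\n", dose_sum);
            MPI_Finalize();
            return 0;
        }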

  8. Parallel computers and three-dimensional computational electromagnetics

    International Nuclear Information System (INIS)

    Madsen, N.K.

    1994-01-01

    The authors have continued to enhance their ability to use new massively parallel processing computers to solve time-domain electromagnetic problems. New vectorization techniques have improved the performance of their code DSI3D by factors of 5 to 15, depending on the computer used. New radiation boundary conditions and far-field transformations now allow the computation of radar cross-section values for complex objects. A new parallel-data extraction code has been developed that allows the extraction of data subsets from large problems, which have been run on parallel computers, for subsequent post-processing on workstations with enhanced graphics capabilities. A new charged-particle-pushing version of DSI3D is under development. Finally, DSI3D has become a focal point for several new Cooperative Research and Development Agreement activities with industrial companies such as Lockheed Advanced Development Company, Varian, Hughes Electron Dynamics Division, General Atomic, and Cray

  9. Xyce parallel electronic simulator : reference guide.

    Energy Technology Data Exchange (ETDEWEB)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide. The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. It is targeted specifically to run on large-scale parallel computing platforms but also runs well on a variety of architectures, including single-processor workstations. It also aims to support a variety of devices and models specific to Sandia needs. This document is intended to complement the Xyce Users Guide. It contains comprehensive, detailed information about a number of topics pertinent to the usage of Xyce. Included in this document is a netlist reference for the input-file commands and elements supported within Xyce; a command line reference, which describes the available command line arguments for Xyce; and quick-references for users of other circuit codes, such as Orcad's PSpice and Sandia's ChileSPICE.

  10. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations.

    Science.gov (United States)

    Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-05-01

    Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems.
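
    The actual MPD-RDME implementation is CUDA-based; purely as an illustration of the load-balancing idea, the C sketch below splits lattice planes among devices in proportion to an assumed capability weight, which is the essence of a static spatial decomposition for GPUs of unequal performance and memory. The weights and plane count are hypothetical.

        #include <stdio.h>

        /* Split nz lattice planes among ndev devices in proportion to a capability
         * weight (e.g. relative memory bandwidth); start[d] is each slab's first plane. */
        static void partition(int nz, int ndev, const double *weight, int *start)
        {
            double total = 0.0, acc = 0.0;
            int d;
            for (d = 0; d < ndev; d++) total += weight[d];
            for (d = 0; d < ndev; d++) {
                start[d] = (int)(nz * acc / total + 0.5);
                acc += weight[d];
            }
        }

        int main(void)
        {
            double w[3] = {1.0, 0.6, 0.6};   /* hypothetical relative GPU capabilities */
            int start[3], d, nz = 256;
            partition(nz, 3, w, start);
            for (d = 0; d < 3; d++)
                printf("device %d: planes %d..%d\n", d, start[d],
                       (d + 1 < 3 ? start[d + 1] : nz) - 1);
            return 0;
        }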

  11. Parallelization and automatic data distribution for nuclear reactor simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  12. Parallelization and automatic data distribution for nuclear reactor simulations

    International Nuclear Information System (INIS)

    Liebrock, L.M.

    1997-01-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed
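
    As a concrete, if simplified, illustration of the data parallel model emphasized above, the OpenMP loop below updates each cell from its immediate neighbours, so iterations are independent and can be divided among processors. The field names and stencil are illustrative only, not taken from any reactor code.

        #include <omp.h>
        #include <stdio.h>

        #define NCELLS 100000

        int main(void)
        {
            static double pressure[NCELLS], flux[NCELLS];
            int i;

            for (i = 0; i < NCELLS; i++) { pressure[i] = 1.0e5; flux[i] = 0.0; }

            /* Data parallel update: each thread owns a block of cells, and only
             * adjacent cells interact, mirroring the locality of the physical plant. */
            #pragma omp parallel for
            for (i = 1; i < NCELLS - 1; i++)
                flux[i] = 0.5 * (pressure[i + 1] - pressure[i - 1]);

            printf("flux[1] = %g (threads available: %d)\n", flux[1], omp_get_max_threads());
            return 0;
        }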

  13. The Atmospheric Release Advisory Capability Site Workstation System

    International Nuclear Information System (INIS)

    Foster, K.T.; Sumikawa, D.A.; Foster, C.S.; Baskett, R.L.

    1993-01-01

    The Atmospheric Release Advisory Capability (ARAC) is a centralized emergency response service that assesses the consequences that may result from an atmospheric release of toxic material. ARAC was developed by the Lawrence Livermore National Laboratory (LLNL) for the Departments of Energy (DOE) and Defense (DOD) and responds principally to radiological accidents. ARAC provides radiological health and safety guidance to decision makers in the form of computer-generated estimates of the effects of an actual, or potential, release of radioactive material into the atmosphere. Upon receipt of the release scenario, the ARAC assessment staff extracts meteorological, topographic, and geographic data from resident world-wide databases for use in complex, three-dimensional transport and diffusion models. These dispersion models generate air concentration (or dose) and ground deposition contour plots showing estimates of the contamination patterns produced as the toxic material is carried by the prevailing winds. To facilitate the ARAC response to a release from specific DOE and DOD sites and to provide these sites with a local emergency response tool, a remote Site Workstation System (SWS) is being placed at various ARAC-supported facilities across the country. This SWS replaces the existing antiquated ARAC Site System now installed at many of these sites. The new system gives users access to complex atmospheric dispersion models that may be run either by the ARAC staff at LLNL, or (in a later phase of the system) by site personnel using the computational resources of the SWS. Supporting this primary function are a variety of SWS-resident supplemental capabilities that include meteorological data acquisition, manipulation of release-specific databases, computer-based communications, and the use of a simpler Gaussian trajectory puff model that is based on the Environmental Protection Agency's INPUFF code

  14. Advanced human machine interaction for an image interpretation workstation

    Science.gov (United States)

    Maier, S.; Martin, M.; van de Camp, F.; Peinsipp-Byma, E.; Beyerer, J.

    2016-05-01

    In recent years, many new interaction technologies have been developed that enhance the usability of computer systems and allow for novel types of interaction. The areas of application for these technologies have mostly been in gaming and entertainment. However, in professional environments, there are especially demanding tasks that would greatly benefit from improved human machine interfaces as well as an overall improved user experience. We therefore envisioned and built an image interpretation workstation of the future, a multi-monitor workplace comprising four screens. Each screen is dedicated to a complex software product such as a geo-information system to provide geographic context, an image annotation tool, software to generate standardized reports and a tool to aid in the identification of objects. Using self-developed systems for hand tracking, pointing gestures and head pose estimation in addition to touchscreens, face identification, and speech recognition systems, we created a novel approach to this complex task. For example, head pose information is used to save the position of the mouse cursor on the currently focused screen and to restore it as soon as the same screen is focused again, while hand gestures allow for intuitive manipulation of 3d objects in mid-air. While the primary focus is on the task of image interpretation, all of the technologies involved provide generic ways of efficiently interacting with a multi-screen setup and could be utilized in other fields as well. In preliminary experiments, we received promising feedback from users in the military and started to tailor the functionality to their needs.

  15. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  16. A compositional reservoir simulator on distributed memory parallel computers

    International Nuclear Information System (INIS)

    Rame, M.; Delshad, M.

    1995-01-01

    This paper presents the application of distributed memory parallel computers to field scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/960 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes porting to new parallel platforms straightforward. Results of the distributed memory computing performance of the parallel simulator are presented for field scale applications such as tracer flood and polymer flood. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented
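
    The domain decomposition described, in which each subdomain is extended so data can be shared with adjacent processors before the stencil computation, corresponds to the familiar ghost-cell (halo) exchange. A minimal 1-D MPI sketch of that pattern, not UTCHEM code, is shown below; the field values are dummies.

        #include <mpi.h>
        #include <stdio.h>

        #define NLOC 64   /* interior cells owned by each processor */

        int main(int argc, char **argv)
        {
            double p[NLOC + 2];            /* one ghost cell on each side */
            int rank, size, left, right, i;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
            right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

            for (i = 1; i <= NLOC; i++) p[i] = rank;   /* dummy field data */

            /* Exchange ghost cells with neighbours before the stencil computation. */
            MPI_Sendrecv(&p[NLOC], 1, MPI_DOUBLE, right, 0,
                         &p[0],    1, MPI_DOUBLE, left,  0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&p[1],        1, MPI_DOUBLE, left,  1,
                         &p[NLOC + 1], 1, MPI_DOUBLE, right, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            MPI_Finalize();
            return 0;
        }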

  17. Parallel processing of two-dimensional Sn transport calculations

    International Nuclear Information System (INIS)

    Uematsu, M.

    1997-01-01

    A parallel processing method for the two-dimensional Sn transport code DOT3.5 has been developed to achieve a drastic reduction in computation time. In the proposed method, parallelization is achieved with angular domain decomposition and/or space domain decomposition. The calculational speed of parallel processing by angular domain decomposition is largely influenced by frequent communications between processing elements. To assess parallelization efficiency, sample problems with up to 32 x 32 spatial meshes were solved with a Sun workstation using the PVM message-passing library. As a result, parallel calculation using 16 processing elements, for example, was found to be nine times as fast as that with one processing element. As for parallel processing by geometry segmentation, the influence of processing element communications on computation time is small; however, discontinuity at the segment boundary degrades convergence speed. To accelerate the convergence, an alternate sweep of angular flux in conjunction with space domain decomposition and a two-step rescaling method consisting of segmentwise rescaling and ordinary pointwise rescaling have been developed. By applying the developed method, the number of iterations needed to obtain a converged flux solution was reduced by a factor of 2. As a result, parallel calculation using 16 processing elements was found to be 5.98 times as fast as the original DOT3.5 calculation
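
    In angular domain decomposition each processing element sweeps only a subset of the discrete ordinates and the contributions are then combined into the scalar flux, which is where the frequent communication arises. The MPI sketch below shows only that communication pattern with placeholder physics; it is not DOT3.5 code.

        #include <mpi.h>
        #include <stdio.h>

        #define NCELL 32
        #define NANG  16   /* total number of discrete ordinates */

        int main(int argc, char **argv)
        {
            double phi_local[NCELL] = {0}, phi[NCELL];
            int rank, size, m, j;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* Each processing element sweeps only its own block of angles ... */
            for (m = rank; m < NANG; m += size)
                for (j = 0; j < NCELL; j++)
                    phi_local[j] += 1.0 / NANG;   /* stand-in for weight * angular flux */

            /* ... then the scalar flux is assembled across all processing elements. */
            MPI_Allreduce(phi_local, phi, NCELL, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

            if (rank == 0) printf("phi[0] = %g\n", phi[0]);
            MPI_Finalize();
            return 0;
        }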

  18. The use of bicycle workstations to increase physical activity in secondary classrooms

    Directory of Open Access Journals (Sweden)

    Alicia Fedewa

    2017-11-01

    Full Text Available Background To date, the majority of interventions have implemented classroom-based physical activity (PA) at the elementary level; however, there is both the potential and need to explore student outcomes at the high-school level as well, given that very few studies have incorporated classroom-based PA interventions for adolescents. One exception has been the use of bicycle workstations within secondary classrooms. Using bicycle workstations in lieu of traditional chairs in a high school setting shows promise for enhancing adolescents’ physical activity during the school day. Participants and procedure The present study explored the effects of integrating bicycle workstations into a secondary classroom setting for four months in a sample of 115 adolescents using an A-B-A-B withdrawal design. The study took place in one Advanced Placement English classroom across five groups of students. Physical activity outcomes included average heart rate and caloric expenditure. Behavioural outcomes included percentage of on-task/off-task behaviour and number of teacher prompts in redirecting off-task behaviour. Feasibility and acceptability data on using the bicycle workstations were also collected. Results Findings showed significant improvements in physical activity as measured by heart rate and caloric expenditure, although heart rate percentage remained in the low intensity range when students were on the bicycle workstations. No effects were found on students’ on-task behaviour when using the bicycle workstations. Overall, students found the bikes acceptable to use but noted disadvantages of them as well. Conclusions Using bicycle workstations in high-school settings appears promising for enhancing low-intensity physical activity among adolescents. The limitations of the present study and implications for physical activity interventions in secondary schools are discussed.

  19. Shoulder girdle muscle activity and fatigue in traditional and improved design carpet weaving workstations.

    Science.gov (United States)

    Allahyari, Teimour; Mortazavi, Narges; Khalkhali, Hamid Reza; Sanjari, Mohammad Ali

    2016-01-01

    Work-related musculoskeletal disorders in the neck and shoulder regions are common among carpet weavers. Working for prolonged hours in a static and awkward posture could result in increased muscle activity and may lead to musculoskeletal disorders. Ergonomic workstation improvements can reduce muscle fatigue and the risk of musculoskeletal disorders. The aim of this study is to assess and compare upper trapezius and middle deltoid muscle activity in two carpet weaving workstations, a traditional and an improved design. These two workstations were simulated in a laboratory and 12 women carpet weavers worked in them for 3 h. Electromyography (EMG) signals were recorded during work from the bilateral upper trapezius and bilateral middle deltoid muscles. The root mean square (RMS) and median frequency (MF) values were calculated and used to assess muscle load and fatigue. Repeated measures ANOVA was performed to assess the effect of independent variables on muscular activity and fatigue. The participants were asked to report shoulder region fatigue on the Borg's Category-Ratio scale (Borg CR-10). Root mean square values in workstation A are significantly higher than in workstation B. Furthermore, EMG amplitude was higher in the bilateral trapezius than in the bilateral deltoid. However, muscle fatigue was not observed in any of the workstations. The results of the study revealed that muscle load in the traditional workstation was high, but fatigue was not observed. Further studies investigating other muscles involved in carpet weaving tasks are recommended. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
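
    The RMS amplitude and median frequency used here as load and fatigue indices are standard EMG measures. The self-contained C sketch below shows how such values might be computed for one epoch under stated assumptions: a synthetic test signal, a naive DFT, and an assumed 1024 Hz sampling rate; it is not the analysis pipeline from the study.

        #include <math.h>
        #include <stdio.h>

        #ifndef M_PI
        #define M_PI 3.14159265358979323846
        #endif

        #define N  256
        #define FS 1024.0   /* assumed sampling rate, Hz */

        int main(void)
        {
            double x[N], power[N / 2], rms = 0.0, total = 0.0, cum = 0.0, mf = 0.0;
            int n, k;

            for (n = 0; n < N; n++)                      /* synthetic stand-in for an EMG epoch */
                x[n] = sin(2.0 * M_PI * 80.0 * n / FS);

            for (n = 0; n < N; n++) rms += x[n] * x[n];  /* RMS amplitude: muscle load index */
            rms = sqrt(rms / N);

            for (k = 1; k < N / 2; k++) {                /* naive DFT power spectrum */
                double re = 0.0, im = 0.0;
                for (n = 0; n < N; n++) {
                    re += x[n] * cos(2.0 * M_PI * k * n / N);
                    im -= x[n] * sin(2.0 * M_PI * k * n / N);
                }
                power[k] = re * re + im * im;
                total += power[k];
            }

            for (k = 1; k < N / 2; k++) {                /* median frequency: fatigue index */
                cum += power[k];
                if (cum >= 0.5 * total) { mf = k * FS / N; break; }
            }

            printf("RMS = %.3f, median frequency = %.1f Hz\n", rms, mf);
            return 0;
        }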

  20. Posture And Dorsal Shape At A Sitted Workstation

    Science.gov (United States)

    Lepoutre, F. X.; Cloup, P.; Guerra, T. M.

    1986-07-01

    The ergonomic analysis of a control or supervision workstation for a vehicle or a process requires taking into account the biomechanical visuo-postural system. The necessary measurements must give information about the spatial direction of the limbs, the dorsal shape, possibly the eye direction, and the postural evolution during working time. Moreover, the smallness of the workstation, the backrest and sometimes a vibratory environment call for specific, robust and small devices which do not disturb the operator. The measurement system which we propose is built around an optical device. This system is studied in relation with the French "Institute de Recherche pour les Transports" for an ergonomic analysis of a truck cabin. The optical device consists of placing on the driver's body, at particular places materializing the limb and trunk joint points, reflective drops which reflect the infra-red rays coming from a specific light source. Several cameras, whose relative positions depend on the experimental site, transmit video signals to the associated processing systems, which extract the coordinates (Xi, Yi) of each drop in the field of view of each camera. By regrouping the information obtained from every view, it is possible to obtain the spatial drop positions and then to restore the individual's posture in three dimensions. However, because of the backrest, this device does not enable us to analyse the dorsal posture, which is important with regard to the frequency of dorsal pain. For that reason, we complete the measurements by using a "curvometer". This device consists of a flexible stick fixed upon the individual's back with elastic belts, whose distortions (curvature in m-1) are measured, in the individual's sagittal plane, with 4 pairs of strain gauges located approximately at the level of vertebrae D1, D6, D10 and L3. A fifth measurement, concerning the inclination (in degrees) of the lower part of the stick, makes it possible to

  1. UWGSP7: a real-time optical imaging workstation

    Science.gov (United States)

    Bush, John E.; Kim, Yongmin; Pennington, Stan D.; Alleman, Andrew P.

    1995-04-01

    With the development of UWGSP7, the University of Washington Image Computing Systems Laboratory has a real-time workstation for continuous-wave (cw) optical reflectance imaging. Recent discoveries in optical science and imaging research have suggested potential practical use of the technology as a medical imaging modality and identified the need for a machine to support these applications in real time. The UWGSP7 system was developed to provide researchers with a high-performance, versatile tool for use in optical imaging experiments, with the eventual goal of bringing the technology into clinical use. One of several major applications of cw optical reflectance imaging is tumor imaging, which uses a light-absorbing dye that preferentially sequesters in tumor tissue. This property could be used to locate tumors and to identify tumor margins intraoperatively. Cw optical reflectance imaging consists of illumination of a target with a band-limited light source and monitoring the light transmitted by or reflected from the target. While continuously illuminating the target, a control image is acquired and stored. A dye is injected into the subject and a sequence of data images is acquired and processed. The data images are aligned with the control image and then subtracted to obtain a signal representing the change in optical reflectance over time. This signal can be enhanced by digital image processing and displayed in pseudo-color. This type of emerging imaging technique requires a computer system that is versatile and adaptable. The UWGSP7 utilizes a VESA local bus PC as a host computer running the Windows NT operating system and includes ICSL-developed add-on boards for image acquisition and processing. The image acquisition board is used to digitize and format the analog signal from the input device into digital frames and to average frames into images. To accommodate different input devices, the camera interface circuitry is designed in a small mezzanine board
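
    The processing chain described, frame averaging followed by subtraction of a pre-injection control image, can be sketched in a few lines of C. Image size, frame count and pixel values below are placeholders rather than UWGSP7 details.

        #include <stdio.h>

        #define W 64
        #define H 64
        #define NFRAMES 8

        /* Average a burst of raw camera frames into one image (noise reduction). */
        static void average_frames(float raw[NFRAMES][W * H], float out[W * H])
        {
            int f, i;
            for (i = 0; i < W * H; i++) out[i] = 0.0f;
            for (f = 0; f < NFRAMES; f++)
                for (i = 0; i < W * H; i++)
                    out[i] += raw[f][i] / NFRAMES;
        }

        int main(void)
        {
            static float raw[NFRAMES][W * H], control[W * H], data[W * H], sig[W * H];
            int f, i;

            for (f = 0; f < NFRAMES; f++)           /* dummy digitized frames */
                for (i = 0; i < W * H; i++)
                    raw[f][i] = 100.0f;
            average_frames(raw, control);           /* control image, before dye injection */

            for (f = 0; f < NFRAMES; f++)           /* later frames: reflectance has dropped */
                for (i = 0; i < W * H; i++)
                    raw[f][i] = 90.0f;
            average_frames(raw, data);              /* data image, after injection */

            for (i = 0; i < W * H; i++)             /* signal = change in optical reflectance */
                sig[i] = data[i] - control[i];

            printf("signal at pixel 0: %f\n", sig[0]);
            return 0;
        }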

  2. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    [ABSTRACT] Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chip in a high-performance workstation. With their high-performance computation capabilities, graphics processing units (GPUs) achieve much more powerful performance than conventional CPUs by means of parallel processing. In 2007, the birth of the Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in the general purpose GPU a...

  3. Out-of-core nuclear fuel cycle optimization utilizing an engineering workstation

    International Nuclear Information System (INIS)

    Turinsky, P.J.; Comes, S.A.

    1986-01-01

    Within the past several years, rapid advances in computer technology have resulted in substantial increases in their performance. The net effect is that problems that could previously only be executed on mainframe computers can now be executed on micro- and minicomputers. The authors are interested in developing an engineering workstation for nuclear fuel management applications. An engineering workstation is defined as a microcomputer with enhanced graphics and communication capabilities. Current fuel management applications range from using workstations as front-end/back-end processors for mainframe computers to completing fuel management scoping calculations. More recently, interest in using workstations for final in-core design calculations has appeared. The authors have used the VAX 11/750 minicomputer, which is not truly an engineering workstation but has comparable performance, to complete both in-core and out-of-core fuel management scoping studies. In this paper, the authors concentrate on our out-of-core research. While much previous work in this area has dealt with decisions concerned with equilibrium cycles, the current project addresses the more realistic situation of nonequilibrium cycles

  4. A PC/workstation cluster computing environment for reservoir engineering simulation applications

    International Nuclear Information System (INIS)

    Hermes, C.E.; Koo, J.

    1995-01-01

    Like the rest of the petroleum industry, Texaco has been transferring its applications and databases from mainframes to PC's and workstations. This transition has been very positive because it provides an environment for integrating applications, increases end-user productivity, and in general reduces overall computing costs. On the down side, the transition typically results in a dramatic increase in workstation purchases and raises concerns regarding the cost and effective management of computing resources in this new environment. The workstation transition also places the user in a Unix computing environment which, to say the least, can be quite frustrating to learn and to use. This paper describes the approach, philosophy, architecture, and current status of the new reservoir engineering/simulation computing environment developed at Texaco's E and P Technology Dept. (EPTD) in Houston. The environment is representative of those under development at several other large oil companies and is based on a cluster of IBM and Silicon Graphics Intl. (SGI) workstations connected by a fiber-optics communications network and engineering PC's connected to local area networks, or Ethernets. Because computing resources and software licenses are shared among a group of users, the new environment enables the company to get more out of its investments in workstation hardware and software

  5. Real-time monitoring/emergency response modeling workstation for a tritium facility

    International Nuclear Information System (INIS)

    Lawver, B.S.; Sims, J.M.; Baskett, R.L.

    1993-01-01

    At Lawrence Livermore National Laboratory (LLNL) we have developed a real-time system to monitor two stacks on our tritium handling facility. The monitors transmit the stack data to a workstation, which computes a three-dimensional numerical model of atmospheric dispersion. The workstation also collects surface and upper air data from meteorological towers and a sodar. The complex meteorological and terrain setting in the Livermore Valley demands more sophisticated resolution of the three-dimensional structure of the atmosphere to reliably calculate plume dispersion than afforded by Gaussian models. We experience both mountain valley and sea breeze flows. To address these complexities, we have implemented the three-dimensional diagnostic MATHEW mass-adjusted wind field and ADPIC particle-in-cell dispersion models on the workstation for use in real-time emergency response modeling. Both MATHEW and ADPIC have shown their utility in a variety of complex settings over the last 15 yr within the U.S. Department of Energy's Atmospheric Release Advisory Capability (ARAC) project. Faster workstations and real-time instruments allow utilization of more complex three-dimensional models, which provides a foundation for building a real-time monitoring and emergency response workstation for a tritium facility. The stack monitors are two ion chambers per stack

  6. System engineering workstations - critical tool in addressing waste storage, transportation, or disposal

    International Nuclear Information System (INIS)

    Mar, B.W.

    1987-01-01

    The ability to create, evaluate, operate, and manage waste storage, transportation, and disposal systems (WSTDSs) is greatly enhanced when automated tools are available to support the generation of the voluminous mass of documents and data associated with the system engineering of the program. A system engineering workstation is an optimized set of hardware and software that provides such automated tools to those performing system engineering functions. This paper explores the functions that need to be performed by a WSTDS system engineering workstation. While the latter stages of a major WSTDS may require a mainframe computer and specialized software systems, most of the required system engineering functions can be supported by a system engineering workstation consisting of a personnel computer and commercial software. These findings suggest system engineering workstations for WSTDS applications will cost less than $5000 per unit, and the payback on the investment can be realized in a few months. In most cases the major cost element is not the capital costs of hardware or software, but the cost to train or retrain the system engineers in the use of the workstation and to ensure that the system engineering functions are properly conducted

  7. The impact of sit-stand office workstations on worker discomfort and productivity: a review.

    Science.gov (United States)

    Karakolis, Thomas; Callaghan, Jack P

    2014-05-01

    This review examines the effectiveness of sit-stand workstations at reducing worker discomfort without causing a decrease in productivity. Four databases were searched for studies on sit-stand workstations, and five selection criteria were used to identify appropriate articles. Fourteen articles were identified that met at least three of the five selection criteria. Seven of the identified studies reported either local, whole body or both local and whole body subjective discomfort scores. Six of these studies indicated implementing sit-stand workstations in an office environment led to lower levels of reported subjective discomfort (three of which were statistically significant). Therefore, this review concluded that sit-stand workstations are likely effective in reducing perceived discomfort. Eight of the identified studies reported a productivity outcome. Three of these studies reported an increase in productivity during sit-stand work, four reported no affect on productivity, and one reported mixed productivity results. Therefore, this review concluded that sit-stand workstations do not cause a decrease in productivity. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  8. Experiences with installing and benchmarking SCALE 4.0 on workstations

    International Nuclear Information System (INIS)

    Montierth, L.M.; Briggs, J.B.

    1992-01-01

    The advent of economical, high-speed workstations has placed on the criticality engineer's desktop the means to perform computational analysis that was previously possible only on mainframe computers. With this capability comes the need to modify and maintain criticality codes for use on a variety of different workstations. Due to the use of nonstandard coding, compiler differences [in lieu of American National Standards Institute (ANSI) standards], and other machine idiosyncrasies, there is a definite need to systematically test and benchmark all codes ported to workstations. Once benchmarked, a user environment must be maintained to ensure that user code does not become corrupted. The goal in creating a workstation version of the criticality safety analysis sequence (CSAS) codes in SCALE 4.0 was to start with the Cray versions and change as little source code as possible yet produce as generic a code as possible. To date, this code has been ported to the IBM RISC 6000, Data General AViiON 400, Silicon Graphics 4D-35 (all using the same source code), and to the Hewlett Packard Series 700 workstations. The code is maintained under a configuration control procedure. In this paper, the authors address considerations that pertain to the installation and benchmarking of CSAS

  9. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment. Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  10. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where an algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  11. Parallel fuzzy connected image segmentation on GPU.

    Science.gov (United States)

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K; Miller, Robert W

    2011-07-01

    Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implementation on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU. A dramatic improvement in speed for both tasks is achieved as a result. Our experiments based on three data sets of small, medium, and large data size demonstrate the efficiency of the parallel algorithm, which achieves a speed-up factor of 24.4x, 18.1x, and 10.3x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the implementation of the algorithm on CPU, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. The authors developed a parallel algorithm of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than both clusters of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set.
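
    The first computational task named above, computing fuzzy affinity relations, is a per-pixel-pair operation with no dependences between pairs, which is why it maps well onto a GPU. The CPU-side C sketch below uses a simplified homogeneity-only affinity (a Gaussian of the intensity difference) rather than the authors' exact formulation; image contents and the sigma parameter are placeholders.

        #include <math.h>
        #include <stdio.h>

        #define W 8
        #define H 8

        /* Simplified homogeneity-based affinity between two adjacent pixels:
         * close to 1 when intensities are similar, close to 0 across an edge. */
        static double affinity(double a, double b, double sigma)
        {
            double d = a - b;
            return exp(-(d * d) / (2.0 * sigma * sigma));
        }

        int main(void)
        {
            double img[H][W], mu_right[H][W - 1];
            int x, y;

            for (y = 0; y < H; y++)              /* toy image: two flat regions */
                for (x = 0; x < W; x++)
                    img[y][x] = (x < W / 2) ? 10.0 : 200.0;

            /* Affinity of each pixel with its right neighbour; every pair is
             * independent, so the loop parallelizes trivially (one GPU thread per pair). */
            for (y = 0; y < H; y++)
                for (x = 0; x < W - 1; x++)
                    mu_right[y][x] = affinity(img[y][x], img[y][x + 1], 20.0);

            printf("inside region: %.3f, across edge: %.3f\n",
                   mu_right[0][0], mu_right[0][W / 2 - 1]);
            return 0;
        }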

  12. Workstations as consoles for the CERN-PS complex, setting-up the environment

    International Nuclear Information System (INIS)

    Antonsanti, P.; Arruat, M.; Bouche, J.M.; Cons, L.; Deloose, Y.; Di Maio, F.

    1992-01-01

    Within the framework of the rejuvenation project of the CERN control systems, commercial workstations have to replace existing home-designed operator consoles. RISC-based workstations with UNIX, X-window TM and OSF/Motif TM have been introduced for the control of the PS complex. The first versions of general functionalities like synoptic display, program selection and control panels have been implemented and the first large scale application has been realized. This paper describes the different components of the workstation environment for the implementation of the applications. The focus is on the set of tools which have been used, developed or integrated, and on how we plan to make them evolve. (author)

  13. JACK - ANTHROPOMETRIC MODELING SYSTEM FOR SILICON GRAPHICS WORKSTATIONS

    Science.gov (United States)

    Smith, B.

    1994-01-01

    human figure in an environment. Integrated into JACK is a set of vision tools that allow predictions about visibility and legibility. The program is capable of displaying environment perspectives corresponding to what the mannequin would see while in the environment, indicating potential problems with occlusion and visibility. It is also possible to display view cones emanating from the figure's eyes, indicating field of view. Another feature projects the environment onto retina coordinates which gives clues regarding visual angles, acuity and occlusion by the biological blind spots. A retina editor makes it possible to draw onto the retina and project that into 3-dimensional space. Another facility, Reach, causes the mannequin to move a specific portion of its anatomy to a chosen point in space. The Reach facility helps in analyzing problems associated with operator size and other constraints. The 17-segment torso makes it possible to set a figure into realistic postures, simulating human postures closely. The JACK application software is written in C-language for Silicon Graphics workstations running IRIX versions 4.0.5 or higher and is available only in executable form. Since JACK is a copyrighted program (copyright 1991 University of Pennsylvania), this executable may not be redistributed. The recommended minimum hardware configuration for running the executable includes a floating-point accelerator, an 8-megabyte program memory, a high resolution (1280 x 1024) graphics card, and at least 50Mb of free disk space. JACK's data files take up millions of bytes of storage space, so additional disk space is highly recommended. The standard distribution medium for JACK is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. JACK was originally developed in 1988. Jack v4.8 was released for distribution through COSMIC in 1993.

  14. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  15. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  16. Viewport: an object-oriented approach to integrate workstation software for tile and stack mode display.

    Science.gov (United States)

    Ghosh, S; Andriole, K P; Avrin, D E

    1997-08-01

    Diagnostic workstation design has migrated towards display presentation in one of two modes: tiled images or stacked images. It is our impression that the workstation setup or configuration in each of these two modes is rather distinct. We sought to establish a commonality to simplify software design, and to enable a single descriptor method to facilitate folder manager development of "hanging" protocols. All current workstation designs use a combination of "off-screen" and "on-screen" memory whether or not they use a dedicated display subsystem, or merely a video board. Most diagnostic workstations also have two or more monitors. Our central concept is that of a "logical" viewport that can be smaller than, the same size as, or larger than a single monitor. Each port "views" an image data sequence loaded into offscreen memory. Each viewport can display one or more images in sequence in a one-on-one or traditionally tiled presentation. Viewports can be assigned to the available monitor "real estate" in any manner that fits. For example, a single sequence computed tomography (CT) study could be displayed across all monitors in a tiled appearance by assigning a single large viewport to the monitors. At the other extreme, a multisequence magnetic resonance (MR) study could be compared with a similar previous study by assigning four viewports to each monitor, single image display per viewport, and assigning four of the sequences of the current study to the left monitor viewports, and four of the earlier study to the right monitor viewports. Ergonomic controls activate scrolling through the off-screen image sequence data. Workstation folder manager hanging protocols could then specify viewports, number of images per viewport, and the automatic assignment of appropriately named sequences of current and previous studies to the viewports on a radiologist-specific basis. Furthermore, software development is simplified by common base objects and methods of the tile and stack
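
    Although the paper's implementation is object-oriented, the central abstraction can be captured by a small record type. The fields below are our guess at a minimal viewport descriptor for illustration only, not the authors' class definition; the dimensions and image counts are hypothetical.

        #include <stdio.h>

        /* A logical viewport: a rectangle of monitor real estate that views an image
         * sequence held in off-screen memory. Tiled display shows many images per
         * viewport; stack display shows one image per viewport and scrolls through it. */
        typedef struct {
            int monitor;        /* which physical display the viewport occupies */
            int x, y, w, h;     /* placement on that monitor, in pixels */
            int rows, cols;     /* images shown at once: 1x1 = stack, MxN = tile */
            int n_images;       /* length of the off-screen sequence */
            int first_visible;  /* index currently scrolled to within the sequence */
        } Viewport;

        int main(void)
        {
            /* One large tiled viewport spanning a wide display for a CT sequence ... */
            Viewport ct = {0, 0, 0, 2560, 1024, 4, 6, 120, 0};
            /* ... versus a small stack-mode viewport for one MR sequence. */
            Viewport mr = {1, 0, 0, 640, 512, 1, 1, 30, 0};

            printf("CT viewport shows %d images at once, MR viewport shows %d\n",
                   ct.rows * ct.cols, mr.rows * mr.cols);
            return 0;
        }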

  17. Parallel performance of the angular versus spatial domain decomposition for discrete ordinates transport methods

    International Nuclear Information System (INIS)

    Fischer, J.W.; Azmy, Y.Y.

    2003-01-01

    A previously reported parallel performance model for Angular Domain Decomposition (ADD) of the Discrete Ordinates method for solving multidimensional neutron transport problems is revisited for further validation. Three communication schemes (native MPI, the bucket algorithm, and the distributed bucket algorithm) are included in the validation exercise, which is successfully conducted on a Beowulf cluster. The parallel performance model comprises three components: serial, parallel, and communication. The serial component is largely independent of the number of participating processors, P, while the parallel component decreases like 1/P. These two components are independent of the communication scheme, in contrast with the communication component that typically increases with P in a manner highly dependent on the global reduction algorithm. Correct trends for each component and each communication scheme were measured for the Arbitrarily High Order Transport (AHOT) code, thus validating the performance models. Furthermore, extensive experiments illustrate the superiority of the bucket algorithm. The primary question addressed in this research is: for a given problem size, which domain decomposition method, angular or spatial, is best suited to parallelize Discrete Ordinates methods on a specific computational platform? We address this question for three-dimensional applications via parallel performance models that include parameters specifying the problem size and system performance: the above-mentioned ADD, and a previously constructed and validated Spatial Domain Decomposition (SDD) model. We conclude that for large problems the parallel component dwarfs the communication component even on moderately large numbers of processors. The main advantages of SDD are: (a) scalability to higher numbers of processors of the order of the number of computational cells; (b) smaller memory requirement; (c) better performance than ADD on high-end platforms and large number of
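
    The three-component model (a serial part independent of P, a parallel part decreasing like 1/P, and a communication part growing with P) can be sketched numerically. The component values and the linear communication term below are illustrative assumptions, not measurements from the AHOT study.

      /* Toy evaluation of T(P) = T_serial + T_parallel/P + T_comm(P). */
      #include <stdio.h>

      int main(void) {
          const double t_serial   = 2.0;    /* seconds, independent of P        */
          const double t_parallel = 600.0;  /* seconds of parallelizable work   */
          const double t_comm0    = 0.05;   /* assumed per-processor comm. cost */

          for (int p = 1; p <= 64; p *= 2) {
              double t = t_serial + t_parallel / p + t_comm0 * p;
              printf("P = %3d  T = %8.2f s  speedup = %6.2f\n",
                     p, t, (t_serial + t_parallel + t_comm0) / t);
          }
          return 0;
      }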

  18. Stampi: a message passing library for distributed parallel computing. User's guide

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Koide, Hiroshi; Takemiya, Hiroshi

    1998-11-01

    A new message passing library, Stampi, has been developed to enable computation across different kinds of parallel computers, with MPI (Message Passing Interface) as the single interface for communication. Stampi is based on the MPI-2 specification. It realizes dynamic process creation on different machines and communication with the spawned processes within the scope of MPI semantics. Vendor MPI implementations are typically closed systems confined to a single parallel machine and support neither of these functions: process creation on, and communication with, external machines. Stampi supports both functions and thus enables distributed parallel computing. Stampi has currently been implemented on COMPACS (COMplex PArallel Computer System) introduced at CCSE, comprising five parallel computers and one graphics workstation, and communication among all of them is supported. (author)
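
    Stampi follows MPI-2 semantics for dynamic process creation, so the mechanism it builds on can be sketched with standard MPI calls alone. The spawned program name "worker" and the process count below are placeholders; the Stampi-specific routing to an external machine is not shown.

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          MPI_Comm children;
          int rank;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          /* spawn 4 worker processes; in a Stampi-style setup these could be
           * started on a different machine of the coupled system */
          MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                         0, MPI_COMM_WORLD, &children, MPI_ERRCODES_IGNORE);

          /* talk to the spawned processes through the intercommunicator */
          if (rank == 0) {
              int msg = 42;
              MPI_Send(&msg, 1, MPI_INT, 0, 0, children);
          }

          MPI_Finalize();
          return 0;
      }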

  19. A workstation based spectrometry application for ECR ion source [Paper No.: G5]

    International Nuclear Information System (INIS)

    Suresh Babu, R.M. (PS Div.)

    1993-01-01

    A program for an Electron Cyclotron Resonance (ECR) Ion Source beam diagnostics application in an X-Windows/Motif based workstation environment is discussed. The application program controls the hardware and acquires data via a front end computer across a local area network. The data is subsequently processed for display on the workstation console. The timing for data acquisition and control is determined by the particle source timing. The user interface has been implemented using the Motif widget set and the actions have been implemented through callback routines. The equipment interface is through a set of database driven calls across the network. (author). 7 refs., 1 fig

  20. BioPhotonics Workstation: 3D interactive manipulation, observation and characterization

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    2011-01-01

    In ppo.dk we have invented the BioPhotonics Workstation to be applied in 3D research on regulated microbial cell growth including their underlying physiological mechanisms, in vivo characterization of cell constituents and manufacturing of nanostructures and new materials.

  1. Analysis on the influence of supply method on a workstation with the help of dynamic simulation

    Directory of Open Access Journals (Sweden)

    Gavriluță Alin

    2017-01-01

    Full Text Available Considering the need for flexibility in any manufacturing process, the choice of the supply method for an assembly workstation is a decision with a considerable influence on its performance. Using dynamic simulation, this article compares the effect of three different supply methods on workstation cycle time: supply from stock, supply in the "Strike Zone", and synchronous supply. This study is part of an extended work that aims to compare, by means of 3D layout design and dynamic simulation, the effect of different supply methods on assembly line performance.

  2. Stereotactic biopsy aided by a computer graphics workstation: experience with 200 consecutive cases.

    Science.gov (United States)

    Ulm, A J; Bova, F J; Friedman, W A

    2001-12-01

    The advent of modern computer technology has made it possible to examine not just the target point, but the entire trajectory in planning for stereotactic biopsies. Two hundred consecutive biopsies were performed by one surgeon, utilizing a computer graphics workstation. The target point, entry point, and complete trajectory were carefully scrutinized and adjusted to minimize potential complications. Pathologically abnormal tissue was obtained in 197 cases (98.5%). There was no mortality in this series. Symptomatic hemorrhages occurred in 4 cases (2%). Computer graphics workstations facilitate safe and effective biopsies in virtually any brain area.

  3. Parallel computation of automatic differentiation applied to magnetic field calculations

    International Nuclear Information System (INIS)

    Hinkins, R.L.; Lawrence Berkeley Lab., CA

    1994-09-01

    The author presents a parallelization of an accelerator physics application to simulate magnetic fields in three dimensions. The problem involves the evaluation of high order derivatives with respect to two variables of a multivariate function. Automatic differentiation software had been used with some success, but the computation time was prohibitive. The implementation runs on several platforms, including a network of workstations using PVM, a MasPar using MPFortran, and a CM-5 using CMFortran. A careful examination of the code led to several optimizations that improved its serial performance by a factor of 8.7. The parallelization produced further improvements, especially on the MasPar with a speedup factor of 620. As a result a problem that took six days on a SPARC 10/41 now runs in minutes on the MasPar, making it feasible for physicists at Lawrence Berkeley Laboratory to simulate larger magnets.

  4. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    Science.gov (United States)

    Long, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a type of trial-and-error search technique that are guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly-generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency
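
    A minimal sketch of the master-slave (processor-farm) scheme described above, assuming MPI, one genome per process, and a toy bit-counting fitness function; none of these specifics come from the paper.

      #include <mpi.h>
      #include <stdlib.h>

      #define GENES 32

      /* placeholder fitness: count of 1-bits in the genome */
      static double fitness(const int *genome) {
          double f = 0.0;
          for (int i = 0; i < GENES; i++) f += genome[i];
          return f;
      }

      int main(int argc, char **argv) {
          int rank, size;
          int *population = NULL;
          int genome[GENES];
          double score, *scores = NULL;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          if (rank == 0) {   /* master holds the whole population */
              population = malloc((size_t)size * GENES * sizeof *population);
              scores = malloc((size_t)size * sizeof *scores);
              for (int i = 0; i < size * GENES; i++) population[i] = rand() % 2;
          }

          /* farm out one genome per process and gather the fitness scores */
          MPI_Scatter(population, GENES, MPI_INT, genome, GENES, MPI_INT,
                      0, MPI_COMM_WORLD);
          score = fitness(genome);
          MPI_Gather(&score, 1, MPI_DOUBLE, scores, 1, MPI_DOUBLE,
                     0, MPI_COMM_WORLD);

          /* the master would now apply selection, recombination and mutation */
          MPI_Finalize();
          return 0;
      }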

  5. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
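
    Of the three molecular dynamics decompositions mentioned, the replicated-data variant is the simplest to sketch: every process holds all positions, computes forces for a subset of atoms, and a global sum completes the force array. The particle count and the toy pair force below are assumptions for illustration.

      #include <mpi.h>
      #include <string.h>

      #define NATOMS 256

      int main(int argc, char **argv) {
          static double x[NATOMS], f_local[NATOMS], f[NATOMS];
          int rank, size;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          for (int i = 0; i < NATOMS; i++) x[i] = (double)i;   /* replicated positions */
          memset(f_local, 0, sizeof f_local);

          /* each process takes every size-th atom (round-robin load balance) */
          for (int i = rank; i < NATOMS; i += size)
              for (int j = 0; j < NATOMS; j++)
                  if (j != i)
                      f_local[i] += 1.0 / ((x[i] - x[j]) * (x[i] - x[j]));  /* toy force */

          /* global sum gives every process the complete force array */
          MPI_Allreduce(f_local, f, NATOMS, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

          MPI_Finalize();
          return 0;
      }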

  6. General specifications for the development of a USL/DBMS NASA/PC R and D distributed workstation

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Chum, Frank Y.

    1984-01-01

    The general specifications for the development of a PC-based distributed workstation (PCDWS) for an information storage and retrieval systems environment are defined. This research proposes the development of a PCDWS prototype as part of the University of Southwestern Louisiana Data Base Management System (USL/DBMS) NASA/PC R and D project in the PC-based workstation environment.

  7. Development of parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Sigmar, D.J.; Koniges, A.E.

    1996-01-01

    We report on our ongoing development of the 3D Fokker-Planck code ALLA for a highly collisional scrape-off-layer (SOL) plasma. A SOL with strong gradients of density and temperature in the spatial dimension is modeled. Our method is based on a 3-D adaptive grid (in space, magnitude of the velocity, and cosine of the pitch angle) and a second order conservative scheme. Note that the grid size is typically 100 x 257 x 65 nodes. It was shown in our previous work that only these capabilities make it possible to benchmark a 3D code against a spatially-dependent self-similar solution of a kinetic equation with the Landau collision term. In the present work we show results of a more precise benchmarking against the exact solutions of the kinetic equation using a new parallel code ALLAp with an improved method of parallelization and a modified boundary condition at the plasma edge. We also report first results from the code parallelization using Message Passing Interface for a Massively Parallel CRI T3D platform. We evaluate the ALLAp code performance versus the number of T3D processors used and compare its efficiency against a Work/Data Sharing parallelization scheme and a workstation version

  8. Analysis of multigrid methods on massively parallel computers: Architectural implications

    Science.gov (United States)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests that an efficient implementation requires either that the machine support the efficient transmission of long messages (up to 1000 words) or that the high initiation cost of a communication be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.

  9. Parallel computing in genomic research: advances and applications

    Directory of Open Access Journals (Sweden)

    Ocaña K

    2015-11-01

    Full Text Available Kary Ocaña (1), Daniel de Oliveira (2); (1) National Laboratory of Scientific Computing, Petrópolis, Rio de Janeiro, (2) Institute of Computing, Fluminense Federal University, Niterói, Brazil. Abstract: Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied for reducing the total processing time and to ease the management, treatment, and analyses of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article brings a systematic review of literature that surveys the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. Keywords: high-performance computing, genomic research, cloud computing, grid computing, cluster computing, parallel computing

  10. Predicting cycle time distributions for integrated processing workstations : an aggregate modeling approach

    NARCIS (Netherlands)

    Veeger, C.P.L.; Etman, L.F.P.; Lefeber, A.A.J.; Adan, I.J.B.F.; Herk, van J.; Rooda, J.E.

    2011-01-01

    To predict cycle time distributions of integrated processing workstations, detailed simulation models are almost exclusively used; these models require considerable development and maintenance effort. As an alternative, we propose an aggregate model that is a lumped-parameter representation of the

  11. Simulation model of a single-server order picking workstation using aggregate process times

    NARCIS (Netherlands)

    Andriansyah, R.; Etman, L.F.P.; Rooda, J.E.; Biles, W.E.; Saltelli, A.; Dini, C.

    2009-01-01

    In this paper we propose a simulation modeling approach based on aggregate process times for the performance analysis of order picking workstations in automated warehouses with first-in-first-out processing of orders. The aggregate process time distribution is calculated from tote arrival and

  12. Flow time prediction for a single-server order picking workstation using aggregate process times

    NARCIS (Netherlands)

    Andriansyah, R.; Etman, L.F.P.; Rooda, J.E.

    2010-01-01

    In this paper we propose a simulation modeling approach based on aggregate process times for the performance analysis of order picking workstations in automated warehouses. The aggregate process time distribution is calculated from tote arrival and departure times. We refer to the aggregate process

  13. Generalization of Posture Training to Computer Workstations in an Applied Setting

    Science.gov (United States)

    Sigurdsson, Sigurdur O.; Ring, Brandon M.; Needham, Mick; Boscoe, James H.; Silverman, Kenneth

    2011-01-01

    Improving employees' posture may decrease the risk of musculoskeletal disorders. The current paper is a systematic replication and extension of Sigurdsson and Austin (2008), who found that an intervention consisting of information, real-time feedback, and self-monitoring improved participant posture at mock workstations. In the current study,…

  14. 76 FR 21775 - Notice of Issuance of Final Determination Concerning Certain Office Workstations

    Science.gov (United States)

    2011-04-18

    ... Ethospace office workstations both feature ``frame-and-tile'' construction, which consists of a sturdy steel... respect to the frames, Herman Miller staff roll form rolled steel (coils) from a domestic source into....-sourced tiles, frames, connectors, finished ends, work surfaces, flipper door unit, shelf, task lights...

  15. Some Ideas on the Microcomputer and the Information/Knowledge Workstation.

    Science.gov (United States)

    Boon, J. A.; Pienaar, H.

    1989-01-01

    Identifies the optimal goal of knowledge workstations as the harmony of technology and human decision-making behaviors. Two types of decision-making processes are described and the application of each type to experimental and/or operational situations is discussed. Suggestions for technical solutions to machine-user interfaces are then offered.…

  16. Flexible structure control experiments using a real-time workstation for computer-aided control engineering

    Science.gov (United States)

    Stieber, Michael E.

    1989-01-01

    A Real-Time Workstation for Computer-Aided Control Engineering has been developed jointly by the Communications Research Centre (CRC) and Ruhr-Universitaet Bochum (RUB), West Germany. The system is presently used for the development and experimental verification of control techniques for large space systems with significant structural flexibility. The Real-Time Workstation essentially is an implementation of RUB's extensive Computer-Aided Control Engineering package KEDDC on an INTEL micro-computer running under the RMS real-time operating system. The portable system supports system identification, analysis, control design and simulation, as well as the immediate implementation and test of control systems. The Real-Time Workstation is currently being used by CRC to study control/structure interaction on a ground-based structure called DAISY, whose design was inspired by a reflector antenna. DAISY emulates the dynamics of a large flexible spacecraft with the following characteristics: rigid body modes, many clustered vibration modes with low frequencies and extremely low damping. The Real-Time Workstation was found to be a very powerful tool for experimental studies, supporting control design and simulation, and conducting and evaluating tests within one integrated environment.

  17. [Influence of different lighting levels at workstations with video display terminals on operators' work efficiency].

    Science.gov (United States)

    Janosik, Elzbieta; Grzesik, Jan

    2003-01-01

    The aim of this work was to evaluate the influence of different lighting levels at workstations with video display terminals (VDTs) on the course of the operators' visual work, and to determine the optimal levels of lighting at VDT workstations. For two kinds of job (entry of figures from a typescript and editing of the text displayed on the screen), the work capacity, the degree of the visual strain and the operators' subjective symptoms were determined for four lighting levels (200, 300, 500 and 750 lx). It was found that the work at VDT workstations may overload the visual system and cause eye complaints as well as the reduction of accommodation or convergence strength. It was also noted that editing the text displayed on the screen is more burdensome for operators than entering figures from a typescript. Moreover, the examination results showed that the lighting at VDT workstations should be higher than 200 lx and that 300 lx makes the work conditions most comfortable during the entry of figures from a typescript, and 500 lx during the editing of the text displayed on the screen.

  18. Fast Calibration of Industrial Mobile Robots to Workstations using QR Codes

    DEFF Research Database (Denmark)

    Andersen, Rasmus Skovgaard; Damgaard, Jens Skov; Madsen, Ole

    2013-01-01

    is proposed. With this QR calibration, it is possible to calibrate an AIMM to a workstation in 3D in less than 1 second, which is significantly faster than existing methods. The accuracy of the calibration is ±4 mm. The method is modular in the sense that it directly supports integration and calibration...

  19. Evaluation plan for a cardiological multi-media workstation (I4C project)

    NARCIS (Netherlands)

    Hofstede, J.W. van der; Quak, A.B.; Ginneken, A.M. van; Macfarlane, P.W.; Watson, J.; Hendriks, P.R.; Zeelenberg, C.

    1997-01-01

    The goal of the I4C project (Integration and Communication for the Continuity of Cardiac Care) is to build a multi-media workstation for cardiac care and to assess its impact in the clinical setting. This paper describes the technical evaluation plan for the prototype.

  20. Design considerations for a neuroradiologic picture archival and image processing workstation

    International Nuclear Information System (INIS)

    Fishbein, D.S.

    1986-01-01

    The design and implementation of a small scale image archival and processing workstation for use in the study of digitized neuroradiologic images is described. The system is designed to be easily interfaced to existing equipment (presently PET, NMR and CT), function independent of a central file server, and provide for a versatile image processing environment. (Auth.)

  1. Effects of dynamic workstation Oxidesk on acceptance, physical activity, mental fitness and work performance

    NARCIS (Netherlands)

    Groenesteijn, L.; Commissaris, D.A.C.M.; Berg-Zwetsloot, M. van den; Hiemstra-Van Mastrigt, S.

    2016-01-01

    BACKGROUND: Working in an office environment is characterised by physical inactivity and sedentary behaviour. This behaviour contributes to several health risks in the long run. Dynamic workstations, which allow people to combine desk activities with physical activity, may contribute to prevention of

  2. Micro machining workstation for a diode pumped Nd:YAG high brightness laser system

    NARCIS (Netherlands)

    Kleijhorst, R.A.; Offerhaus, Herman L.; Bant, P.

    1998-01-01

    A Nd:YAG micro-machining workstation that allows cutting on a scale of a few microns has been developed and operated. The system incorporates a telescope viewing system that allows control during the work and a software interface to translate AutoCad files. Some examples of the performance are

  3. Guided Learning at Workstations about Drug Prevention with Low Achievers in Science Education

    Science.gov (United States)

    Thomas, Heyne; Bogner, Franz X.

    2012-01-01

    Our study focussed on the cognitive achievement potential of low achieving eighth graders, dealing with drug prevention (cannabis). The learning process was guided by a teacher, leading this target group towards a modified learning at workstations which is seen as an appropriate approach for low achievers. We compared this specific open teaching…

  4. Treadmill workstations: the effects of walking while working on physical activity and work performance.

    Directory of Open Access Journals (Sweden)

    Avner Ben-Ner

    Full Text Available We conducted a 12-month-long experiment in a financial services company to study how the availability of treadmill workstations affects employees' physical activity and work performance. We enlisted sedentary volunteers, half of whom received treadmill workstations during the first two months of the study and the rest in the seventh month of the study. Participants could operate the treadmills at speeds of 0-2 mph and could use a standard chair-desk arrangement at will. (a) Weekly online performance surveys were administered to participants and their supervisors, as well as to all other sedentary employees and their supervisors. Using within-person statistical analyses, we find that overall work performance, quality and quantity of performance, and interactions with coworkers improved as a result of adoption of treadmill workstations. (b) Participants were outfitted with accelerometers at the start of the study. We find that daily total physical activity increased as a result of the adoption of treadmill workstations.

  5. The BioPhotonics Workstation: from university research to commercial prototype

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    I will outline the specifications of the compact BioPhotonics Workstation we recently have developed that utilizes high-speed spatial light modulation to generate an array of reconfigurable laser-traps making 3D real-time optical manipulation of advanced structures possible with the use of joysti...

  6. Graphics metafile interface to ARAC emergency response models for remote workstation study

    International Nuclear Information System (INIS)

    Lawver, B.S.

    1985-01-01

    The Department of Energy's Atmospheric Release Advisory Capability (ARAC) models are executed on computers at a central computer center with the output distributed to accident advisors in the field. The output of these atmospheric diffusion models is generated as contoured isopleths of concentrations. When these isopleths are overlaid with local geography, they become a useful tool to the accident site advisor. ARAC has developed a workstation that is located at potential accident sites. The workstation allows the accident advisor to view color plots of the model results, scale those plots and print black and white hardcopy of the model results. The graphics metafile, also known as Virtual Device Metafile (VDM), allows the models to generate a single device independent output file that is partitioned into geography, isopleths and labeling information. The metafile is a very compact data storage technique that is output device independent. The metafile frees the model from either generating output for all known graphic devices or requiring the model to be rerun for additional graphic devices. With the partitioned metafile ARAC can transmit to the remote workstation the isopleths and labeling for each model. The geography database may not change and can be transmitted only when needed. This paper describes the important features of the remote workstation and how these features are supported by the device independent graphics metafile

  7. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    Science.gov (United States)

    Ratib, Osman M.; Huang, H. K.

    1990-07-01

    The purpose of this project is to develop a low cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort however on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include geometric and densitometric volumes and ejection fraction calculation from radionuclide and cine-angiograms, Fourier analysis of cardiac wall motion, vascular stenosis measurement, color coded parametric display of regional flow distribution from dynamic coronary angiograms, and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color coded and parametric display methods to communicate quantitative data extracted from the images. 1. Rationale and objectives of the project: Developments of Picture Archiving and Communication Systems (PACS) in clinical environments allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available due in part to their high cost. There is also an increasing need for quantitative analysis of the images. During the past decade

  8. Functionalizing 2PP-fabricated microtools for optical manipulation on the BioPhotonics Workstation

    DEFF Research Database (Denmark)

    Matsuoka, Tomoyo; Nishi, Masayuki; Sakakura, Masaaki

    Functionalization of the structures fabricated by two-photon polymerization was achieved by coating them with sol-gel materials, which contain calcium indicators. The structures are expected to work potentially as nano-sensors on the BioPhotonics Workstation....

  9. The effectiveness of domain balancing strategies on workstation clusters demonstrated by viscous flow problems

    NARCIS (Netherlands)

    Streng, Martin; Streng, M.; ten Cate, Eric; ten Cate, Eric (H.H.); Geurts, Bernardus J.; Kuerten, Johannes G.M.

    1998-01-01

    We consider several aspects of efficient numerical simulation of viscous compressible flow on both homogeneous and heterogeneous workstation-clusters. We consider dedicated systems, as well as clusters operating in a multi-user environment. For dedicated homogeneous clusters, we show that with

  10. Development of an EVA systems cost model. Volume 2: Shuttle orbiter crew and equipment translation concepts and EVA workstation concept development and integration

    Science.gov (United States)

    1975-01-01

    EVA crewman/equipment translational concepts are developed for a shuttle orbiter payload application. Also considered are EVA workstation systems to meet orbiter and payload requirements for integration of workstations into candidate orbiter payload worksites.

  11. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI both target lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable number of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  12. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  13. Implementation of a high-resolution workstation for primary diagnosis of projection radiography images

    Science.gov (United States)

    Good, Walter F.; Herron, John M.; Maitz, Glenn S.; Gur, David; Miller, Stephen L.; Straub, William H.; Fuhrman, Carl R.

    1990-08-01

    We designed and implemented a high-resolution video workstation as the central hardware component in a comprehensive multi-project program comparing the use of digital and film modalities. The workstation utilizes a 1.8 GByte real-time disk (RCI) capable of storing 400 full-resolution images and two Tektronix (GMA251) display controllers with 19" monitors (GMA202). The display is configured in a portrait format with a resolution of 1536 x 2048 x 8 bit, and operates at 75 Hz in a noninterlaced mode. Transmission of data through a 12 to 8 bit lookup table into the display controllers occurs at 20 MBytes/second (.35 seconds per image). The workstation allows brightness (level) and contrast (window) to be adjusted easily with a trackball, and various processing options can be selected using push buttons. Display of any of the 400 images is also performed at 20 MBytes/sec (.35 sec/image). A separate text display provides for the automatic display of patient history data and for a scoring form through which readers can interact with the system by means of a computer mouse. In addition, the workstation provides for the randomization of cases and for the immediate entry of diagnostic responses into a master database. Over the past year this workstation has been used for over 10,000 readings in diagnostic studies related to 1) image resolution; 2) film vs. soft display; 3) incorporation of patient history data into the reading process; and 4) usefulness of image processing.
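
    The brightness (level) and contrast (window) manipulation mentioned above is conventionally implemented as a 12-to-8-bit lookup table; the sketch below shows that general technique with illustrative window and level values, not the authors' code.

      #include <stdio.h>

      int main(void) {
          unsigned char lut[4096];              /* one entry per 12-bit pixel value */
          int level = 2048;                     /* centre of the displayed range    */
          int window = 1024;                    /* width of the displayed range     */
          int lo = level - window / 2, hi = level + window / 2;

          for (int v = 0; v < 4096; v++) {
              if (v <= lo)      lut[v] = 0;
              else if (v >= hi) lut[v] = 255;
              else              lut[v] = (unsigned char)((v - lo) * 255 / window);
          }

          /* a 12-bit pixel is displayed as lut[pixel]; when the trackball changes
           * window or level, only this table needs to be rebuilt */
          printf("pixel 2048 maps to display value %d\n", lut[2048]);
          return 0;
      }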

  14. The driver workstation in commercial vehicles; Ergonomie und Design von Fahrerarbeitsplaetzen in Nutzfahrzeugen

    Energy Technology Data Exchange (ETDEWEB)

    Kraus, W. [HAW-Hamburg (Germany)]

    2003-07-01

    Nowadays, ergonomics and design are quality factors and indispensable elements of commercial vehicle design and development. Whereas a vehicle's appearance, i.e. its outside design, produces fascination and image, the design of its passenger cell focuses entirely on drivers and their tasks. Today, passenger-cell design and the ergonomics of driver workstations in commercial vehicles are clearly becoming more and more important. This article concentrates above all on defining commercial vehicle drivers, which, within the scope of research projects on coach-driver workstations, has provided new insight into the design of driver workstations. In light of the deficits determined, the research project mainly focused on designing driver workstations which were in line with the latest findings in ergonomics and human engineering. References to the methodology of driver-workstation optimization seem important in this context. The afore-mentioned innovations in the passenger cells of commercial vehicles will be explained and described by means of topical and practical examples. (orig.)

  15. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of...... in the optimal O(sort_P(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors.

  16. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
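
    The link between undersampling, aliasing and SENSE-type unfolding can be sketched in one dimension: with acceleration factor R = 2 and two coils, each aliased pixel is a coil-weighted sum of two true pixels, so a 2 x 2 linear system per pixel pair recovers them. The coil sensitivities and object values below are toy numbers, not clinical data.

      #include <stdio.h>

      #define FOV 8                                /* full field of view (pixels) */

      int main(void) {
          double truth[FOV], s1[FOV], s2[FOV];     /* object and coil sensitivities */
          double a1[FOV / 2], a2[FOV / 2];         /* aliased (undersampled) images */
          double recon[FOV];

          for (int y = 0; y < FOV; y++) {
              truth[y] = 10.0 + y;                 /* toy object              */
              s1[y] = 1.0 - 0.05 * y;              /* coil 1 falls off with y */
              s2[y] = 0.6 + 0.05 * y;              /* coil 2 rises with y     */
          }

          /* undersampling by R = 2 folds pixel y and pixel y + FOV/2 together */
          for (int y = 0; y < FOV / 2; y++) {
              a1[y] = s1[y] * truth[y] + s1[y + FOV / 2] * truth[y + FOV / 2];
              a2[y] = s2[y] * truth[y] + s2[y + FOV / 2] * truth[y + FOV / 2];
          }

          /* SENSE-type unfold: invert the 2 x 2 coil-sensitivity system per pair */
          for (int y = 0; y < FOV / 2; y++) {
              int yp = y + FOV / 2;
              double det = s1[y] * s2[yp] - s1[yp] * s2[y];
              recon[y]  = ( s2[yp] * a1[y] - s1[yp] * a2[y]) / det;
              recon[yp] = (-s2[y]  * a1[y] + s1[y]  * a2[y]) / det;
          }

          for (int y = 0; y < FOV; y++)
              printf("true %6.2f  recon %6.2f\n", truth[y], recon[y]);
          return 0;
      }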

  17. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
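
    Two of the patterns named above, the reduction and the prefix scan, map directly onto collective operations; a minimal MPI sketch with illustrative per-process values is:

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          int rank;
          double local, total, prefix;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          local = rank + 1.0;                 /* this process's partial result */

          /* reduction: sum over all processes, delivered to rank 0 */
          MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

          /* prefix scan: each rank receives the sum of ranks 0..rank (inclusive) */
          MPI_Scan(&local, &prefix, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

          if (rank == 0) printf("total = %f\n", total);
          printf("rank %d prefix sum = %f\n", rank, prefix);

          MPI_Finalize();
          return 0;
      }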

  18. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    The Application Portable Parallel Library (APPL) computer program is a subroutine-based message-passing software library intended to provide a consistent interface to a variety of multiprocessor computers on the market today. It minimizes the effort needed to move an application program from one computer to another: the user develops the application program once and then easily moves it from the parallel computer on which it was created to another parallel computer. ("Parallel computer" here also includes a heterogeneous collection of networked computers.) APPL is written in the C language, with one FORTRAN 77 subroutine for UNIX-based computers, and is callable from application programs written in C or FORTRAN 77.

  19. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    Science.gov (United States)

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  20. A portable, parallel, object-oriented Monte Carlo neutron transport code in C++

    International Nuclear Information System (INIS)

    Lee, S.R.; Cummings, J.C.; Nolen, S.D.

    1997-01-01

    We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to, and runs in parallel on, a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed

  1. Modular high-temperature gas-cooled reactor simulation using parallel processors

    International Nuclear Information System (INIS)

    Ball, S.J.; Conklin, J.C.

    1989-01-01

    The MHPP (Modular HTGR Parallel Processor) code has been developed to simulate modular high-temperature gas-cooled reactor (MHTGR) transients and accidents. MHPP incorporates a very detailed model for predicting the dynamics of the reactor core, vessel, and cooling systems over a wide variety of scenarios ranging from expected transients to very-low-probability severe accidents. The simulation routines, which had originally been developed entirely as serial code, were readily adapted to parallel processing Fortran. The resulting parallelized simulation speed was enhanced significantly. Workstation interfaces are being developed to provide for user (operator) interaction. In this paper the benefits realized by adapting previous MHTGR codes to run on a parallel processor are discussed, along with results of typical accident analyses

  2. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  3. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and relative to scalar calculations it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively on the CRAY C-90

  4. Interpretation of digital breast tomosynthesis: preliminary study on comparison with picture archiving and communication system (PACS) and dedicated workstation.

    Science.gov (United States)

    Kim, Young Seon; Chang, Jung Min; Yi, Ann; Shin, Sung Ui; Lee, Myung Eun; Kim, Won Hwa; Cho, Nariya; Moon, Woo Kyung

    2017-08-01

    To compare the diagnostic accuracy and efficiency in the interpretation of digital breast tomosynthesis (DBT) images using a picture archiving and communication system (PACS) and a dedicated workstation. 97 DBT images obtained for screening or diagnostic purposes were stored in both a workstation and a PACS and evaluated in combination with digital mammography by three independent radiologists retrospectively. Breast Imaging-Reporting and Data System final assessments and likelihood of malignancy (%) were assigned and the interpretation time when using the workstation and PACS was recorded. Receiver operating characteristic curve analysis, sensitivities and specificities were compared with histopathological examination and follow-up data as a reference standard. Area under the receiver operating characteristic curve values for cancer detection (0.839 vs 0.815, p = 0.6375) and sensitivity (81.8% vs 75.8%, p = 0.2188) showed no statistically significant differences between the workstation and PACS. However, specificity was significantly higher when analysing on the workstation than when using PACS (83.7% vs 76.9%, p = 0.009). When evaluating DBT images using PACS, only one case was deemed necessary to be reanalysed using the workstation. The mean time to interpret DBT images on PACS (1.68 min/case) was significantly longer than that to interpret on the workstation (1.35 min/case) (p < 0.0001). Interpretation of DBT images using PACS showed comparable diagnostic performance to a dedicated workstation, even though it required a longer reading time. Advances in knowledge: Interpretation of DBT images using PACS is an alternative to evaluate the images when a dedicated workstation is not available.

  5. A scalable approach to modeling groundwater flow on massively parallel computers

    International Nuclear Information System (INIS)

    Ashby, S.F.; Falgout, R.D.; Tompson, A.F.B.

    1995-12-01

    We describe a fully scalable approach to the simulation of groundwater flow on a hierarchy of computing platforms, ranging from workstations to massively parallel computers. Specifically, we advocate the use of scalable conceptual models in which the subsurface model is defined independently of the computational grid on which the simulation takes place. We also describe a scalable multigrid algorithm for computing the groundwater flow velocities. We are thus able to leverage both the engineer's time spent developing the conceptual model and the computing resources used in the numerical simulation. We have successfully employed this approach at the LLNL site, where we have run simulations ranging in size from just a few thousand spatial zones (on workstations) to more than eight million spatial zones (on the CRAY T3D), all using the same conceptual model

  6. Visualization of biomedical image data and irradiation planning using a parallel computing system

    International Nuclear Information System (INIS)

    Lehrig, R.

    1991-01-01

    The contribution explains the development of a novel, low-cost workstation for the processing of biomedical tomographic data sequences. The workstation was to allow both graphical display of the data and implementation of modelling software for irradiation planning, especially for calculation of dose distributions on the basis of the measured tomogram data. The system developed according to these criteria is a parallel computing system which performs secondary, two-dimensional image reconstructions irrespective of the imaging direction of the original tomographic scans. Three-dimensional image reconstructions can be generated from any direction of view, with random selection of sections of the scanned object. (orig./MM) With 69 figs., 2 tabs

  7. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  8. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C^3P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C^3P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C^3P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  9. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
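
    A shared-memory sketch of a parallel sieve is shown below; the hypercube version described above instead distributes the integer range across processing elements, but the marking loop parallelized here is the same work that gets split in that scheme. The bound and the use of OpenMP are illustrative assumptions.

      #include <stdio.h>
      #include <stdlib.h>

      int main(void) {
          const long n = 1000000;               /* upper bound (illustrative) */
          char *composite = calloc(n + 1, 1);
          long count = 0;

          for (long p = 2; p * p <= n; p++) {
              if (composite[p]) continue;
              /* marking the multiples of p is independent work: parallelize it */
              #pragma omp parallel for
              for (long m = p * p; m <= n; m += p)
                  composite[m] = 1;
          }

          for (long i = 2; i <= n; i++)
              if (!composite[i]) count++;

          printf("primes up to %ld: %ld\n", n, count);
          free(composite);
          return 0;
      }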

  10. Parallel FFT using Eden Skeletons

    DEFF Research Database (Denmark)

    Berthold, Jost; Dieterle, Mischa; Lobachev, Oleg

    2009-01-01

    The paper investigates and compares skeleton-based Eden implementations of different FFT-algorithms on workstation clusters with distributed memory. Our experiments show that the basic divide-and-conquer versions suffer from an inherent input distribution and result collection problem. Advanced...

  11. Portable, parallel, reusable Krylov space codes

    Energy Technology Data Exchange (ETDEWEB)

    Smith, B.; Gropp, W. [Argonne National Lab., IL (United States)]

    1994-12-31

    Krylov space accelerators are an important component of many algorithms for the iterative solution of linear systems. Each Krylov space method has its own particular advantages and disadvantages; therefore it is desirable to have a variety of them available, all with an identical, easy to use, interface. A common complaint application programmers have with available software libraries for the iterative solution of linear systems is that they require the programmer to use the data structures provided by the library. The library is not able to work with the data structures of the application code. Hence, application programmers find themselves constantly recoding the Krylov space algorithms. The Krylov space package (KSP) is a data-structure-neutral implementation of a variety of Krylov space methods including preconditioned conjugate gradient, GMRES, BiCG-Stab, transpose free QMR and CGS. Unlike all other software libraries for linear systems that the authors are aware of, KSP will work with any application code's data structures, in Fortran or C. Due to its data-structure-neutral design KSP runs unchanged on both sequential and parallel machines. KSP has been tested on workstations, the Intel i860 and Paragon, Thinking Machines CM-5 and the IBM SP1.
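
    The data-structure-neutral idea can be sketched with a toy conjugate gradient that touches the matrix only through a user-supplied callback, so the application keeps its own storage format. This is not the actual KSP interface; the names and the 4 x 4 test system are illustrative.

      #include <stdio.h>
      #include <string.h>

      #define N 4

      typedef void (*MatVec)(const double *x, double *y, void *ctx);

      static double dot(const double *a, const double *b) {
          double s = 0.0;
          for (int i = 0; i < N; i++) s += a[i] * b[i];
          return s;
      }

      /* conjugate gradient for a symmetric positive definite operator,
       * accessed only through the matvec callback */
      static void cg(MatVec matvec, void *ctx, const double *b, double *x, int iters) {
          double r[N], p[N], Ap[N];
          memset(x, 0, N * sizeof *x);
          memcpy(r, b, N * sizeof *r);
          memcpy(p, b, N * sizeof *p);
          for (int k = 0; k < iters; k++) {
              double rr_old = dot(r, r);
              if (rr_old < 1e-24) break;                 /* converged */
              matvec(p, Ap, ctx);
              double alpha = rr_old / dot(p, Ap);
              for (int i = 0; i < N; i++) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
              double beta = dot(r, r) / rr_old;
              for (int i = 0; i < N; i++) p[i] = r[i] + beta * p[i];
          }
      }

      /* the application's matrix lives in its own format (here: a plain 2D array) */
      static void my_matvec(const double *x, double *y, void *ctx) {
          double (*A)[N] = ctx;
          for (int i = 0; i < N; i++) {
              y[i] = 0.0;
              for (int j = 0; j < N; j++) y[i] += A[i][j] * x[j];
          }
      }

      int main(void) {
          double A[N][N] = {{4,1,0,0},{1,4,1,0},{0,1,4,1},{0,0,1,4}};
          double b[N] = {1, 2, 3, 4}, x[N];
          cg(my_matvec, A, b, x, 20);
          for (int i = 0; i < N; i++) printf("x[%d] = %f\n", i, x[i]);
          return 0;
      }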

  12. Parallel computing in genomic research: advances and applications.

    Science.gov (United States)

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied for reducing the total processing time and to ease the management, treatment, and analyses of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article brings a systematic review of literature that surveys the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities.

  13. Event analysis using a massively parallel processor

    International Nuclear Information System (INIS)

    Bale, A.; Gerelle, E.; Messersmith, J.; Warren, R.; Hoek, J.

    1990-01-01

    This paper describes a system for performing histogramming of n-tuple data at interactive rates using a commercial SIMD processor array connected to a workstation running the well-known Physics Analysis Workstation software (PAW). Results indicate that an order-of-magnitude performance improvement over current RISC technology is easily achievable.

  14. A comparison between digital images viewed on a picture archiving and communication system diagnostic workstation and on a PC-based remote viewing system by emergency physicians.

    Science.gov (United States)

    Parasyn, A; Hanson, R M; Peat, J K; De Silva, M

    1998-02-01

    Picture Archiving and Communication Systems (PACS) make it possible to view radiographic images on computer workstations located where clinical care is delivered. By the nature of their work, this feature is particularly useful for emergency physicians, who view radiographic studies for information and use them to explain results to patients and their families. However, the high cost of full-functionality PACS diagnostic workstations limits the number of workstations in the emergency department and therefore their accessibility. This study was undertaken to establish how well less expensive personal computer-based workstations would support these needs of emergency physicians. The study compared the outcome of observations by 5 emergency physicians on a series of radiographic studies containing subtle abnormalities, displayed both on a PACS diagnostic workstation and on a PC-based workstation. The 73 digitized radiographic studies were randomly arranged on both types of workstation over four separate viewing sessions for each emergency physician. There was no statistically significant difference between the PACS diagnostic workstation and the PC-based workstation in this trial. The mean correct ratings were 59% on the PACS diagnostic workstations and 61% on the PC-based workstations. These findings also emphasize the need for prompt reporting by a radiologist.

  15. Design of the HANARO operator workstation having the enhanced usability and data handling capability

    International Nuclear Information System (INIS)

    Kim, M. J.; Kim, Y. K.; Jung, H. S.; Choi, Y. S.; Woo, J. S.; Jeon, B. J.

    2003-01-01

    As a first step in the upgrade plan for the HANARO reactor control computer system, we furnished an IBM workstation-class PC to replace the existing operator workstation, a dedicated HMI console. We also designed a new human-machine interface using commercial HMI development software running on MS-Windows. We expect that we will no longer have difficulties in procuring replacement parts and maintaining the hardware. In this paper, we introduce the features of the new interface, which retains the virtues of the existing design and corrects its shortcomings to enable safe and efficient reactor operation. Also described are the functionality of the historian server, which provides simpler storage, retrieval and search operations, and the design of the trend display screen that replaces the existing chart recorder by using the dual-monitor feature of the PC graphics card.

  16. Experience in using workstations as hosts in an accelerator control environment

    International Nuclear Information System (INIS)

    Abola, A.; Casella, R.; Clifford, T.; Hoff, L.; Katz, R.; Kennell, S.; Mandell, S.; McBreen, E.; Weygand, D.P.

    1987-01-01

    A new control system has been used for light ion acceleration at the Alternating Gradient Synchrotron (AGS). The control system uses Apollo workstations in the dual role of console hardware computer and controls system host. It has been found that having a powerful dedicated CPU with a demand paging virtual memory OS featuring strong interprocess communication, mapped memory shared files, shared code, and multi-window capabilities, allows us to provide an efficient operation environment in which users may view and manage several control processes simultaneously. The same features which make workstations good console computers also provide an outstanding platform for code development. The software for the system, consisting of about 30K lines of ''C'' code, was developed on schedule, ready for light ion commissioning. System development is continuing with work being done on applications programs

  17. Design of a Workstation for People with Upper-Limb Disabilities Using a Brain Computer Interface

    Directory of Open Access Journals (Sweden)

    John E. Muñoz-Cardona

    2013-11-01

    Full Text Available This paper presents the design of a workstation for the workplace inclusion of people with upper-limb disabilities. The system uses a novel brain-computer interface to mediate the user-computer interaction. Our objective is to elucidate the functional, technological, ergonomic and procedural aspects of operating the workstation, with the aim of removing the barriers that prevent people with disabilities from accessing ICT tools and work. We found that ergonomic accessibility, adaptability and portability of the workstation are the most important design criteria. A prototype implementation in a workplace environment has an estimated TIR of 43% for recovery. Finally, we list a typology of services that could be the most appropriate for the process of labor inclusion: telemarketing, telesales, telephone surveys, order taking, social assistance in disasters, general information and inquiries, reservations at tourist sites, technical support, emergency, online support and after-sales services.

  18. Experience in using workstations as hosts in an accelerator control environment

    International Nuclear Information System (INIS)

    Abola, A.; Casella, R.; Clifford, T.; Hoff, L.; Katz, R.; Kennell, S.; Mandell, S.; McBreen, E.; Weygand, D.P.

    1987-03-01

    A new control system has been used for light ion acceleration at the Alternating Gradient Synchrotron (AGS). The control system uses Apollo workstations in the dual role of console hardware computer and controls system host. It has been found that having a powerful dedicated CPU with a demand paging virtual memory OS featuring strong interprocess communication, mapped memory shared files, shared code, and multi-window capabilities, allows us to provide an efficient operation environment in which users may view and manage several control processes simultaneously. The same features which make workstations good console computers also provide an outstanding platform for code development. The software for the system, consisting of about 30K lines of ''C'' code, was developed on schedule, ready for light ion commissioning. System development is continuing with work being done on applications programs

  19. Utilization of a multimedia PACS workstation for surgical planning of epilepsy

    Science.gov (United States)

    Soo Hoo, Kent; Wong, Stephen T.; Hawkins, Randall A.; Knowlton, Robert C.; Laxer, Kenneth D.; Rowley, Howard A.

    1997-05-01

    Surgical treatment of temporal lobe epilepsy requires the localization of the epileptogenic zone for surgical resection. Currently, clinicians use electroencephalography, various neuroimaging modalities, and psychological tests together to determine the location of this zone. We investigate how a multimedia neuroimaging workstation built on top of the UCSF Picture Archiving and Communication System can be used to aid surgical planning of epilepsy and related brain diseases. This usage demonstrates the ability of the workstation to retrieve image and textual data from PACS and other image sources, register multimodality images, visualize and render 3D data sets, analyze images, generate new image and text data from the analysis, and organize all data in a relational database management system.

  20. Effect of immediate feedback training on observer performance on a digital radiology workstation

    International Nuclear Information System (INIS)

    Mc Neill, K.M.; Maloney, K.; Elam, E.A.; Hillman, B.J.; Witzke, D.B.

    1990-01-01

    This paper reports on testing the hypothesis that training radiologists on a digital workstation would affect their efficiency and subjective acceptance of radiologic interpretation based on images shown on a cathode ray tube (CRT). Using a digital radiology workstation, six faculty radiologists and four senior residents read seven groups of six images each. In each group, three images were ranked as easy and three were ranked as difficult. All images were abnormal posteroanterior chest radiographs. On display of each image, the observer was asked which findings were present. After the observer listed his or her findings, the experimenter listed any findings not mentioned and pointed out any incorrect findings. The time to each finding was recorded for each image, along with the number of corrections and missed findings. A postexperiment questionnaire was given to obtain subjective responses from the observers.

  1. The development of a Flight Test Engineer's Workstation for the Automated Flight Test Management System

    Science.gov (United States)

    Tartt, David M.; Hewett, Marle D.; Duke, Eugene L.; Cooper, James A.; Brumbaugh, Randal W.

    1989-01-01

    The Automated Flight Test Management System (ATMS) is being developed as part of the NASA Aircraft Automation Program. This program focuses on the application of interdisciplinary state-of-the-art technology in artificial intelligence, control theory, and systems methodology to problems of operating and flight testing high-performance aircraft. The development of a Flight Test Engineer's Workstation (FTEWS) is presented, with a detailed description of the system, technical details, and future planned developments. The goal of the FTEWS is to provide flight test engineers and project officers with an automated computer environment for planning, scheduling, and performing flight test programs. The FTEWS system is an outgrowth of the development of ATMS and is an implementation of a component of ATMS on SUN workstations.

  2. Computer modeling and design of diagnostic workstations and radiology reading rooms

    Science.gov (United States)

    Ratib, Osman M.; Amato, Carlos L.; Balbona, Joseph A.; Boots, Kevin; Valentino, Daniel J.

    2000-05-01

    We used 3D modeling techniques to design and evaluate the ergonomics of diagnostic workstations and the radiology reading room in the planning phase of building a new hospital at UCLA. Given serious space limitations, the challenge was to provide a more optimal working environment for radiologists in a crowded and busy department. Particular attention was given to flexibility, lighting conditions and noise reduction in rooms shared by multiple users performing diagnostic tasks as well as regular clinical conferences. Re-engineering workspace ergonomics relies on the integration of new technologies, custom-designed cabinets, indirect lighting, sound-absorbent partitioning and geometric arrangement of workstations to allow better privacy while optimizing space occupation. Innovations included adjustable flat monitors, integration of videoconferencing and voice recognition, and a control monitor with retractable keyboard for optimal space utilization. An overhead compartment protecting the monitors from ambient light is also used as an accessory lightbox and rear-view projection screen for conferences.

  3. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.

    1991-11-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization Plans for Word Processors, Personal Computers, Workstations, and Associated Software to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference that documents the plans of each organization for office automation, identifies appropriate planners and other contact people in those organizations, and encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan.

  4. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software. Revision 3

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.

    1991-11-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization Plans for Word Processors, Personal Computers, Workstations, and Associated Software to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference that documents the plans of each organization for office automation, identifies appropriate planners and other contact people in those organizations, and encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan.

  5. Effects of dynamic workstation Oxidesk on acceptance, physical activity, mental fitness and work performance.

    Science.gov (United States)

    Groenesteijn, L; Commissaris, D A C M; Van den Berg-Zwetsloot, M; Hiemstra-Van Mastrigt, S

    2016-07-19

    Working in an office environment is characterised by physical inactivity and sedentary behaviour. This behaviour contributes to several health risks in the long run. Dynamic workstations, which allow people to combine desk activities with physical activity, may contribute to the prevention of these health risks. A dynamic workstation, called Oxidesk, was evaluated to determine its possible contribution to healthy behaviour and its impact on perceived work performance. A field test was conducted with 22 office workers employed at a health insurance company in the Netherlands. The Oxidesk was well accepted, positively perceived for fitness, and the participants maintained their work performance. Physical activity was lower than the activity level required by the Dutch guidelines for sufficient physical activity. Although there was only a slight increase in physical activity, the Oxidesk may be helpful in reducing the health risks involved and seems suitable for introduction into office environments.

  6. Nuclear power plant simulation using advanced simulation codes through a state-of-the-art workstation

    International Nuclear Information System (INIS)

    Laats, E.T.; Hagen, R.N.

    1985-01-01

    The Nuclear Plant Analyzer (NPA) currently resides in a Control Data Corporation 176 mainframe computer at the Idaho National Engineering Laboratory (INEL). The NPA user community is expanding to include worldwide users who cannot consistently access the INEL mainframe computer from their own facilities. Thus, an alternate mechanism is needed to enable their use of the NPA. Therefore, a feasibility study was undertaken by EG and G Idaho to evaluate the possibility of developing a standalone workstation dedicated to the NPA

  7. The safety monitor and RCM workstation as complementary tools in risk based maintenance optimization

    International Nuclear Information System (INIS)

    Rawson, P.D.

    2000-01-01

    Reliability Centred Maintenance (RCM) represents a proven technique for rendering maintenance activities safer, more effective, and less expensive, in terms of systems unavailability and resource management. However, it is believed that RCM can be enhanced by the additional consideration of operational plant risk. This paper discusses how two computer-based tools, i.e., the RCM Workstation and the Safety Monitor, can complement each other in helping to create a living preventive maintenance strategy. (author)

  8. Active Workstations Do Not Impair Executive Function in Young and Middle-Age Adults.

    Science.gov (United States)

    Ehmann, Peter J; Brush, Christopher J; Olson, Ryan L; Bhatt, Shivang N; Banu, Andrea H; Alderman, Brandon L

    2017-05-01

    This study aimed to examine the effects of self-selected low-intensity walking on an active workstation on executive functions (EF) in young and middle-age adults. Using a within-subjects design, 32 young (20.6 ± 2.0 yr) and 26 middle-age (45.6 ± 11.8 yr) adults performed low-intensity treadmill walking and seated control conditions in randomized order on separate days, while completing an EF test battery. EF was assessed using modified versions of the Stroop (inhibition), Sternberg (working memory), Wisconsin Card Sorting (cognitive flexibility), and Tower of London (global EF) cognitive tasks. Behavioral performance outcomes were assessed using composite task z-scores and traditional measures of reaction time and accuracy. Average HR and step count were also measured throughout. The expected task difficulty effects were found for reaction time and accuracy. No significant main effects or interactions as a function of treadmill walking were found for tasks assessing global EF and the three individual EF domains. Accuracy on the Tower of London task was slightly impaired during slow treadmill walking for both age-groups. Middle-age adults displayed longer planning times for more difficult conditions of the Tower of London during walking compared with sitting. A 50-min session of low-intensity treadmill walking on an active workstation resulted in accruing approximately 4500 steps. These findings suggest that executive function performance remains relatively unaffected while walking on an active workstation, further supporting the use of treadmill workstations as an effective approach to increase physical activity and reduce sedentary time in the workplace.

  9. Networking issues---Lan and Wan needs---The impact of workstations

    International Nuclear Information System (INIS)

    Harvey, J.

    1990-01-01

    This review focuses on the use of networks in the LEP experiments at CERN. The role of the extended LAN at CERN is discussed in some detail, with particular emphasis on the impact the sudden growth in the use of workstations is having. The problem of network congestion is highlighted and possible evolution to FDDI mentioned. The status and use of the wide area connections are also reported

  10. Workplace sitting and height-adjustable workstations: a randomized controlled trial.

    Science.gov (United States)

    Neuhaus, Maike; Healy, Genevieve N; Dunstan, David W; Owen, Neville; Eakin, Elizabeth G

    2014-01-01

    Desk-based office employees sit for most of their working day. To address excessive sitting as a newly identified health risk, best practice frameworks suggest a multi-component approach. However, these approaches are resource intensive and knowledge about their impact is limited. The aim was to compare the efficacy of a multi-component intervention to reduce workplace sitting time, to a height-adjustable workstations-only intervention, and to a comparison group (usual practice). Three-arm quasi-randomized controlled trial in three separate administrative units of the University of Queensland, Brisbane, Australia. Data were collected between January and June 2012 and analyzed the same year. Desk-based office workers aged 20-65 (multi-component intervention, n=16; workstations-only, n=14; comparison, n=14). The multi-component intervention comprised installation of height-adjustable workstations and organizational-level (management consultation, staff education, manager e-mails to staff) and individual-level (face-to-face coaching, telephone support) elements. Workplace sitting time (minutes/8-hour workday) was assessed objectively via activPAL3 devices worn for 7 days at baseline and 3 months (end of intervention). At baseline, the mean proportion of workplace sitting time was approximately 77% across all groups (multi-component group 366 minutes/8 hours [SD=49]; workstations-only group 373 minutes/8 hours [SD=36]; comparison 365 minutes/8 hours [SD=54]). Following the intervention and relative to the comparison group, workplace sitting time in the multi-component group was reduced by 89 minutes/8-hour workday (95% CI=-130, -47 minutes). These findings may have important practical and financial implications for workplaces targeting sitting time reductions. Australian New Zealand Clinical Trials Registry 00363297. © 2013 American Journal of Preventive Medicine. Published by American Journal of Preventive Medicine. All rights reserved.

  11. Energy consumption of workstations and external devices in school of business and information technology

    OpenAIRE

    Koret, Jere

    2012-01-01

    The purpose of this thesis was to measure the energy consumption of workstations and external devices in the School of Business and Information Technology and to search for possible solutions to reduce electricity consumption. The commissioner for the thesis was the Oulu University of Applied Sciences School of Business and Information Management unit. The reason for the study is that the School of Business and Information Management has an environmental plan which is based on ISO standard 14001 and this t...

  12. Comparison of computer workstation with light box for detecting setup errors from portal images

    International Nuclear Information System (INIS)

    Boxwala, Aziz A.; Chaney, Edward L.; Fritsch, Daniel S.; Raghavan, Suraj; Coffey, Christopher S.; Major, Stacey A.; Muller, Keith E.

    1999-01-01

    Purpose: Observer studies were conducted to test the hypothesis that radiation oncologists using a computer workstation for portal image analysis can detect setup errors at least as accurately as when following standard clinical practice of inspecting portal films on a light box. Methods and Materials: In a controlled observer study, nine radiation oncologists used a computer workstation, called PortFolio, to detect setup errors in 40 realistic digitally reconstructed portal radiograph (DRPR) images. PortFolio is a prototype workstation for radiation oncologists to display and inspect digital portal images for setup errors. PortFolio includes tools for image enhancement; alignment of crosshairs, field edges, and anatomic structures on reference and acquired images; measurement of distances and angles; and viewing registered images superimposed on one another. The test DRPRs contained known in-plane translation or rotation errors in the placement of the fields over target regions in the pelvis and head. Test images used in the study were also printed on film for observers to view on a light box and interpret using standard clinical practice. The mean accuracy for error detection for each approach was measured and the results were compared using repeated measures analysis of variance (ANOVA) with the Geisser-Greenhouse test statistic. Results: The results indicate that radiation oncologists participating in this study could detect and quantify in-plane rotation and translation errors more accurately with PortFolio compared to standard clinical practice. Conclusions: Based on the results of this limited study, it is reasonable to conclude that workstations similar to PortFolio can be used efficaciously in clinical practice

  13. Issues about home computer workstations and primary school children in Hong Kong: a pilot study.

    Science.gov (United States)

    Py Szeto, Grace; Tsui, Macy Mei Sze; Sze, Winky Wing Yu; Chan, Irene Sin Ting; Chung, Cyrus Chak Fai; Lee, Felix Wai Kit

    2014-01-01

    All around the world, there is a rising trend of computer use among young children especially at home; yet the computer furniture is usually not designed specifically for children's use. In Hong Kong, this creates an even greater problem as most people live in very small apartments in high-rise buildings. Most of the past research literature is focused on computer use in children in the school environment and not about the home setting. The present pilot study aimed to examine ergonomic issues in children's use of computers at home in Hong Kong, which has some unique home environmental issues. Fifteen children (six male, nine female) aged 8-11 years and their parents were recruited by convenience sampling. Participants were asked to provide information on their computer use habits and related musculoskeletal symptoms. Participants were photographed when sitting at the computer workstation in their usual postures and joint angles were measured. The participants used computers frequently for less than two hours daily and the majority shared their workstations with other family members. Computer furniture was designed more for adult use and a mismatch of furniture and body size was found. Ergonomic issues included inappropriate positioning of the display screen, keyboard, and mouse, as well as lack of forearm support and suitable backrest. These led to awkward or constrained postures while some postural problems may be habitual. Three participants reported neck and shoulder discomfort in the past 12 months and 4 reported computer-related discomfort. Inappropriate computer workstation settings may have adverse effects on children's postures. More research on workstation setup at home, where children may use their computers the most, is needed.

  14. Parallelized Bayesian inversion for three-dimensional dental X-ray imaging.

    Science.gov (United States)

    Kolehmainen, Ville; Vanne, Antti; Siltanen, Samuli; Järvenpää, Seppo; Kaipio, Jari P; Lassas, Matti; Kalke, Martti

    2006-02-01

    Diagnostic and operational tasks based on dental radiology often require three-dimensional (3-D) information that is not available in a single X-ray projection image. Comprehensive 3-D information about tissues can be obtained by computerized tomography (CT) imaging. However, in dental imaging a conventional CT scan may not be available or practical because of high radiation dose, low resolution, or the cost of the CT scanner equipment. In this paper, we consider a novel type of 3-D imaging modality for dental radiology. We consider situations in which projection images of the teeth are taken from a few sparsely distributed projection directions using the dentist's regular (digital) X-ray equipment and the 3-D X-ray attenuation function is reconstructed. A complication in these experiments is that the reconstruction of the 3-D structure based on a few projection images becomes an ill-posed inverse problem. Bayesian inversion is a well-suited framework for reconstruction from such incomplete data. In Bayesian inversion, the ill-posed reconstruction problem is formulated in a well-posed probabilistic form in which a priori information is used to compensate for the incomplete information of the projection data. In this paper we propose a Bayesian method for 3-D reconstruction in dental radiology. The method is partially based on Kolehmainen et al. 2003. The prior model for dental structures consists of a weighted l1 and total variation (TV) prior together with the positivity prior. The inverse problem is stated as finding the maximum a posteriori (MAP) estimate. To make the 3-D reconstruction computationally feasible, a parallelized version of an optimization algorithm is implemented for a Beowulf cluster computer. The method is tested with projection data from dental specimens and patient data. Tomosynthetic reconstructions are given as reference for the proposed method.
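    A schematic form of the MAP estimation problem described above, written here with generic weights; the exact functional and parameterization used by Kolehmainen et al. may differ:

```latex
\hat{x}_{\mathrm{MAP}}
  = \arg\min_{x \ge 0}
    \Big\{ \tfrac{1}{2\sigma^{2}} \lVert A x - m \rVert_{2}^{2}
           + \alpha \lVert x \rVert_{1}
           + \beta\, \mathrm{TV}(x) \Big\},
\qquad
\mathrm{TV}(x) = \sum_{j} \lVert (\nabla x)_{j} \rVert_{2}
```

    Here x is the discretized 3-D attenuation function, A the sparse-angle projection operator and m the measured projection data; the positivity constraint together with the weighted l1 and TV terms plays the role of the prior, and it is the optimization of a functional of this kind that is parallelized on the Beowulf cluster.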

  15. Computed radiography and the workstation in a study of the cervical spine. Technical and cost implications

    International Nuclear Information System (INIS)

    Garcia, J. M.; Lopez-Galiacho, N.; Martinez, M.

    1999-01-01

    To demonstrate the advantages of computed radiography and the workstation in assessing the images acquired in a study of the cervical spine. Lateral projections of the cervical spine obtained using a computed radiography system in 63 ambulatory patients were studied at a workstation. Images of the tip of the odontoid process, C1-C2, basion-opisthion and C7 were visualized prior to and after their transmission and processing, and the overall improvement in their diagnostic quality was assessed. The rate of detection of the tip of the odontoid process, C1-C2, the foramen magnum and C7 increased by 17, 6, 11 and 14 percentage points, respectively. Image processing improved the diagnostic quality in over 75% of cases. Image processing at a workstation improved the visualization of the anatomical points being studied and the diagnostic quality of the images. These advantages, as well as the possibility of transferring the images to a picture archiving and communication system (PACS), are convincing reasons for using digital radiography. (Author) 7 refs

  16. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru

    2008-03-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using the helical CT scanner employed for lung cancer mass screening. Functions to observe suspicious shadows in detail are provided in a computer-aided diagnosis workstation together with these screening algorithms. We have also developed a telemedicine network using a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system and a biometric face authentication system. Biometric face authentication used at the telemedicine site makes file encryption and login verification effective. As a result, patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, using the computer-aided diagnosis workstation and our telemedicine network system, can increase diagnostic speed and diagnostic accuracy while improving the security of medical information.

  17. A user interface on networked workstations for MFTF-B plasma diagnostic instruments

    International Nuclear Information System (INIS)

    Balch, T.R.; Renbarger, V.L.

    1986-01-01

    A network of Sun-2/170 workstations is used to provide an interface to the MFTF-B Plasma Diagnostics System at Lawrence Livermore National Laboratory. The Plasma Diagnostics System (PDS) is responsible for control of MFTF-B plasma diagnostic instrumentation. An EtherNet Local Area Network links the workstations to a central multiprocessing system which furnishes data processing, data storage and control services for PDS. These workstations permit a physicist to command data acquisition, data processing, instrument control, and display of results. The interface is implemented as a metaphorical desktop, which helps the operator form a mental model of how the system works. As on a real desktop, functions are provided by sheets of paper (windows on a CRT screen) called worksheets. The worksheets may be invoked by pop-up menus and may be manipulated with a mouse. These worksheets are actually tasks that communicate with other tasks running in the central computer system. By making entries in the appropriate worksheet, a physicist may specify data acquisition or processing, control a diagnostic, or view a result

  18. A real-time monitoring/emergency response modeling workstation for a tritium facility

    International Nuclear Information System (INIS)

    Lawver, B.S.; Sims, J.M.; Baskett, R.L.

    1993-07-01

    At Lawrence Livermore National Laboratory (LLNL) we developed a real-time system to monitor two stacks on our tritium handling facility. The monitors transmit the stack data to a workstation which computes a 3D numerical model of atmospheric dispersion. The workstation also collects surface and upper air data from meteorological towers and a sodar. The complex meteorological and terrain setting in the Livermore Valley demands more sophisticated resolution of the three-dimensional structure of the atmosphere to reliably calculate plume dispersion than afforded by Gaussian models. We experience both mountain valley and sea breeze flows. To address these complexities, we have implemented the three-dimensional diagnostic MATHEW mass-adjusted wind field and ADPIC particle-in-cell dispersion models on the workstation for use in real-time emergency response modeling. Both MATHEW and ADPIC have shown their utility in a variety of complex settings over the last 15 years within the Department of Energy's Atmospheric Release Advisory Capability (ARAC[1,2]) project

  19. A bench-top automated workstation for nucleic acid isolation from clinical sample types.

    Science.gov (United States)

    Thakore, Nitu; Garber, Steve; Bueno, Arial; Qu, Peter; Norville, Ryan; Villanueva, Michael; Chandler, Darrell P; Holmberg, Rebecca; Cooney, Christopher G

    2018-04-18

    Systems that automate extraction of nucleic acid from cells or viruses in complex clinical matrices have tremendous value even in the absence of an integrated downstream detector. We describe our bench-top automated workstation that integrates our previously-reported extraction method - TruTip - with our newly-developed mechanical lysis method. This is the first report of this method for homogenizing viscous and heterogeneous samples and lysing difficult-to-disrupt cells using "MagVor": a rotating magnet that rotates a miniature stir disk amidst glass beads confined inside of a disposable tube. Using this system, we demonstrate automated nucleic acid extraction from methicillin-resistant Staphylococcus aureus (MRSA) in nasopharyngeal aspirate (NPA), influenza A in nasopharyngeal swabs (NPS), human genomic DNA from whole blood, and Mycobacterium tuberculosis in NPA. The automated workstation yields nucleic acid with comparable extraction efficiency to manual protocols, which include commercially-available Qiagen spin column kits, across each of these sample types. This work expands the scope of applications beyond previous reports of TruTip to include difficult-to-disrupt cell types and automates the process, including a method for removal of organics, inside a compact bench-top workstation. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface

    Science.gov (United States)

    Ratib, Osman M.; Huang, H. K.

    1989-05-01

    A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user interface, its popularity among the academic and medical community, and its low cost. In comparison to other microcomputer-based systems, the Macintosh II offers the following advantages: the extreme standardization of its user interface, file system and networking, and the availability of a very large variety of commercial software packages. In the current configuration the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using the standard TCP/IP protocol and stored locally on magnetic disk. The use of high-resolution screens (1024x768 pixels x 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images, and scintigraphic images.

  1. Applying human factors to the design of control centre and workstation of a nuclear reactor

    International Nuclear Information System (INIS)

    Santos, Isaac J.A. Luquetti dos; Carvalho, Paulo V.R.; Goncalves, Gabriel de L.; Souza, Tamara D.M.F.; Falcao, Mariana A.

    2013-01-01

    Human factors is a body of scientific facts about human characteristics, covering biomedical, psychological and psychosocial considerations, including principles and applications in the areas of personnel selection, training, job performance aid tools and human performance evaluation. A control centre is a combination of control rooms, control suites and local control stations which are functionally related and all on the same site. A digital control room includes an arrangement of systems, equipment such as computers and communication terminals, and workstations at which control and monitoring functions are conducted by operators. Inadequate integration between the control room and operators reduces safety, increases operational complexity, complicates operator training and increases the likelihood of human errors. The objective of this paper is to present a specific approach for the conceptual and basic design of the control centre and workstation of a nuclear reactor used to produce radioisotopes. The approach is based on human factors standards and guidelines and on the participation of a multidisciplinary team in the conceptual and basic phases of the design. Using the information gathered from the standards and from the multidisciplinary team, an initial 3D sketch of the control centre and workstation is being developed. (author)

  2. Applying human factors to the design of control centre and workstation of a nuclear reactor

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Isaac J.A. Luquetti dos; Carvalho, Paulo V.R.; Goncalves, Gabriel de L., E-mail: luquetti@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Souza, Tamara D.M.F.; Falcao, Mariana A. [Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ (Brazil). Dept. de Desenho Industrial

    2013-07-01

    Human factors is a body of scientific facts about human characteristics, covering biomedical, psychological and psychosocial considerations, including principles and applications in the areas of personnel selection, training, job performance aid tools and human performance evaluation. A control centre is a combination of control rooms, control suites and local control stations which are functionally related and all on the same site. A digital control room includes an arrangement of systems, equipment such as computers and communication terminals, and workstations at which control and monitoring functions are conducted by operators. Inadequate integration between the control room and operators reduces safety, increases operational complexity, complicates operator training and increases the likelihood of human errors. The objective of this paper is to present a specific approach for the conceptual and basic design of the control centre and workstation of a nuclear reactor used to produce radioisotopes. The approach is based on human factors standards and guidelines and on the participation of a multidisciplinary team in the conceptual and basic phases of the design. Using the information gathered from the standards and from the multidisciplinary team, an initial 3D sketch of the control centre and workstation is being developed. (author)

  3. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  4. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  5. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
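    As a concrete anchor for one of the algorithms named above, the per-pixel SENSE unfolding step has the standard textbook form below; the notation is generic and not specific to this review:

```latex
a_c = \sum_{k=1}^{R} s_c(x_k)\,\rho(x_k), \qquad c = 1,\dots,N_{\mathrm{coil}},
\qquad\Longrightarrow\qquad
\hat{\boldsymbol{\rho}} = \left(S^{H}\Psi^{-1}S\right)^{-1} S^{H}\Psi^{-1}\mathbf{a}
```

    For reduction factor R, the R true pixel values rho(x_k) that fold onto one measured aliased pixel a_c are recovered from the coil sensitivities s_c collected in the matrix S, with Psi the receiver noise covariance; the g-factor mentioned above quantifies the noise amplification incurred by this inversion.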

  6. Making the PACS workstation a browser of image processing software: a feasibility study using inter-process communication techniques.

    Science.gov (United States)

    Wang, Chunliang; Ritter, Felix; Smedby, Orjan

    2010-07-01

    To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file while the image-processing software runs in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and the stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image-processing software developer does not need detailed information about the other system, but will still be able to achieve seamless integration between the two systems, and the IPC procedure is totally transparent to the final user. A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image-processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image-processing networks. Image data transfer through shared memory, combined with message-based communication built on IPC techniques, is an appealing method that allows PACS workstation developers and image-processing software developers to cooperate while focusing on different interests.
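    To make the shared-memory side of the IPC idea concrete, here is a minimal POSIX sketch of one process publishing an image buffer that a second process could map without copying the pixels through a pipe; the segment name, image dimensions and hand-off protocol are invented for illustration and are not the actual OsiriX/MeVisLab mechanism.

```c
/* Hypothetical shared-memory hand-off between a viewer and an external
 * image-processing server: pixels live in a named POSIX shared-memory
 * segment, and only small control messages would travel over a socket. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/pacs_image_demo"   /* illustrative segment name */
#define WIDTH  512
#define HEIGHT 512

int main(void)
{
    size_t bytes = (size_t)WIDTH * HEIGHT * sizeof(short);

    /* Workstation side: create the segment and fill it with pixel data. */
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, (off_t)bytes) != 0) { perror("shm"); return 1; }

    short *pixels = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
    if (pixels == MAP_FAILED) { perror("mmap"); return 1; }
    memset(pixels, 0, bytes);          /* stand-in for one image frame */

    /* A separate processing server would shm_open(SHM_NAME, O_RDWR, 0),
     * mmap the same segment, run its filter in place, and report back
     * over a socket or pipe carrying only parameters, not pixels.      */
    printf("published %dx%d image (%zu bytes) in %s\n",
           WIDTH, HEIGHT, bytes, SHM_NAME);

    munmap(pixels, bytes);
    close(fd);
    shm_unlink(SHM_NAME);
    return 0;
}
```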

  7. A standardized non-instrumental tool for characterizing workstations concerned with exposure to engineered nanomaterials

    Science.gov (United States)

    Canu I, Guseva; C, Ducros; S, Ducamp; L, Delabre; S, Audignon-Durand; C, Durand; Y, Iwatsubo; D, Jezewski-Serra; Bihan O, Le; S, Malard; A, Radauceanu; M, Reynier; M, Ricaud; O, Witschger

    2015-05-01

    The French national epidemiological surveillance program EpiNano aims at surveying mid- and long-term health effects possibly related to occupational exposure to either carbon nanotubes or titanium dioxide nanoparticles (TiO2). EpiNano is limited to workers potentially exposed to these nanomaterials, including their aggregates and agglomerates. In order to identify those workers during in-field industrial hygiene visits, a standardized non-instrumental method is necessary, especially for epidemiologists and occupational physicians unfamiliar with nanoparticle and nanomaterial exposure metrology. A working group, Quintet ExpoNano, including national experts in nanomaterial metrology and occupational hygiene, reviewed available methods, resources and their practice in order to develop a standardized tool for conducting company industrial hygiene visits and collecting the necessary information. This tool, entitled “Onsite technical logbook”, includes 3 parts: company, workplace, and workstation, allowing a detailed description of each task, process and surrounding exposure conditions. This logbook is intended to be completed during the company industrial hygiene visit. Each visit is conducted jointly by an industrial hygienist and an epidemiologist of the program and lasts one or two days depending on the company size. When all collected information is computerized using user-friendly software, it is possible to classify workstations with respect to their potential direct and/or indirect exposure. Workers appointed to workstations classified as concerned with exposure are considered eligible for the EpiNano program and invited to participate. Since January 2014, the Onsite technical logbook has been used in ten company visits. The companies visited were mostly involved in research and development. A total of 53 workstations with potential exposure to nanomaterials were pre-selected and observed: 5 with TiO2, 16 with single-walled carbon nanotubes, 27 multiwalled

  8. A standardized non-instrumental tool for characterizing workstations concerned with exposure to engineered nanomaterials

    International Nuclear Information System (INIS)

    I, Guseva Canu; S, Ducamp; L, Delabre; Y, Iwatsubo; D, Jezewski-Serra; C, Ducros; S, Audignon-Durand; C, Durand; O, Le Bihan; S, Malard; A, Radauceanu; M, Reynier; M, Ricaud; O, Witschger

    2015-01-01

    The French national epidemiological surveillance program EpiNano aims at surveying mid- and long-term health effects possibly related to occupational exposure to either carbon nanotubes or titanium dioxide nanoparticles (TiO2). EpiNano is limited to workers potentially exposed to these nanomaterials, including their aggregates and agglomerates. In order to identify those workers during in-field industrial hygiene visits, a standardized non-instrumental method is necessary, especially for epidemiologists and occupational physicians unfamiliar with nanoparticle and nanomaterial exposure metrology. A working group, Quintet ExpoNano, including national experts in nanomaterial metrology and occupational hygiene, reviewed available methods, resources and their practice in order to develop a standardized tool for conducting company industrial hygiene visits and collecting the necessary information. This tool, entitled “Onsite technical logbook”, includes 3 parts: company, workplace, and workstation, allowing a detailed description of each task, process and surrounding exposure conditions. This logbook is intended to be completed during the company industrial hygiene visit. Each visit is conducted jointly by an industrial hygienist and an epidemiologist of the program and lasts one or two days depending on the company size. When all collected information is computerized using user-friendly software, it is possible to classify workstations with respect to their potential direct and/or indirect exposure. Workers appointed to workstations classified as concerned with exposure are considered eligible for the EpiNano program and invited to participate. Since January 2014, the Onsite technical logbook has been used in ten company visits. The companies visited were mostly involved in research and development. A total of 53 workstations with potential exposure to nanomaterials were pre-selected and observed: 5 with TiO2, 16 with single-walled carbon nanotubes, 27 multiwalled

  9. Stand by Me: Qualitative Insights into the Ease of Use of Adjustable Workstations.

    Science.gov (United States)

    Leavy, Justine; Jancey, Jonine

    2016-01-01

    Office workers sit for more than 80% of the work day, making them an important target for worksite health promotion interventions to break up prolonged sitting time. Adjustable workstations are one strategy used to reduce prolonged sitting time. This study provides both an employees' and an employers' perspective on the advantages, disadvantages, practicality and convenience of adjustable workstations and on how movement in the office can be further supported by organisations. This qualitative study was part of the Uprising pilot study. Employees were from the intervention arm of a two-group (intervention n = 18 and control n = 18) study. Employers were the immediate line-managers of the employees. Data were collected via employee focus groups (n = 17) and employer individual interviews (n = 12). The majority of participants were female (n = 18), had healthy weight, and had a post-graduate qualification. All focus group discussions and interviews were recorded, transcribed verbatim and the data coded according to the content. Qualitative content analysis was conducted. Employee data identified four concepts: enhanced general wellbeing; workability and practicality; disadvantages of the retro-fit; and triggers to stand. Most employees (n = 12) reported enhanced general well-being; workability and practicality included less email exchange and positive interaction (n = 5), while the instability of the keyboard was a commonly cited disadvantage. Triggers to stand included time- and task-based prompts. Employer data concepts included: general health and wellbeing; work engagement; flexibility; employee morale; and injury prevention. Over half of the employers (n = 7) emphasised back care and occupational health considerations as important, as well as an increased level of staff engagement and strategies to break up prolonged periods of sitting. The focus groups highlight the perceived general health benefits from this short intervention, including opportunity to sit less and interact

  10. Stand by Me: Qualitative Insights into the Ease of Use of Adjustable Workstations

    Directory of Open Access Journals (Sweden)

    Jonine Jancey

    2016-08-01

    Full Text Available Background: Office workers sit for more than 80% of the work day, making them an important target for worksite health promotion interventions to break up prolonged sitting time. Adjustable workstations are one strategy used to reduce prolonged sitting time. This study provides both an employees’ and an employers’ perspective on the advantages, disadvantages, practicality and convenience of adjustable workstations and on how movement in the office can be further supported by organisations. This qualitative study was part of the Uprising pilot study. Employees were from the intervention arm of a two-group (intervention n = 18 and control n = 18) study. Employers were the immediate line-managers of the employees. Data were collected via employee focus groups (n = 17) and employer individual interviews (n = 12). The majority of participants were female (n = 18), had healthy weight, and had a post-graduate qualification. All focus group discussions and interviews were recorded, transcribed verbatim and the data coded according to the content. Qualitative content analysis was conducted. Results: Employee data identified four concepts: enhanced general wellbeing; workability and practicality; disadvantages of the retro-fit; and triggers to stand. Most employees (n = 12) reported enhanced general well-being; workability and practicality included less email exchange and positive interaction (n = 5), while the instability of the keyboard was a commonly cited disadvantage. Triggers to stand included time- and task-based prompts. Employer data concepts included: general health and wellbeing; work engagement; flexibility; employee morale; and injury prevention. Over half of the employers (n = 7) emphasised back care and occupational health considerations as important, as well as an increased level of staff engagement and strategies to break up prolonged periods of sitting. Discussion: The focus groups highlight the perceived general health benefits from this short

  11. Beam dynamics calculations and particle tracking using massively parallel processors

    International Nuclear Information System (INIS)

    Ryne, R.D.; Habib, S.

    1995-01-01

    During the past decade massively parallel processors (MPPs) have slowly gained acceptance within the scientific community. At present these machines typically contain a few hundred to one thousand off-the-shelf microprocessors and a total memory of up to 32 GBytes. The potential performance of these machines is illustrated by the fact that a month long job on a high end workstation might require only a few hours on an MPP. The acceptance of MPPs has been slow for a variety of reasons. For example, some algorithms are not easily parallelizable. Also, in the past these machines were difficult to program. But in recent years the development of Fortran-like languages such as CM Fortran and High Performance Fortran have made MPPs much easier to use. In the following we will describe how MPPs can be used for beam dynamics calculations and long term particle tracking

  12. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  13. Performance of MPI parallel processing implemented by MCNP5/ MCNPX for criticality benchmark problems

    International Nuclear Information System (INIS)

    Mark Dennis Usang; Mohd Hairie Rabir; Mohd Amin Sharifuldin Salleh; Mohamad Puad Abu

    2012-01-01

    MPI parallelism is implemented on a SUN workstation for running MCNPX and on the High Performance Computing Facility (HPC) for running MCNP5. 23 input files obtained from the MCNP Criticality Validation Suite are utilized for the purpose of evaluating the amount of speed-up achievable by using the parallel capabilities of MPI. More importantly, we will study the economics of using more processors and the type of problem where the performance gains are obvious. This is important to enable better practices of resource sharing, especially of the HPC facility's processing time. Future endeavours in this direction might even reveal clues for best MCNP5/MCNPX coding practices for optimum performance of MPI parallelism. (author)
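
    A first-order way to reason about the "economics" of adding processors is Amdahl's law, which bounds the achievable speed-up by the serial fraction of the run. The sketch below is a generic illustration; the serial fraction used is an assumed number, not a figure from the study.

    ```python
    # Hypothetical illustration: first-order economics of adding processors
    # via Amdahl's law; the serial fraction is an assumption, not measured data.
    def amdahl_speedup(p, serial_fraction):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

    serial_fraction = 0.05          # assumed non-parallelizable fraction of the run
    for p in (1, 2, 4, 8, 16, 32, 64):
        s = amdahl_speedup(p, serial_fraction)
        print(f"{p:3d} procs: speedup {s:5.2f}, efficiency {s / p:5.2f}")
    ```

    Efficiency falls as processors are added, which is exactly the trade-off between turnaround time and fair sharing of the HPC facility that the study raises.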

  14. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
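
    The communication pattern described above (independent walkers whose histograms are merged at intervals to update the multicanonical weights) can be illustrated with a deliberately tiny toy model. This is a sketch of the idea only, not the authors' GPU code: the "system" is just L independent spins, so the energy is the number of up spins and the density of states is binomial.

    ```python
    # Toy sketch (assumptions throughout) of the parallel multicanonical pattern:
    # independent walkers sample L spins with Metropolis moves biased by weights
    # w[E], where E is the number of up spins; their histograms are merged
    # between iterations to update the weights, mimicking the weight-update
    # communication described in the abstract.
    import random
    from multiprocessing import Pool

    L = 50  # number of spins in the toy system; E ranges over 0..L

    def walker(args):
        w, steps, seed = args
        rng = random.Random(seed)
        spins = [rng.randint(0, 1) for _ in range(L)]
        e = sum(spins)
        hist = [0] * (L + 1)
        for _ in range(steps):
            i = rng.randrange(L)
            e_new = e + (1 - 2 * spins[i])
            if rng.random() < w[e_new] / w[e]:   # multicanonical acceptance
                spins[i] = 1 - spins[i]
                e = e_new
            hist[e] += 1
        return hist

    if __name__ == "__main__":
        w = [1.0] * (L + 1)
        with Pool(4) as pool:
            for it in range(10):                 # weight-update iterations
                hists = pool.map(walker, [(w, 20000, 100 * it + k) for k in range(4)])
                total = [sum(h[e] for h in hists) for e in range(L + 1)]
                w = [w[e] / max(total[e], 1) for e in range(L + 1)]
                m = max(w)
                w = [x / m for x in w]           # rescale to keep weights well behaved
        print("weight ratio w[0]/w[L//2]:", w[0] / w[L // 2])
    ```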

  15. Parallel MCNP Monte Carlo transport calculations with MPI

    International Nuclear Information System (INIS)

    Wagner, J.C.; Haghighat, A.

    1996-01-01

    The steady increase in computational performance has made Monte Carlo calculations for large/complex systems possible. However, in order to make these calculations practical, order of magnitude increases in performance are necessary. The Monte Carlo method is inherently parallel (particles are simulated independently) and thus has the potential for near-linear speedup with respect to the number of processors. Further, the ever-increasing accessibility of parallel computers, such as workstation clusters, facilitates the practical use of parallel Monte Carlo. Recognizing the nature of the Monte Carlo method and the trends in available computing, the code developers at Los Alamos National Laboratory implemented message passing in the general-purpose Monte Carlo radiation transport code MCNP (version 4A). The PVM package was chosen by the MCNP code developers because it supports a variety of communication networks, several UNIX platforms, and heterogeneous computer systems. This PVM version of MCNP has been shown to produce speedups that approach the number of processors and thus is a very useful tool for transport analysis. Due to software incompatibilities on the local IBM SP2, PVM has not been available, and thus it is not possible to take advantage of this useful tool. Hence, it became necessary to implement an alternative message-passing library package into MCNP. Because the message-passing interface (MPI) is supported on the local system, takes advantage of the high-speed communication switches in the SP2, and is considered to be the emerging standard, it was selected.
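
    The near-linear speedup comes from the fact that particle histories are independent and only the tallies need to be combined. The sketch below shows that pattern with mpi4py and a toy "history"; it is purely illustrative and has nothing to do with the actual MCNP source. It could be launched with, for example, mpiexec -n 4 python script.py.

    ```python
    # Hypothetical sketch of the pattern behind parallel Monte Carlo transport:
    # each MPI rank runs independent particle histories and the tallies are
    # combined with a reduction. Illustration only; this is not MCNP code.
    import random
    from mpi4py import MPI

    def simulate_history(rng):
        # Toy "history": exponential path length in a slab of thickness 1;
        # tally 1 if the particle leaks through, 0 if it is absorbed.
        return 1.0 if rng.expovariate(1.0) > 1.0 else 0.0

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    histories_per_rank = 100000
    rng = random.Random(12345 + rank)          # independent random stream per rank
    local_tally = sum(simulate_history(rng) for _ in range(histories_per_rank))

    total = comm.reduce(local_tally, op=MPI.SUM, root=0)
    if rank == 0:
        print("leakage probability ~", total / (histories_per_rank * size))
    ```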

  16. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  17. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.

  18. Run-Time and Compiler Support for Programming in Adaptive Parallel Environments

    Directory of Open Access Journals (Sweden)

    Guy Edjlali

    1997-01-01

    Full Text Available For better utilization of computing resources, it is important to consider parallel programming environments in which the number of available processors varies at run-time. In this article, we discuss run-time support for data-parallel programming in such an adaptive environment. Executing programs in an adaptive environment requires redistributing data when the number of processors changes, and also requires determining new loop bounds and communication patterns for the new set of processors. We have developed a run-time library to provide this support. We discuss how the run-time library can be used by compilers of High Performance Fortran (HPF)-like languages to generate code for an adaptive environment. We present performance results for a Navier-Stokes solver and a multigrid template run on a network of workstations and an IBM SP-2. Our experiments show that if the number of processors is not varied frequently, the cost of data redistribution is not significant compared to the time required for the actual computation. Overall, our work establishes the feasibility of compiling HPF for a network of nondedicated workstations, which are likely to be an important resource for parallel programming in the future.
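
    The core run-time task described here is recomputing data ownership when the processor count changes. A minimal sketch of that bookkeeping for a block distribution is given below; it is an illustration under simple assumptions, not the interface of the library discussed in the article.

    ```python
    # Minimal sketch (not the authors' library): block-distributing a global
    # index range over p processes, and recomputing ownership when p changes
    # at run time.
    def block_bounds(n, p, rank):
        """Return [lo, hi) of global indices owned by `rank` under a block layout."""
        base, extra = divmod(n, p)
        lo = rank * base + min(rank, extra)
        hi = lo + base + (1 if rank < extra else 0)
        return lo, hi

    def redistribution_plan(n, p_old, p_new):
        """List (old_rank, new_rank, lo, hi) chunks that must move when p changes."""
        plan = []
        for new_rank in range(p_new):
            lo_new, hi_new = block_bounds(n, p_new, new_rank)
            for old_rank in range(p_old):
                lo_old, hi_old = block_bounds(n, p_old, old_rank)
                lo, hi = max(lo_new, lo_old), min(hi_new, hi_old)
                if lo < hi and old_rank != new_rank:
                    plan.append((old_rank, new_rank, lo, hi))
        return plan

    # Example: an array of 1000 rows goes from 4 to 6 available processors.
    for move in redistribution_plan(1000, 4, 6):
        print("rank %d -> rank %d : rows [%d, %d)" % move)
    ```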

  19. A massively parallel algorithm for the collision probability calculations in the Apollo-II code using the PVM library

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1995-01-01

    The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time; for a typical 2D assembly calculation, about 90% of the computing time is consumed in the collision probability evaluations. Consequently RZ or 3D calculations became prohibitive. In this paper we present a simple but efficient parallel algorithm based on the message passing host/node programming model. Parallelization was applied to the energy group treatment. Such an approach permits parallelization of the existing code, requiring only limited modifications. Sequential/parallel computer portability is preserved, which is a necessary condition for an industrial code. Sequential performances are also preserved. The algorithm is implemented on a CRAY 90 coupled to a 128 processor T3D computer, a 16 processor IBM SP1 and a network of workstations, using the Public Domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best ones with the IBM SP1. Because of the heterogeneity of the workstation network, we did not expect high performance from this architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) with a massively parallel computer, using several hundreds of processors. (author). 5 refs., 6 figs., 2 tabs

  20. A massively parallel algorithm for the collision probability calculations in the Apollo-II code using the PVM library

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1995-01-01

    The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time; for a typical 2D assembly calculation about 90% of the computing time is consumed in the collision probability evaluations. Consequently RZ or 3D calculations became prohibitive. In this paper the author presents a simple but efficient parallel algorithm based on the message passing host/node programming model. Parallelization was applied to the energy group treatment. Such an approach permits parallelization of the existing code, requiring only limited modifications. Sequential/parallel computer portability is preserved, which is a necessary condition for an industrial code. Sequential performances are also preserved. The algorithm is implemented on a CRAY 90 coupled to a 128 processor T3D computer, a 16 processor IBM SP1 and a network of workstations, using the Public Domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best ones with the IBM SP1. Because of the heterogeneity of the workstation network, the author did not expect high performance from this architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) with a massively parallel computer, using several hundreds of processors
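
    Because the parallelization is over the energy group treatment in a host/node model, the structure is essentially a host farming independent per-group computations out to nodes. The following sketch shows that structure with Python's multiprocessing module; the per-group work is a placeholder, not the Apollo-II collision probability kernel.

    ```python
    # Illustrative sketch only: the host/node pattern described above, with the
    # energy groups farmed out to worker processes. The per-group work below is
    # a stand-in for the expensive collision probability evaluation.
    from multiprocessing import Pool

    def collision_probabilities_for_group(group):
        # Placeholder for the per-group geometry integration.
        return group, sum((group + 1) / (k + 1.0) for k in range(100000))

    if __name__ == "__main__":
        n_groups = 99                       # standard 99-group library, as above
        with Pool(processes=8) as pool:     # "nodes"; the main process plays the host
            results = dict(pool.map(collision_probabilities_for_group, range(n_groups)))
        print(len(results), "groups computed")
    ```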

  1. The integrated workstation: A common, consistent link between nuclear plant personnel and plant information and computerized resources

    International Nuclear Information System (INIS)

    Wood, R.T.; Knee, H.E.; Mullens, J.A.; Munro, J.K. Jr.; Swail, B.K.; Tapp, P.A.

    1993-01-01

    The increasing use of computer technology in the US nuclear power industry has greatly expanded the capability to obtain, analyze, and present data about the plant to station personnel. Data concerning a power plant's design, configuration, operational and maintenance histories, and current status, and the information that can be derived from them, provide the link between the plant and plant staff. It is through this information bridge that operations, maintenance and engineering personnel understand and manage plant performance. However, it is necessary to transform the vast quantity of data available from various computer systems and across communications networks into clear, concise, and coherent information. In addition, it is important to organize this information into a consolidated, structured form within an integrated environment so that various users throughout the plant have ready access at their local station to knowledge necessary for their tasks. Thus, integrated workstations are needed to provide the required information and proper software tools, in a manner that can be easily understood and used, to the proper users throughout the plant. An effort is underway at the Oak Ridge National Laboratory to address this need by developing Integrated Workstation functional requirements and implementing a limited-scale prototype demonstration. The Integrated Workstation requirements will define a flexible, expandable computer environment that permits a tailored implementation of workstation capabilities and facilitates future upgrades to add enhanced applications. The functionality to be supported by the integrated workstation and the inherent capabilities to be provided by the workstation environment will be described. In addition, general technology areas which are to be addressed in the Integrated Workstation functional requirements will be discussed.

  2. Fast implementations of 3D PET reconstruction using vector and parallel programming techniques

    International Nuclear Information System (INIS)

    Guerrero, T.M.; Cherry, S.R.; Dahlbom, M.; Ricci, A.R.; Hoffman, E.J.

    1993-01-01

    Computationally intensive techniques that offer potential clinical use have arisen in nuclear medicine. Examples include iterative reconstruction, 3D PET data acquisition and reconstruction, and 3D image volume manipulation including image registration. One obstacle in achieving clinical acceptance of these techniques is the computational time required. This study focuses on methods to reduce the computation time for 3D PET reconstruction through the use of fast computer hardware, vector and parallel programming techniques, and algorithm optimization. The strengths and weaknesses of i860 microprocessor based workstation accelerator boards are investigated in implementations of 3D PET reconstruction

  3. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN]; Tejedor, E. [CERN]; Guiraud, E. [CERN]; Ganis, G. [CERN]; Mato, P. [CERN]; Moneta, L. [CERN]; Valls Pla, X. [CERN]; Canal, P. [Fermilab]

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  4. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
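
    For the multi-processing level mentioned above, the natural point of comparison (made explicitly in the abstract) is Python's own multiprocessing module: split the input into chunks, process them in independent workers, and merge the partial results. The sketch below shows only that generic pattern; the file names and the analysis function are hypothetical, and this is not ROOT's MultiProc API.

    ```python
    # Generic multi-process "map over data chunks" pattern, given as a point of
    # comparison with ROOT's MultiProc framework. Inputs and the analysis
    # function are hypothetical placeholders.
    from multiprocessing import Pool

    def analyse_chunk(path):
        # Placeholder analysis: a real workflow would open the file,
        # loop over events and fill histograms.
        n_events = 10000
        return {"events": n_events, "selected": n_events // 7}

    if __name__ == "__main__":
        chunks = [f"data_{i}.root" for i in range(8)]        # hypothetical inputs
        with Pool(processes=4) as pool:
            partial = pool.map(analyse_chunk, chunks)
        merged = {k: sum(d[k] for d in partial) for k in partial[0]}
        print(merged)
    ```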

  5. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were

  6. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  7. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)]

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  8. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  9. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
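
    As a reference point for what is being parallelized, the serial k-means++ seeding procedure is sketched below; in the GPU, OpenMP and XMT versions described above, it is the per-point distance computations inside the loop that are spread across threads or cores. This is a generic sketch, not the released C++ code.

    ```python
    # Sketch of k-means++ seed selection (serial reference version). The
    # distance computation marked below is the part that parallel
    # implementations spread across GPU threads or CPU cores.
    import numpy as np

    def kmeanspp_seeds(points, k, rng=np.random.default_rng(0)):
        n = len(points)
        seeds = [points[rng.integers(n)]]                 # first seed: uniform choice
        d2 = np.full(n, np.inf)
        for _ in range(k - 1):
            # distance of every point to its nearest chosen seed (parallelizable)
            d2 = np.minimum(d2, ((points - seeds[-1]) ** 2).sum(axis=1))
            probs = d2 / d2.sum()                         # D^2 weighting
            seeds.append(points[rng.choice(n, p=probs)])
        return np.array(seeds)

    if __name__ == "__main__":
        pts = np.random.default_rng(1).normal(size=(10000, 2))
        print(kmeanspp_seeds(pts, 5))
    ```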

  10. Parallel plate detectors

    International Nuclear Information System (INIS)

    Gardes, D.; Volkov, P.

    1981-01-01

    5×3 cm² (timing only) and 15×5 cm² (timing and position) parallel plate avalanche counters (PPACs) are considered. The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters [fr]

  11. A workstation-integrated peer review quality assurance program: pilot study

    Science.gov (United States)

    2013-01-01

    Background The surrogate indicator of radiological excellence that has become accepted is consistency of assessments between radiologists, and the technique that has become the standard for evaluating concordance is peer review. This study describes the results of a workstation-integrated peer review program in a busy outpatient radiology practice. Methods Workstation-based peer review was performed using the software program Intelerad Peer Review. Cases for review were randomly chosen from those being actively reported. If an appropriate prior study was available, and if the reviewing radiologist and the original interpreting radiologist had not exceeded review targets, the case was scored using the modified RADPEER system. Results There were 2,241 cases randomly assigned for peer review. Of selected cases, 1,705 (76%) were interpreted. Reviewing radiologists agreed with prior reports in 99.1% of assessments. Positive feedback (score 0) was given in three cases (0.2%) and concordance (scores of 0 to 2) was assigned in 99.4%, similar to reported rates of 97.0% to 99.8%. Clinically significant discrepancies (scores of 3 or 4) were identified in 10 cases (0.6%). Eighty-eight percent of reviewed radiologists found the reviews worthwhile, 79% found scores appropriate, and 65% felt feedback was appropriate. Two-thirds of radiologists found case rounds discussing significant discrepancies to be valuable. Conclusions The workstation-based computerized peer review process used in this pilot project was seamlessly incorporated into the normal workday and met most criteria for an ideal peer review system. Clinically significant discrepancies were identified in 0.6% of cases, similar to published outcomes using the RADPEER system. Reviewed radiologists felt the process was worthwhile. PMID:23822583

  12. An advanced tube wear and fatigue workstation to predict flow induced vibrations of steam generator tubes

    International Nuclear Information System (INIS)

    Gay, N.; Baratte, C.; Flesch, B.

    1997-01-01

    Flow induced tube vibration damage is a major concern for designers and operators of nuclear power plant steam generators (SG). The operating flow-induced vibrational behaviour has to be estimated accurately to allow a precise evaluation of the new safety margins in order to optimize the maintenance policy. For this purpose, an industrial 'Tube Wear and Fatigue Workstation', called the 'GEVIBUS Workstation' and based on an advanced methodology for predictive analysis of flow-induced vibration of tube bundles subject to cross-flow, has been developed at Electricite de France. The GEVIBUS Workstation is an interactive processor linking modules such as: thermal-hydraulic computation; parametric finite element builder; interface between the finite element model, the thermal-hydraulic code and the vibratory response computations; refined modelling of fluid-elastic and random forces; linear and non-linear dynamic response of the coupled fluid-structure system; evaluation of tube damage due to fatigue and wear; and graphical outputs. Two practical applications are also presented in the paper; the first simulation refers to an experimental set-up consisting of a straight tube bundle subject to water cross-flow, while the second one deals with an industrial configuration which has been observed in some operating steam generators, i.e., top tube support plate degradation. In the first case the GEVIBUS predictions in terms of tube displacement time histories and phase planes were found to be in very good agreement with experiment. In the second application the GEVIBUS computation showed that a tube with localized degradation is much more stable than a tube located in an extended degradation zone. Important conclusions are also drawn concerning maintenance. (author)

  13. A workstation-integrated peer review quality assurance program: pilot study

    International Nuclear Information System (INIS)

    O’Keeffe, Margaret M; Davis, Todd M; Siminoski, Kerry

    2013-01-01

    The surrogate indicator of radiological excellence that has become accepted is consistency of assessments between radiologists, and the technique that has become the standard for evaluating concordance is peer review. This study describes the results of a workstation-integrated peer review program in a busy outpatient radiology practice. Workstation-based peer review was performed using the software program Intelerad Peer Review. Cases for review were randomly chosen from those being actively reported. If an appropriate prior study was available, and if the reviewing radiologist and the original interpreting radiologist had not exceeded review targets, the case was scored using the modified RADPEER system. There were 2,241 cases randomly assigned for peer review. Of selected cases, 1,705 (76%) were interpreted. Reviewing radiologists agreed with prior reports in 99.1% of assessments. Positive feedback (score 0) was given in three cases (0.2%) and concordance (scores of 0 to 2) was assigned in 99.4%, similar to reported rates of 97.0% to 99.8%. Clinically significant discrepancies (scores of 3 or 4) were identified in 10 cases (0.6%). Eighty-eight percent of reviewed radiologists found the reviews worthwhile, 79% found scores appropriate, and 65% felt feedback was appropriate. Two-thirds of radiologists found case rounds discussing significant discrepancies to be valuable. The workstation-based computerized peer review process used in this pilot project was seamlessly incorporated into the normal workday and met most criteria for an ideal peer review system. Clinically significant discrepancies were identified in 0.6% of cases, similar to published outcomes using the RADPEER system. Reviewed radiologists felt the process was worthwhile

  14. Differences in ergonomic and workstation factors between computer office workers with and without reported musculoskeletal pain.

    Science.gov (United States)

    Rodrigues, Mirela Sant'Ana; Leite, Raquel Descie Veraldi; Lelis, Cheila Maira; Chaves, Thaís Cristina

    2017-01-01

    Some studies have suggested a causal relationship between computer work and the development of musculoskeletal disorders. However, studies considering the use of specific tools to assess workplace ergonomics and psychosocial factors in computer office workers with and without reported musculoskeletal pain are scarce. The aim of this study was to compare the ergonomic, physical, and psychosocial factors in computer office workers with and without reported musculoskeletal pain (MSP). Thirty-five computer office workers (aged 18-55 years) participated in the study. The following evaluations were completed: Rapid Upper Limb Assessment (RULA), Rapid Office Strain Assessment (ROSA), and the Maastricht Upper Extremity Questionnaire revised Brazilian Portuguese version (MUEQ-Br revised). Student t-tests were used to make comparisons between groups. The computer office workers were divided into two groups: workers with reported MSP (WMSP, n = 17) and workers without reported MSP (WOMSP, n = 18). Those in the WMSP group showed significantly greater mean values in the total ROSA score (WMSP: 6.71 [95% CI: 6.20-7.21] and WOMSP: 5.88 [95% CI: 5.37-6.39], p = 0.01). The WMSP group also showed higher scores in the chair section of ROSA, the workstation section of the MUEQ-Br revised, and the upper limb RULA score. The chair height and armrest sections from ROSA showed higher mean values in workers with MSP compared to workers without MSP. A positive moderate correlation was observed between ROSA and RULA total scores (R = 0.63). Workers with MSP showed worse ergonomic indexes for the chair workstation and worse physical risk related to the upper limb (RULA upper limb section) than workers without pain. However, there were no observed differences between workers with and without MSP regarding work-related psychosocial factors. The results suggest that inadequate workstation conditions, specifically the chair height, arm and back rest, are linked to improper upper limb postures and that these factors are contributing to

  15. Comparison of radiant and convective cooling of office room: effect of workstation layout

    DEFF Research Database (Denmark)

    Bolashikov, Zhecho Dimitrov; Melikov, Arsen Krikor; Rezgals, Lauris

    2014-01-01

    The impact of heat source location (room layout) on the thermal environment generated in a double office room with four cooling ventilation systems - overhead ventilation, chilled ceiling with overhead ventilation, active chilled beam, and active chilled beam with radiant panels - was measured and compared. The room was furnished with two workstations, two laptops and two thermal manikins resembling occupants. Two heat load levels, design (65 W/m²) and usual (39 W/m²), were generated by adding heat from warm panels simulating solar radiation. Two set-ups were studied: occupants sitting...

  16. Microbial Diagnostic Array Workstation (MDAW): a web server for diagnostic array data storage, sharing and analysis

    Directory of Open Access Journals (Sweden)

    Chang Yung-Fu

    2008-09-01

    Full Text Available Abstract Background: Microarrays are becoming a very popular tool for microbial detection and diagnostics. Although these diagnostic arrays are much simpler when compared to the traditional transcriptome arrays, due to the high-throughput nature of the arrays the data analysis requirements still form a bottleneck for the widespread use of these diagnostic arrays. Hence we developed a new online data sharing and analysis environment customised for diagnostic arrays. Methods: The Microbial Diagnostic Array Workstation (MDAW) is a database-driven application designed in MS Access with a front end designed in ASP.NET. Conclusion: MDAW is a new resource that is customised for the data analysis requirements of microbial diagnostic arrays.

  17. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    Energy Technology Data Exchange (ETDEWEB)

    PLIMPTON,STEVEN J.; SEIDEL,DAVID B.; PASIK,MICHAEL F.; COATS,REBECCA S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.

  18. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    International Nuclear Information System (INIS)

    Plimpton, Steven J.; Seidel, David B.; Pasik, Michael F.; Coats, Rebecca S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER
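
    As a toy illustration of one load-balancing idea for particle-in-cell codes (not the scheme actually used in QUICKSILVER), the sketch below assigns contiguous 1-D grid slabs to processors so that the particle counts per processor come out roughly equal.

    ```python
    # Illustration only: greedy assignment of contiguous grid slabs to
    # processors so that particle counts per processor are roughly balanced.
    def balanced_slabs(particles_per_cell, nproc):
        total = sum(particles_per_cell)
        target = total / nproc
        cuts, running = [], 0
        for i, n in enumerate(particles_per_cell):
            running += n
            # close a slab once it reaches the target, up to nproc - 1 cuts
            if running >= target and len(cuts) < nproc - 1:
                cuts.append(i + 1)
                running = 0
        bounds = [0] + cuts + [len(particles_per_cell)]
        return [(bounds[r], bounds[r + 1]) for r in range(len(bounds) - 1)]

    # Example: a hot spot of particles in the middle of a 1-D grid.
    counts = [10] * 40 + [200] * 20 + [10] * 40
    for rank, (lo, hi) in enumerate(balanced_slabs(counts, 4)):
        print(f"rank {rank}: cells [{lo}, {hi}), particles {sum(counts[lo:hi])}")
    ```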

  19. Implementation of Active Workstations in University Libraries—A Comparison of Portable Pedal Exercise Machines and Standing Desks

    Directory of Open Access Journals (Sweden)

    Camille Bastien Tardif

    2018-06-01

    Full Text Available Sedentary behaviors are an important issue worldwide, as prolonged sitting time has been associated with health problems. Recently, active workstations have been developed as a strategy to counteract sedentary behaviors. The present study examined the rationale and perceptions of university students and staff following their first use of an active workstation in library settings. Ninety-nine volunteers completed a self-administered questionnaire after using a portable pedal exercise machine (PPEM) or a standing desk (SD). Computer tasks were performed on the SD (p = 0.001) and paperwork tasks on a PPEM (p = 0.037) to a larger extent. Men preferred the SD and women chose the PPEM (p = 0.037). The appreciation of the PPEM was revealed to be higher than for the SD, due to its higher scores for the effective, useful, functional, convenient, and comfortable dimensions. Younger participants (<25 years of age) found the active workstation more pleasant to use than older participants, and participants who spent between 4 to 8 h per day in a seated position found active workstations more effective and convenient than participants sitting fewer than 4 h per day. The results of this study are a preliminary step to better understanding the feasibility and acceptability of active workstations on university campuses.

  20. Direct and iterative algorithms for the parallel solution of the one-dimensional macroscopic Navier-Stokes equations

    International Nuclear Information System (INIS)

    Doster, J.M.; Sills, E.D.

    1986-01-01

    Current efforts are under way to develop and evaluate numerical algorithms for the parallel solution of the large sparse matrix equations associated with the finite difference representation of the macroscopic Navier-Stokes equations. Previous work has shown that these equations can be cast into smaller coupled matrix equations suitable for solution utilizing multiple computer processors operating in parallel. The individual processors themselves may exhibit parallelism through the use of vector pipelines. This work has concentrated on the one-dimensional drift flux form of the Navier-Stokes equations. Direct and iterative algorithms that may be suitable for implementation on parallel computer architectures are evaluated in terms of accuracy and overall execution speed. This work has application to engineering and training simulations, on-line process control systems, and engineering workstations where increased computational speeds are required.
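
    As a concrete, deliberately simplified example of the iterative class of algorithms mentioned above, the sketch below applies a Jacobi iteration to a tridiagonal system of the kind produced by a 1-D finite difference discretization. Each unknown's update touches only its two neighbours, which is the property that lets rows be partitioned across processors with a small halo exchange at the partition boundaries. The matrix here is a generic example, not the drift flux system itself.

    ```python
    # Sketch of a Jacobi-type iteration for a 1-D tridiagonal system; the
    # nearest-neighbour data dependence is what makes row-wise partitioning
    # across processors straightforward.
    import numpy as np

    def jacobi_tridiag(lower, diag, upper, rhs, iters=500):
        n = len(rhs)
        x = np.zeros(n)
        for _ in range(iters):
            x_new = np.empty(n)
            x_new[0] = (rhs[0] - upper[0] * x[1]) / diag[0]
            x_new[1:-1] = (rhs[1:-1] - lower[1:-1] * x[:-2] - upper[1:-1] * x[2:]) / diag[1:-1]
            x_new[-1] = (rhs[-1] - lower[-1] * x[-2]) / diag[-1]
            x = x_new
        return x

    n = 100
    x = jacobi_tridiag(np.full(n, -1.0), np.full(n, 2.0), np.full(n, -1.0), np.ones(n))
    print("sample residual at i=50:", 2 * x[50] - x[49] - x[51] - 1.0)
    ```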

  1. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
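
    The two-phase scheme described above (first classify objects by the grid portion that bounds them, then let each processor populate its own portion) can be sketched as follows with point-like objects and a one-dimensional grid split; this is an illustration of the idea, not the patented implementation.

    ```python
    # Rough sketch of the two-phase parallel grid population scheme, using
    # point-like objects and a 1-D split of the grid into N_PORTIONS portions.
    from multiprocessing import Pool

    N_PORTIONS = 4
    GRID_MAX = 100.0

    def portion_of(x):
        return min(int(x / (GRID_MAX / N_PORTIONS)), N_PORTIONS - 1)

    def classify(objects):
        """Phase 1: decide which grid portion bounds each object in this set."""
        buckets = {p: [] for p in range(N_PORTIONS)}
        for obj_id, x in objects:
            buckets[portion_of(x)].append((obj_id, x))
        return buckets

    def populate(args):
        """Phase 2: one processor fills its own grid portion."""
        portion, objects = args
        return portion, sorted(objects)

    if __name__ == "__main__":
        objects = [(i, (i * 37.0) % GRID_MAX) for i in range(1000)]
        sets = [objects[i::N_PORTIONS] for i in range(N_PORTIONS)]   # distinct object sets
        with Pool(N_PORTIONS) as pool:
            partial = pool.map(classify, sets)                        # phase 1
            merged = {p: sum((b[p] for b in partial), []) for p in range(N_PORTIONS)}
            grid = dict(pool.map(populate, merged.items()))           # phase 2
        print({p: len(v) for p, v in grid.items()})
    ```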

  2. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  3. More parallel please

    DEFF Research Database (Denmark)

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    Abstract [en] More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give

  4. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Moving mechanical systems with parallel structures are solid, fast, and accurate. Among parallel systems, Stewart platforms stand out as the oldest such systems, being fast, solid and precise. The work outlines a few main elements of Stewart platforms. It begins with the platform geometry and its kinematic elements, and then presents a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile platform is then described using a rotation matrix method. If a structural motor element consists of two moving elements which translate relative to each other, it is more convenient, for the drive train and especially for the dynamics, to represent the motor element as a single moving component. There are thus seven moving parts (the six motor elements, or legs, plus the mobile platform as the seventh) and one fixed part.

  5. A computer graphics pilot project - Spacecraft mission support with an interactive graphics workstation

    Science.gov (United States)

    Hagedorn, John; Ehrner, Marie-Jacqueline; Reese, Jodi; Chang, Kan; Tseng, Irene

    1986-01-01

    The NASA Computer Graphics Pilot Project was undertaken to enhance the quality control, productivity and efficiency of mission support operations at the Goddard Operations Support Computing Facility. The Project evolved into a set of demonstration programs for graphics-intensive simulated control room operations, particularly in connection with the complex space missions that began in the 1980s. Complex missions mean more data. Graphic displays are a means to reduce the probabilities of operator errors. Workstations were selected with 1024 x 768 pixel color displays controlled by a custom VLSI chip coupled to an MC68010 chip running UNIX within a shell that permits operations through the medium of mouse-accessed pulldown window menus. The distributed workstations run off a host NAS 8040 computer. Applications of the system for tracking spacecraft orbits and monitoring Shuttle payload handling illustrate the system capabilities, noting the built-in capabilities of shifting the point of view and rotating and zooming in on three-dimensional views of spacecraft.

  6. Evaluation of total workstation CT interpretation quality: a single-screen pilot study

    Science.gov (United States)

    Beard, David V.; Perry, John R.; Muller, Keith E.; Misra, Ram B.; Brown, P.; Hemminger, Bradley M.; Johnston, Richard E.; Mauro, J. Matthew; Jaques, P. F.; Schiebler, M.

    1991-07-01

    An interpretation report, generated with an electronic viewbox, is affected by two factors: image quality, which encompasses what can be seen on the display, and computer-human interaction (CHI), which accounts for the cognitive load effect of locating, moving, and manipulating images with the workstation controls. While a number of subject experiments have considered image quality, only recently has the effect of CHI on total interpretation quality been measured. This paper presents the results of a pilot study conducted to evaluate the total interpretation quality of the FilmPlane2.2 radiology workstation for patient folders containing single forty-slice CT studies. First, radiologists interpreted cases and dictated reports using FilmPlane2.2. Requisition forms were provided. Film interpretation was provided by the original clinical report and interpretation forms generated from a previous experiment. Second, an evaluator developed a list of findings for each case based on those listed in all the reports for each case and then evaluated each report for its response on each finding. Third, the reports were compared to determine how well they agreed with one another. Interpretation speed and observation data were also gathered.

  7. Automated processing of forensic casework samples using robotic workstations equipped with nondisposable tips: contamination prevention.

    Science.gov (United States)

    Frégeau, Chantal J; Lett, C Marc; Elliott, Jim; Yensen, Craig; Fourney, Ron M

    2008-05-01

    An automated process has been developed for the analysis of forensic casework samples using TECAN Genesis RSP 150/8 or Freedom EVO liquid handling workstations equipped exclusively with nondisposable tips. Robot tip cleaning routines have been incorporated strategically within the DNA extraction process as well as at the end of each session. Alternative options were examined for cleaning the tips and different strategies were employed to verify cross-contamination. A 2% sodium hypochlorite wash (1/5th dilution of the 10.8% commercial bleach stock) proved to be the best overall approach for preventing cross-contamination of samples processed using our automated protocol. The bleach wash steps do not adversely impact the short tandem repeat (STR) profiles developed from DNA extracted robotically and allow for major cost savings through the implementation of fixed tips. We have demonstrated that robotic workstations equipped with fixed pipette tips can be used with confidence with properly designed tip washing routines to process casework samples using an adapted magnetic bead extraction protocol.

  8. Progress of data processing system in JT-60 utilizing the UNIX-based workstations

    International Nuclear Information System (INIS)

    Sakata, Shinya; Kiyono, Kimihiro; Oshima, Takayuki; Sato, Minoru; Ozeki, Takahisa

    2007-07-01

    The JT-60 data processing system (DPS) possesses a three-level hierarchy. At the top level of the hierarchy is the JT-60 inter-shot processor (MSP-ISP), a mainframe computer, which provides communication with the JT-60 supervisory control system and supervises the internal communication inside the DPS. The middle level of the hierarchy has minicomputers and the bottom level of the hierarchy has individual diagnostic subsystems, which consist of CAMAC and VME modules. To meet the demand for advanced diagnostics, the DPS has progressed in stages from a three-level hierarchy system, which was dependent on the processing power of the MSP-ISP, to a two-level hierarchy system, which is a decentralized data processing system (New-DPS), by utilizing UNIX-based workstations and network technology. This replacement has been accomplished, and the New-DPS started operation in October 2005. In this report, we describe the development and improvement of the New-DPS, whose functions were decentralized from the MSP-ISP to the UNIX-based workstations. (author)

  9. SunFast: A sun workstation based, fuel analysis scoping tool for pressurized water reactors

    International Nuclear Information System (INIS)

    Bohnhoff, W.J.

    1991-05-01

    The objective of this research was to develop a fuel cycle scoping program for light water reactors and implement the program on a workstation class computer. Nuclear fuel management problems are quite formidable due to the many fuel arrangement options available. Therefore, an engineer must perform multigroup diffusion calculations for a variety of different strategies in order to determine an optimum core reload. Standard fine mesh finite difference codes result in a considerable computational cost. A better approach is to build upon the proven reliability of currently available mainframe computer programs, and improve the engineering efficiency by taking advantage of the most useful characteristic of workstations: enhanced man/machine interaction. This dissertation contains a description of the methods and a user's guide for the interactive fuel cycle scoping program, SunFast. SunFast provides computational speed and accuracy of solution along with a synergetic coupling between the user and the machine. It should prove to be a valuable tool when extensive sets of similar calculations must be done at a low cost as is the case for assessing fuel management strategies. 40 refs

  10. Graphical user interface for a robotic workstation in a surgical environment.

    Science.gov (United States)

    Bielski, A; Lohmann, C P; Maier, M; Zapp, D; Nasseri, M A

    2016-08-01

    Surgery using a robotic system has proven to have significant potential but is still a highly challenging task for the surgeon. An eye surgery assistant has been developed to eliminate the problem of tremor caused by human motions endangering the outcome of ophthalmic surgery. In order to exploit the full potential of the robot and improve the workflow of the surgeon, providing the ability to change control parameters live in the system as well as the ability to connect additional ancillary systems is necessary. Additionally the surgeon should always be able to get an overview over the status of all systems with a quick glance. Therefore a workstation has been built. The contribution of this paper is the design and the implementation of an intuitive graphical user interface for this workstation. The interface has been designed with feedback from surgeons and technical staff in order to ensure its usability in a surgical environment. Furthermore, the system was designed with the intent of supporting additional systems with minimal additional effort.

  11. Integrating UNIX workstation into existing online data acquisition systems for Fermilab experiments

    International Nuclear Information System (INIS)

    Oleynik, G.

    1991-03-01

    With the availability of cost-effective computing power from multiple vendors of UNIX workstations, experiments at Fermilab are adding such computers to their VMS-based online data acquisition systems. In anticipation of this trend, we have extended the software products available in our widely used VAXONLINE and PANDA data acquisition software systems to provide support for integrating these workstations into existing distributed online systems. The software packages we are providing pave the way for the smooth migration of applications from the current Data Acquisition Host and Monitoring computers running the VMS operating system to UNIX-based computers of various flavors. We report on software for Online Event Distribution from VAXONLINE and PANDA, integration of Message Reporting Facilities, and a framework under UNIX for experiments to monitor and view the raw event data produced at any level in their DA system. We have developed software that allows host UNIX computers to communicate with intelligent front-end embedded read-out controllers and processor boards running the pSOS operating system. Both RS-232 and Ethernet control paths are supported. This enables calibration and hardware monitoring applications to be migrated to these platforms. 6 refs., 5 figs

  12. Children and computer use in the home: workstations, behaviors and parental attitudes.

    Science.gov (United States)

    Kimmerly, Lisa; Odell, Dan

    2009-01-01

    This study examines the home computer use of 26 children (aged 6-18) in ten upper middle class families using direct observation, typing tests, questionnaires and semi-structured interviews. The goals of the study were to gather information on how children use computers in the home and to understand how both parents and children perceive this computer use. Large variations were seen in computing skills, behaviors, and opinions, as well as equipment and workstation setups. Typing speed averaged over 40 words per minute for children over 13 years old, and less than 10 words per minute for children younger than 10. The results show that for this sample, Repetitive Stress Injury (RSI) concerns ranked very low among parents, whereas security and privacy concerns ranked much higher. Meanwhile, children's behaviors and workstations were observed to place children in awkward working postures. Photos showing common postures are presented. The greatest opportunity to improve children's work postures appears to be in providing properly-sized work surfaces and chairs, as well as education. Possible explanations for the difference between parental perception of computing risks and the physical reality of children's observed ergonomics are discussed and ideas for further research are proposed.

  13. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.; Rockwell, V.S.

    1992-08-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization plans for Word Processors, Personal Computers, Workstations, and Associated Software (ANL/TM, Revision 4) to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference document that (1) documents the plans of each organization for office automation, (2) identifies appropriate planners and other contact people in those organizations and (3) encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations (ANL/TM 458) and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan (ANL/TM 466).

  14. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software. Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.; Rockwell, V.S.

    1992-08-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization plans for Word Processors, Personal Computers, Workstations, and Associated Software (ANL/TM, Revision 4) to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference document that (1) documents the plans of each organization for office automation, (2) identifies appropriate planners and other contact people in those organizations and (3) encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations (ANL/TM 458) and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan (ANL/TM 466).

  15. Using RGB-D sensors and evolutionary algorithms for the optimization of workstation layouts.

    Science.gov (United States)

    Diego-Mas, Jose Antonio; Poveda-Bautista, Rocio; Garzon-Leal, Diana

    2017-11-01

    RGB-D sensors can collect postural data in an automatized way. However, the application of these devices in real work environments requires overcoming problems such as lack of accuracy or body parts' occlusion. This work presents the use of RGB-D sensors and genetic algorithms for the optimization of workstation layouts. RGB-D sensors are used to capture workers' movements when they reach objects on workbenches. Collected data are then used to optimize workstation layout by means of genetic algorithms considering multiple ergonomic criteria. Results show that typical drawbacks of using RGB-D sensors for body tracking are not a problem for this application, and that the combination with intelligent algorithms can automatize the layout design process. The procedure described can be used to automatically suggest new layouts when workers or processes of production change, to adapt layouts to specific workers based on their ways to do the tasks, or to obtain layouts simultaneously optimized for several production processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
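
    To make the optimization step concrete, here is a minimal genetic-algorithm sketch for a layout problem of this kind: assign frequently reached objects to low-cost (nearby) positions. The reach frequencies and position costs are made-up stand-ins for the RGB-D-derived data, and the operators are generic, not those of the published method.

    ```python
    # Minimal genetic-algorithm sketch for a single-criterion layout problem.
    # FREQ and DIST are hypothetical stand-ins for sensor-derived reach data.
    import random

    FREQ = [30, 5, 12, 20, 8, 2]           # how often each object is reached
    DIST = [1.0, 1.2, 1.5, 2.0, 2.5, 3.0]  # ergonomic "cost" of each bench position

    def cost(layout):                      # layout[i] = position assigned to object i
        return sum(FREQ[i] * DIST[p] for i, p in enumerate(layout))

    def crossover(a, b, rng):
        cut = rng.randrange(1, len(a))
        return a[:cut] + [p for p in b if p not in a[:cut]]   # order-preserving

    def mutate(layout, rng):
        i, j = rng.sample(range(len(layout)), 2)
        layout[i], layout[j] = layout[j], layout[i]

    def evolve(generations=200, pop_size=30, seed=0):
        rng = random.Random(seed)
        pop = [rng.sample(range(len(FREQ)), len(FREQ)) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)
            survivors = pop[: pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                child = crossover(*rng.sample(survivors, 2), rng)
                if rng.random() < 0.2:
                    mutate(child, rng)
                children.append(child)
            pop = survivors + children
        return min(pop, key=cost)

    best = evolve()
    print("layout (object -> position):", best, " cost:", cost(best))
    ```

    A real formulation would replace the single cost with the multiple ergonomic criteria mentioned in the abstract, typically as a weighted sum or a multi-objective fitness.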

  16. Active workstation allows office workers to work efficiently while sitting and exercising moderately.

    Science.gov (United States)

    Koren, Katja; Pišot, Rado; Šimunič, Boštjan

    2016-05-01

    To determine the effects of a moderate-intensity active workstation on time and errors during simulated office work, this study analysed simultaneous work and exercise for non-sedentary office workers. We monitored oxygen uptake, heart rate, sweating stain area, self-perceived effort, typing test time with typing error count, and cognitive performance during 30 min of exercise with no cycling or cycling at 40 and 80 W. Compared to baseline, we found increased physiological responses at 40 and 80 W, which correspond to moderate physical activity (PA). Typing time significantly increased by 7.3% (p = 0.002) in C40W and by 8.9% (p = 0.011) in C80W. Typing error count and cognitive performance were unchanged. Although moderate-intensity exercise performed on a cycling workstation during simulated office tasks increases working task execution time, the effect size is moderate; moreover, it does not increase the error rate. Participants confirmed that such a working design is suitable for achieving the minimum standards for daily PA during work hours. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  17. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  18. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  19. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  20. SU-F-T-256: 4D IMRT Planning Using An Early Prototype GPU-Enabled Eclipse Workstation

    Energy Technology Data Exchange (ETDEWEB)

    Hagan, A; Modiri, A; Sawant, A [University of Maryland in Baltimore, Baltimore, MD (United States); Svatos, M [Varian Medical Systems, Palo Alto, CA (United States)

    2016-06-15

    Purpose: True 4D IMRT planning, based on simultaneous spatiotemporal optimization, has been shown to significantly improve plan quality in lung radiotherapy. However, the high computational complexity associated with such planning represents a significant barrier to widespread clinical deployment. We introduce an early prototype GPU-enabled Eclipse workstation for inverse planning. To our knowledge, this is the first GPU-integrated Eclipse system demonstrating the potential for clinical translation of GPU computing on a major commercially available TPS. Methods: The prototype system comprised four NVIDIA Tesla K80 GPUs, with a maximum processing capability of 8.5 Tflops per K80 card. The system architecture consisted of three key modules: (i) a GPU-based inverse planning module using a highly parallelizable, swarm intelligence-based global optimization algorithm, (ii) a GPU-based open-source b-spline deformable image registration module, Elastix, and (iii) a CUDA-based data management module. For evaluation, aperture fluence weights in an IMRT plan were optimized over 9 beams, 166 apertures and 10 respiratory phases (14940 variables) for a lung cancer case (GTV = 95 cc, right lower lobe, 15 mm cranio-caudal motion). Sensitivity of the planning time and memory expense to parameter variations was quantified. Results: GPU-based inverse planning was significantly accelerated compared to its CPU counterpart (36 vs 488 min, for 10 phases, 10 search agents and 10 iterations). The optimized IMRT plan significantly improved OAR sparing compared to the original internal target volume (ITV)-based clinical plan, while maintaining prescribed tumor coverage. The dose-sparing improvements were: Esophagus Dmax 50%, Heart Dmax 42% and Spinal cord Dmax 25%. Conclusion: Our early prototype system demonstrates that through massive parallelization, computationally intense tasks such as 4D treatment planning can be accomplished in clinically feasible timeframes. With further

  1. SU-F-T-256: 4D IMRT Planning Using An Early Prototype GPU-Enabled Eclipse Workstation

    International Nuclear Information System (INIS)

    Hagan, A; Modiri, A; Sawant, A; Svatos, M

    2016-01-01

    Purpose: True 4D IMRT planning, based on simultaneous spatiotemporal optimization, has been shown to significantly improve plan quality in lung radiotherapy. However, the high computational complexity associated with such planning represents a significant barrier to widespread clinical deployment. We introduce an early prototype GPU-enabled Eclipse workstation for inverse planning. To our knowledge, this is the first GPU-integrated Eclipse system demonstrating the potential for clinical translation of GPU computing on a major commercially available TPS. Methods: The prototype system comprised four NVIDIA Tesla K80 GPUs, with a maximum processing capability of 8.5 Tflops per K80 card. The system architecture consisted of three key modules: (i) a GPU-based inverse planning module using a highly parallelizable, swarm intelligence-based global optimization algorithm, (ii) a GPU-based open-source b-spline deformable image registration module, Elastix, and (iii) a CUDA-based data management module. For evaluation, aperture fluence weights in an IMRT plan were optimized over 9 beams, 166 apertures and 10 respiratory phases (14940 variables) for a lung cancer case (GTV = 95 cc, right lower lobe, 15 mm cranio-caudal motion). Sensitivity of the planning time and memory expense to parameter variations was quantified. Results: GPU-based inverse planning was significantly accelerated compared to its CPU counterpart (36 vs 488 min, for 10 phases, 10 search agents and 10 iterations). The optimized IMRT plan significantly improved OAR sparing compared to the original internal target volume (ITV)-based clinical plan, while maintaining prescribed tumor coverage. The dose-sparing improvements were: Esophagus Dmax 50%, Heart Dmax 42% and Spinal cord Dmax 25%. Conclusion: Our early prototype system demonstrates that through massive parallelization, computationally intense tasks such as 4D treatment planning can be accomplished in clinically feasible timeframes. With further

  2. Implementation of PHENIX trigger algorithms on massively parallel computers

    International Nuclear Information System (INIS)

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

    The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms which identify potentially interesting events and reduce the data acquisition rate to levels that are manageable by the electronics. Such algorithms being parallel in nature can be simulated off-line using massively parallel computers. The PHENIX experiment intends to investigate the possible existence of a new phase of matter called the quark gluon plasma which has been theorized to have existed in very early stages of the evolution of the universe by studying collisions of heavy nuclei at ultra-relativistic energies. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze and also simulate them at rates similar to the data collection ones imposes enormous computation demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for the simulation of the PHENIX detector response. Fine and coarse grain approaches have been studied and evaluated. Depending on the application the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single instruction and multiple instruction computers is also made and possible applications of the single instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  3. A Parallel Processing Algorithm for Remote Sensing Classification

    Science.gov (United States)

    Gualtieri, J. Anthony

    2005-01-01

    A current thread in parallel computation is the use of cluster computers created by networking a few to thousands of commodity general-purpose workstation-level computers using the Linux operating system. For example, on the Medusa cluster at NASA/GSFC, this provides supercomputing performance, 130 Gflops (Linpack benchmark), at moderate cost, $370K. However, to be useful for scientific computing in the area of Earth science, issues of ease of programming, access to existing scientific libraries, and portability of existing code need to be considered. In this paper, I address these issues in the context of tools for rendering earth science remote sensing data into useful products. In particular, I focus on a problem that can be decomposed into a set of independent tasks, which on a serial computer would be performed sequentially, but which with a cluster computer can be performed in parallel, giving an obvious speedup. To make the ideas concrete, I consider the problem of classifying hyperspectral imagery where some ground truth is available to train the classifier. In particular I will use the Support Vector Machine (SVM) approach as applied to hyperspectral imagery. The approach will be to introduce notions about parallel computation and then to restrict the development to the SVM problem. Pseudocode (an outline of the computation) will be described and then details specific to the implementation will be given. Then timing results will be reported to show what speedups are possible using parallel computation. The paper will close with a discussion of the results.
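
    A minimal sketch of the task-farming pattern described above: each image tile is an independent task handed to a pool of workers (on a Beowulf-style cluster the workers would be nodes or MPI ranks rather than local processes). The toy decision rule stands in for the real per-pixel SVM classification.

```python
# Hypothetical sketch of the "independent tasks" pattern: tiles are classified in
# parallel by a pool of workers (nodes or MPI ranks on a cluster; processes here).
from multiprocessing import Pool

def classify_tile(tile):
    # Stand-in for the real per-pixel work, e.g. evaluating a trained SVM on each pixel.
    tile_id, pixels = tile
    labels = [1 if sum(band for band in pixel) > 1.5 else 0 for pixel in pixels]
    return tile_id, labels

if __name__ == "__main__":
    # Fake "hyperspectral" tiles: a list of pixels, each a tuple of band values.
    tiles = [(i, [(0.1 * i, 0.5, 0.9)] * 4) for i in range(16)]
    with Pool(processes=4) as pool:
        results = pool.map(classify_tile, tiles)   # embarrassingly parallel map over tiles
    print(results[0])
```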

  4. Stampi: a message passing library for distributed parallel computing. User's guide, second edition

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Koide, Hiroshi; Takemiya, Hiroshi

    2000-02-01

    A new message passing library, Stampi, has been developed to enable computations spanning different kinds of parallel computers, using MPI (Message Passing Interface) as a single, uniform interface for communication. Stampi is based on the MPI-2 specification, and it realizes dynamic process creation on different machines and communication with the spawned processes within the scope of MPI semantics. The main features of Stampi are summarized as follows: (i) an automatic switch between external and internal communications, (ii) message routing/relaying with a routing module, (iii) dynamic process creation, (iv) support of two types of connection, Master/Slave and Client/Server, and (v) support of communication with Java applets. Vendors have typically implemented MPI libraries as closed systems confined to one parallel machine and have supported neither of the two functions needed here: process creation on, and communication with, external machines. Stampi supports both functions and thus enables distributed parallel computing. Currently Stampi has been implemented on COMPACS (COMplex PArallel Computer System) introduced at CCSE, comprising five parallel computers and one graphic workstation, and moreover on eight further kinds of parallel machines, fourteen systems in total. Stampi provides MPI communication functionality on all of them. This report mainly describes the usage of Stampi. (author)
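
    The dynamic process creation that Stampi follows is specified by MPI-2. The snippet below shows the standard MPI-2 style spawn using mpi4py rather than Stampi's own API; "worker.py" is a hypothetical script that obtains the parent intercommunicator with MPI.Comm.Get_parent() and answers the broadcast.

```python
# MPI-2 style dynamic process creation via mpi4py (illustrative; not Stampi's API).
# "worker.py" is a hypothetical script: it calls MPI.Comm.Get_parent(), receives the
# broadcast job description, and sends one result back through the gather.
import sys
from mpi4py import MPI

inter = MPI.COMM_SELF.Spawn(sys.executable, args=["worker.py"], maxprocs=4)
inter.bcast({"job": "demo", "steps": 100}, root=MPI.ROOT)   # parent side of the intercommunicator
results = inter.gather(None, root=MPI.ROOT)                 # one reply per spawned process
inter.Disconnect()
print(results)
```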

  5. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
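
    The underlying rule such tables are built on is 1/R_total = 1/R_1 + 1/R_2 + ...; the short sketch below, which is illustrative only and not taken from the article, computes parallel combinations and lists two-resistor pairs whose total resistance is a whole number.

```python
# Parallel-resistance rule and a search for whole-number two-resistor totals (illustrative).
def parallel(*resistors):
    return 1.0 / sum(1.0 / r for r in resistors)

print(parallel(3, 6))   # 2.0 ohms: 1/R = 1/3 + 1/6
whole = [(r1, r2, round(parallel(r1, r2)))
         for r1 in range(1, 25) for r2 in range(r1, 25)
         if abs(parallel(r1, r2) - round(parallel(r1, r2))) < 1e-9]
print(whole[:5])        # e.g. (2, 2, 1), (3, 6, 2), (4, 4, 2), (4, 12, 3), (5, 20, 4)
```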

  6. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of this research is tooling to support the development of parallel programs in C/C++. Methods and software that automate the process of designing parallel applications are proposed.

  7. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts...

  8. Validation of COG10 and ENDFB6R7 on the Auk Workstation for General Application to Highly Enriched Uranium Systems

    Energy Technology Data Exchange (ETDEWEB)

    Percher, Catherine G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-08-08

    The COG 10 code package [1] on the Auk workstation is now validated with the ENDFB6R7 neutron cross section library for general application to highly enriched uranium (HEU) systems, by comparison of the calculated k-effective with the expected k-effective of several relevant experimental benchmarks. This validation is supplemental to the installation and verification of COG 10 on the Auk workstation [2].

  9. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Interactions between parallel channels are examined. Results of experimental research on non-stationary flow regimes in three parallel vertical channels are presented, including an analysis of the phenomena and of the mechanisms of parallel-channel interaction under adiabatic conditions for single-phase fluid and two-phase mixture flow. (author)

  10. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  11. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1,024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  13. Fast parallel event reconstruction

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased 120,000 times, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...

  14. MC++: A parallel, portable, Monte Carlo neutron transport code in C++

    International Nuclear Information System (INIS)

    Lee, S.R.; Cummings, J.C.; Nolen, S.D.

    1997-01-01

    MC++ is an implicit multi-group Monte Carlo neutron transport code written in C++ and based on the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, SMPs, and clusters of UNIX workstations. MC++ is being developed to provide transport capabilities to the Accelerated Strategic Computing Initiative (ASCI). It is also intended to form the basis of the first transport physics framework (TPF), which is a C++ class library containing appropriate abstractions, objects, and methods for the particle transport problem. The transport problem is briefly described, as well as the current status and algorithms in MC++ for solving the transport equation. The alpha version of the POOMA class library is also discussed, along with the implementation of the transport solution algorithms using POOMA. Finally, a simple test problem is defined and performance and physics results from this problem are discussed on a variety of platforms

  15. A new workstation based man/machine interface system for the JT-60 Upgrade

    International Nuclear Information System (INIS)

    Yonekawa, I.; Shimono, M.; Totsuka, T.; Yamagishi, K.

    1992-01-01

    Development of a new man/machine interface system was stimulated by the requirements of making the JT-60 operator interface more 'friendly' on the basis of the past five-year operational experience. Eleven Sun/3 workstations and their supervisory mini-computer HIDIC V90/45 are connected through the standard network; Ethernet. The network is also connected to the existing 'ZENKEI' mini-computer system through the shared memory on the HIDIC V90/45 mini-computer. Improved software, such as automatic setting of the discharge conditions, consistency check among the related parameters and easy operation for discharge result data display, offered the 'user-friendly' environments. This new man/machine interface system leads to the efficient operation of the JT-60. (author)

  16. How to Protect Patients Digital Images/Thermograms Stored on a Local Workstation

    Directory of Open Access Journals (Sweden)

    J. Živčák

    2010-01-01

    Full Text Available To ensure the security and privacy of patient electronic medical information stored on local workstations in doctors' offices, clinic centers, etc., it is necessary to implement a secure and reliable method for logging on and accessing this information. Biometrically-based identification technologies use measurable personal properties (physiological or behavioral), such as a fingerprint, in order to identify or verify a person's identity, and provide the foundation for highly secure personal identification, verification and/or authentication solutions. The use of biometric devices (fingerprint readers) is an easy and secure way to log on to the system. We have performed practical tests on HP notebooks that have an integrated fingerprint reader. Successful and failed logons have been monitored and analyzed, and calculations have been made. This paper presents the false rejection rates, false acceptance rates and failure-to-acquire rates.
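
    The rates reported in such tests follow the standard definitions shown below; the attempt counts in this sketch are invented for illustration and are not the paper's data.

```python
# Standard biometric error-rate definitions; the counts below are invented, not the paper's data.
genuine_attempts, false_rejects = 500, 12     # legitimate users wrongly rejected
impostor_attempts, false_accepts = 300, 1     # impostors wrongly accepted
capture_attempts, failed_captures = 812, 9    # sensor could not acquire a usable print

frr = false_rejects / genuine_attempts        # False Rejection Rate
far = false_accepts / impostor_attempts       # False Acceptance Rate
fta = failed_captures / capture_attempts      # Failure To Acquire rate
print(f"FRR = {frr:.1%}, FAR = {far:.2%}, FTA = {fta:.1%}")
```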

  17. Semmelweis revisited: hand hygiene and nosocomial disease transmission in the anesthesia workstation.

    Science.gov (United States)

    Biddle, Chuck

    2009-06-01

    Hospital-acquired infections occur at an alarmingly high frequency, possibly affecting as many as 1 in 10 patients, resulting in a staggering morbidity and an annual mortality of many tens of thousands of patients. Appropriate hand hygiene is highly effective and represents the simplest approach that we have to preventing nosocomial infections. The Agency for Healthcare Research and Quality has targeted hand-washing compliance as a top research agenda item for patient safety. Recent research has identified inadequate hand washing and contaminated anesthesia workstation issues as likely contributors to nosocomial infections, finding aseptic practices highly variable among providers. It is vital that all healthcare providers, including anesthesia providers, appreciate the role of inadequate hand hygiene in nosocomial infection and meticulously follow the mandates of the American Association of Nurse Anesthetists and other professional healthcare organizations.

  18. Development of Neutron and Photon Shielding Calculation System for Workstation (NPSS-W)

    International Nuclear Information System (INIS)

    Shimizu, Yoshio; Nojiri, Ichiro; Odajima, Akira; Sasaki, Toshihisa; Kurosawa, Naohiro

    1998-01-01

    In plant designs and safety evaluations of nuclear fuel cycle facilities, it is important to evaluate reasonably the direct radiation and the skyshine (air-scattered photon radiation) from facilities. The Neutron and Photon Shielding Calculation System for Workstation (NPSS-W) was developed for this purpose. The NPSS-W can carry out shielding calculations for photons and neutrons easily and rapidly. It can easily calculate the radiation source intensity with ORIGEN-S and the dose equivalent rate with the SN transport codes ANISN and DOT3.5. The NPSS-W consists of five modules, named CAL1 through CAL5. Several kinds of shielding configurations can be calculated. The user's manual of NPSS-W, examples of calculations for each module and the output data are appended. (author)

  19. Using a Cray Y-MP as an array processor for a RISC Workstation

    Science.gov (United States)

    Lamaster, Hugh; Rogallo, Sarah J.

    1992-01-01

    As microprocessors increase in power, the economics of centralized computing has changed dramatically. At the beginning of the 1980's, mainframes and supercomputers were often considered to be cost-effective machines for scalar computing. Today, microprocessor-based RISC (reduced-instruction-set computer) systems have displaced many uses of mainframes and supercomputers. Supercomputers are still cost competitive when processing jobs that require both large memory size and high memory bandwidth. One such application is array processing. Certain numerical operations are appropriate to use in a Remote Procedure Call (RPC)-based environment. Matrix multiplication is an example of an operation that can have a sufficient number of arithmetic operations to amortize the cost of an RPC call. An experiment is described which demonstrates that matrix multiplication can be executed remotely on a large system to speed up execution relative to that experienced on a workstation.
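
    A back-of-envelope version of the amortization argument: offloading an n x n matrix multiply pays off once the 2n^3 arithmetic operations dominate the fixed RPC overhead and the data transfer. The rates, latency and bandwidth below are illustrative assumptions, not measurements from the paper.

```python
# Back-of-envelope amortization model; all rates and overheads are illustrative assumptions.
def local_time(n, local_mflops=10.0):                      # assumed workstation rate
    return 2.0 * n**3 / (local_mflops * 1e6)

def remote_time(n, remote_mflops=2000.0, rpc_overhead=0.05, transfer_mb_per_s=1.0):
    transfer = 3 * n * n * 8 / (transfer_mb_per_s * 1e6)   # ship A, B and the result (8-byte words)
    return rpc_overhead + transfer + 2.0 * n**3 / (remote_mflops * 1e6)

for n in (64, 256, 1024):
    print(f"n={n:5d}  local {local_time(n):8.2f} s  remote {remote_time(n):8.2f} s")
# Small matrices lose to the RPC/transfer overhead; large ones amortize it easily.
```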

  20. Development of a data acquisition system using a RISC/UNIX(TM) workstation

    International Nuclear Information System (INIS)

    Takeuchi, Y.; Tanimori, T.; Yasu, Y.

    1993-01-01

    We have developed a compact data acquisition system on RISC/UNIX workstations. A SUN(TM) SPARCstation(TM) IPC was used, in which an extension bus, SBus(TM), was linked to a VMEbus. The transfer rate achieved was better than 7 Mbyte/s between the VMEbus and the SUN. A device driver for CAMAC was developed in order to realize an interrupt capability in UNIX. In addition, list processing has been incorporated in order to keep the data handling process at high priority in UNIX. The successful development of both the device driver and the list processing has made it possible to realize good real-time behaviour on the RISC/UNIX system. Based on this architecture, a portable and versatile data taking system has been developed, which consists of a graphical user interface, I/O handler, user analysis process, process manager and a CAMAC device driver. (orig.)

  1. Can We Afford These Affordances? GarageBand and the Double-Edged Sword of the Digital Audio Workstation

    Science.gov (United States)

    Bell, Adam Patrick

    2015-01-01

    The proliferation of computers, tablets, and smartphones has resulted in digital audio workstations (DAWs) such as GarageBand in being some of the most widely distributed musical instruments. Positing that software designers are dictating the music education of DAW-dependent music-makers, I examine the fallacy that music-making applications such…

  2. Time synchronization algorithm of distributed system based on server time-revise and workstation self-adjust

    International Nuclear Information System (INIS)

    Zhou Shumin; Sun Yamin; Tang Bin

    2007-01-01

    In order to enhance the time synchronization quality of the distributed system, a time synchronization algorithm of distributed system based on server time-revise and workstation self-adjust is proposed. The time-revise cycle and self-adjust process is introduced in the paper. The algorithm reduces network flow effectively and enhances the quality of clock-synchronization. (authors)
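
    The abstract gives few algorithmic details; the sketch below only illustrates the two ideas named in the title, a server-supplied time revision and a gradual workstation self-adjustment, using a generic Cristian-style offset estimate. It is not the authors' algorithm.

```python
# Generic sketch (not the authors' algorithm): estimate the server/workstation offset,
# then remove it gradually ("self-adjust") instead of stepping the clock at once.
import time

def query_server_time():
    # Stand-in for a network request; here the "server" is simply 0.8 s ahead.
    t_send = time.time()
    server_now = time.time() + 0.8
    t_recv = time.time()
    return server_now + (t_recv - t_send) / 2.0   # Cristian-style delay compensation

offset = query_server_time() - time.time()        # server time-revise: estimated clock error
slew_steps = 10
for step in range(slew_steps):                     # workstation self-adjust: slew, don't jump
    correction = offset / slew_steps
    print(f"step {step}: nudging local clock by {correction:+.3f} s")
    time.sleep(0.01)                               # a real system would adjust the clock here
```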

  3. Desk-based workers' perspectives on using sit-stand workstations: a qualitative analysis of the Stand@Work study

    NARCIS (Netherlands)

    Chau, J.Y.; Daley, M.; Srinivasan, A.; Dunn, S.; Bauman, A.E.; van der Ploeg, H.P.

    2014-01-01

    Background: Prolonged sitting time has been identified as a health risk factor. Sit-stand workstations allow desk workers to alternate between sitting and standing throughout the working day, but not much is known about their acceptability and feasibility. Hence, the aim of this study was to

  4. Effect of Active Workstation on Energy Expenditure and Job Performance: A Systematic Review and Meta-analysis.

    Science.gov (United States)

    Cao, Chunmei; Liu, Yu; Zhu, Weimo; Ma, Jiangjun

    2016-05-01

    Recently developed active workstations could become a potential means for worksite physical activity and wellness promotion. The aim of this review was to quantitatively examine the effectiveness of active workstations for energy expenditure (EE) and job performance. The literature search was conducted in 6 databases (PubMed, SPORTDiscus, Web of Science, ProQuest, ScienceDirect, and Scopus) for articles published up to February 2014, from which a systematic review and meta-analysis was conducted. The cumulative analysis for EE showed a significant increase in EE when using an active workstation [mean effect size (MES): 1.47; 95% confidence interval (CI): 1.22 to 1.72]. The cumulative analysis for job performance indicated 2 findings: (1) the active workstation did not affect selective attention, processing speed, speech quality, reading comprehension, interpretation or accuracy of transcription; and (2) it could decrease typing speed (MES: -0.55; CI: -0.88 to -0.21). Although some indices of job performance were significantly lower, others were not. As a result, there would be little effect on real-life work productivity given a good arrangement of job tasks.

  5. Feasibility of an Integrated Expert Video Authoring Workstation for Low-Cost Teacher Produced CBI. SBIR Phase I: Final Report.

    Science.gov (United States)

    IntelliSys, Inc., Syracuse, NY.

    This was Phase I of a three-phased project. This phase of the project investigated the feasibility of a computer-based instruction (CBI) workstation, designed for use by teachers of handicapped students within a school structure. This station is to have as a major feature the ability to produce in-house full-motion video using one of the…

  6. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  7. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  8. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. Thermal load at workstations in the underground coal mining: Results of research carried out in 6 coal mines

    Directory of Open Access Journals (Sweden)

    Krzysztof Słota

    2016-08-01

    Full Text Available Background: Statistics show that almost half of Polish coal extraction in underground mines takes place at workstations where the temperature exceeds 28°C. The number of employees working in such conditions is gradually increasing, so the problem of safety and health protection keeps growing. Material and Methods: In the present study we assessed the heat load of employees at different workstations in the mining industry, taking into account current thermal conditions and the energy cost of work. The evaluation of the energy cost of work was carried out in 6 coal mines. A total of 221 miners employed at different workstations were assessed. Individual groups of miners were characterized, and the thermal safety of the miners was assessed relying on a thermal discomfort index. Results: The results of this study indicate considerable differences in the durations of the analyzed work processes at individual workstations. The highest average energy cost was noted during work performed at the coal face; the lowest value was found for the auxiliary staff. The calculated discomfort index clearly indicated numerous situations in which the thermal load exceeded the limits admissible for human health. It should be noted that the values of average labor cost fall within the upper, albeit admissible, limits of thermal load. Conclusions: The results of the study indicate that in some cases work in mining is performed in conditions of thermal discomfort. Due to the high variability and complexity of work conditions it becomes necessary to verify the workers' load at different workstations, which largely depends on the environmental conditions and work organization, as well as on the performance of the workers themselves. Med Pr 2016;67(4):477–498

  10. Intraoperative non-record-keeping usage of anesthesia information management system workstations and associated hemodynamic variability and aberrancies.

    Science.gov (United States)

    Wax, David B; Lin, Hung-Mo; Reich, David L

    2012-12-01

    Anesthesia information management system workstations in the anesthesia workspace that allow usage of non-record-keeping applications could lead to distraction from patient care. We evaluated whether non-record-keeping usage of the computer workstation was associated with hemodynamic variability and aberrancies. Auditing data were collected on eight anesthesia information management system workstations and linked to their corresponding electronic anesthesia records to identify which application was active at any given time during the case. For each case, the periods spent using the anesthesia information management system record-keeping module were separated from those spent using non-record-keeping applications. The variability of heart rate and blood pressure were also calculated, as were the incidence of hypotension, hypertension, and tachycardia. Analysis was performed to identify whether non-record-keeping activity was a significant predictor of these hemodynamic outcomes. Data were analyzed for 1,061 cases performed by 171 clinicians. Median (interquartile range) non-record-keeping activity time was 14 (1, 38) min, representing 16 (3, 33)% of a median 80 (39, 143) min of procedure time. Variables associated with greater non-record-keeping activity included attending anesthesiologists working unassisted, longer case duration, lower American Society of Anesthesiologists status, and general anesthesia. Overall, there was no independent association between non-record-keeping workstation use and hemodynamic variability or aberrancies during anesthesia either between cases or within cases. Anesthesia providers spent sizable portions of case time performing non-record-keeping applications on anesthesia information management system workstations. This use, however, was not independently associated with greater hemodynamic variability or aberrancies in patients during maintenance of general anesthesia for predominantly general surgical and gynecologic procedures.

  11. [From data entry to data presentation at a clinical workstation--experiences with Anesthesia Information Management Systems (AIMS)].

    Science.gov (United States)

    Benson, M; Junger, A; Quinzio, L; Michel, A; Sciuk, G; Fuchs, C; Marquardt, K; Hempelmannn, G

    2000-09-01

    Anesthesia Information Management Systems (AIMS) are required to supply large amounts of data for various purposes such as performance recording, quality assurance, training, operating room management and research. It was our objective to establish an AIMS that enables every member of the department to independently access queries at his/her workstation and at the same time allows the presentation of data in a suitable manner in order to increase the transfer of different information to the clinical workstation. Apple Macintosh clients (Apple Computer, Inc., Cupertino, California) and the file and database servers were installed into the already partially existing hospital network. The most important components installed on each computer are the anesthesia documenting software NarkoData (ProLogic GmbH, Erkrath), HIS client software and an HTML browser. More than 250 queries for easy evaluation were formulated with the software Voyant (Brossco Systems, Espoo, Finland); together with the documentation they form the evaluation module of the AIMS. Today, more than 20,000 anesthesia procedures are recorded each year at 112 decentralised workstations with the AIMS. In 1998, 90.8% of the 20,383 anesthetic procedures performed were recorded online and 9.2% were entered postoperatively into the system. With corresponding user access it is possible to retrieve all available patient data (diagnoses, laboratory results) at each anesthesiological workstation via the HIS at any time. The available information also includes previous anesthesia records, statistics and all data available from the hospital's intranet. This additional information is a great advantage compared to previous working conditions. The implementation of an AIMS made it possible to greatly enhance not only the quantity but also the quality of documentation and the flow of information at the anesthesia workstation. The circuit between data entry and the presentation and evaluation of data, statistics and results directly

  12. Space-charge-dominated beam dynamics simulations using the massively parallel processors (MPPs) of the Cray T3D

    International Nuclear Information System (INIS)

    Liu, H.

    1996-01-01

    Computer simulations using the multi-particle code PARMELA with a three-dimensional point-by-point space charge algorithm have turned out to be very helpful in supporting injector commissioning and operations at the Thomas Jefferson National Accelerator Facility (Jefferson Lab, formerly called CEBAF). However, this algorithm, which defines a typical N^2 problem in CPU time scaling, is very time-consuming when N, the number of macro-particles, is large. Therefore, it is attractive to use massively parallel processors (MPPs) to speed up the simulations. Motivated by this, the authors modified the space charge subroutine to use the MPPs of the Cray T3D. The techniques used to parallelize and optimize the code on the T3D are discussed in this paper. The performance of the code on the T3D is examined in comparison with the Cray C90 parallel vector processing supercomputer and an HP 735/15 high-end workstation
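
    The N^2 scaling and the parallel decomposition can be illustrated with a toy pairwise loop in which each process computes the interactions for its own slice of macro-particles against all N particles; the schematic 1/r^2 kernel below is not PARMELA's space-charge physics.

```python
# Toy illustration of the N^2 point-by-point interaction and its decomposition over
# processes: each worker handles the forces on its slice of macro-particles against
# all N particles. The kernel is a schematic 1/r^2 stand-in, not PARMELA's physics.
from multiprocessing import Pool
import random

random.seed(0)
particles = [(random.random(), random.random(), random.random()) for _ in range(400)]

def forces_on_slice(bounds):
    lo, hi = bounds
    out = []
    for i in range(lo, hi):
        fx = fy = fz = 0.0
        xi, yi, zi = particles[i]
        for j, (xj, yj, zj) in enumerate(particles):   # inner loop over all N particles
            if i == j:
                continue
            dx, dy, dz = xi - xj, yi - yj, zi - zj
            r3 = (dx * dx + dy * dy + dz * dz + 1e-9) ** 1.5
            fx += dx / r3; fy += dy / r3; fz += dz / r3
        out.append((fx, fy, fz))
    return out

if __name__ == "__main__":
    n, workers = len(particles), 4
    chunk = n // workers
    slices = [(k * chunk, n if k == workers - 1 else (k + 1) * chunk) for k in range(workers)]
    with Pool(workers) as pool:
        results = pool.map(forces_on_slice, slices)    # roughly N^2 / P work per process
    print(sum(len(r) for r in results), "force vectors computed")
```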

  13. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with the effort invested. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  14. Parallelization of the MAAP-A code neutronics/thermal hydraulics coupling

    International Nuclear Information System (INIS)

    Froehle, P.H.; Wei, T.Y.C.; Weber, D.P.; Henry, R.E.

    1998-01-01

    A major new feature, one-dimensional space-time kinetics, has been added to a developmental version of the MAAP code through the introduction of the DIF3D-K module. This code is referred to as MAAP-A. To reduce the overall job time required, a capability has been provided to run the MAAP-A code in parallel. The parallel version of MAAP-A utilizes two machines running in parallel, with the DIF3D-K module executing on one machine and the rest of the MAAP-A code executing on the other machine. Timing results obtained during the development of the capability indicate that reductions in time of 30--40% are possible. The parallel version can be run on two SPARC 20 (SUN OS 5.5) workstations connected through the ethernet. MPI (Message Passing Interface standard) needs to be implemented on the machines. If necessary the parallel version can also be run on only one machine. The results obtained running in this one-machine mode identically match the results obtained from the serial version of the code
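
    A minimal sketch of the two-process coupling pattern described above, written with mpi4py: one rank stands in for the main MAAP-A process and the other for the DIF3D-K kinetics module, exchanging coupling data once per time step. The exchanged quantities, values and script name are invented for illustration.

```python
# Two-process coupling sketch (illustrative, not the MAAP-A interface).
# Run with: mpiexec -n 2 python coupled_step.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

for step in range(5):
    if rank == 0:
        # Stand-in for the main thermal-hydraulics process: send feedback, receive power.
        comm.send({"step": step, "coolant_temp": 550.0 + step}, dest=1, tag=step)
        power = comm.recv(source=1, tag=step)
        print(f"step {step}: received power {power:.1f}")
    else:
        # Stand-in for the kinetics module: receive feedback, return an updated power level.
        feedback = comm.recv(source=0, tag=step)
        comm.send(3000.0 + 10.0 * feedback["step"], dest=0, tag=step)
```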

  15. GPU: the biggest key processor for AI and parallel processing

    Science.gov (United States)

    Baji, Toru

    2017-07-01

    Two types of processors exist in the market. One is the conventional CPU and the other is the Graphics Processing Unit (GPU). A typical CPU is composed of 1 to 8 cores, while a GPU has thousands of cores. The CPU is good for sequential processing, while the GPU is good at accelerating software with heavy parallel execution. GPUs were initially dedicated to 3D graphics. However, from 2006, when GPUs started to include general-purpose cores, it was recognized that this architecture can be used as a general-purpose massively parallel processor. NVIDIA developed the software framework Compute Unified Device Architecture (CUDA), which makes it possible to easily program the GPU for these applications. With CUDA, GPUs started to be used widely in workstations and supercomputers. Recently, two key technologies have been highlighted in the industry: Artificial Intelligence (AI) and autonomous driving cars. AI requires massively parallel operations to train many-layer neural networks. With the CPU alone, it was impossible to finish the training in a practical time. The latest multi-GPU system with the P100 makes it possible to finish the training in a few hours. For autonomous driving cars, TOPS-class performance is required to implement perception, localization and path-planning processing, and again an SoC with an integrated GPU will play a key role there. In this paper, the evolution of the GPU, which is one of the biggest commercial devices requiring state-of-the-art fabrication technology, will be introduced, together with an overview of the GPU-demanding key applications described above.

  16. Parallelization of ultrasonic field simulations for non destructive testing

    International Nuclear Information System (INIS)

    Lambert, Jason

    2015-01-01

    The Non Destructive Testing field increasingly uses simulation. It is used at every step of the whole control process of an industrial part, from speeding up control development to helping experts understand results. During this thesis, a simulation tool dedicated to the fast computation of the ultrasonic field radiated by a phased array probe into an isotropic specimen has been developed. Its performance enables interactive usage. To benefit from commonly available parallel architectures, a regular model (aimed at removing divergent branching) derived from the generic CIVA model has been developed. First, a reference implementation was developed to validate this model against CIVA results and to analyze its performance behaviour before optimization. The resulting code has been optimized for three kinds of parallel architectures commonly available in workstations: general purpose processors (GPP), many-core co-processors (Intel MIC) and graphics processing units (nVidia GPU). On the GPP and the MIC, the algorithm was reorganized and implemented to benefit from both parallelism levels, multithreading and vector instructions. On the GPU, the multiple steps of the field computation were divided into multiple successive CUDA kernels. Moreover, libraries dedicated to each architecture were used to speed up the Fast Fourier Transforms: Intel MKL on GPP and MIC, and nVidia cuFFT on GPU. Performance and hardware adequacy of the produced codes were thoroughly studied for each architecture. On multiple realistic control configurations, interactive performance was reached. Perspectives to address more complex configurations are drawn. Finally, the integration and industrialization of this code in the commercial NDT platform CIVA are discussed. (author) [fr]

  17. High-speed parallel implementation of a modified PBR algorithm on DSP-based EH topology

    Science.gov (United States)

    Rajan, K.; Patnaik, L. M.; Ramakrishna, J.

    1997-08-01

    Algebraic Reconstruction Technique (ART) is an age-old method used for solving the problem of three-dimensional (3-D) reconstruction from projections in electron microscopy and radiology. In medical applications, direct 3-D reconstruction is at the forefront of investigation. The simultaneous iterative reconstruction technique (SIRT) is an ART-type algorithm with the potential of generating in a few iterations tomographic images of a quality comparable to that of convolution backprojection (CBP) methods. Pixel-based reconstruction (PBR) is similar to SIRT reconstruction, and it has been shown that PBR algorithms give better quality pictures than those produced by SIRT algorithms. In this work, we propose a few modifications to the PBR algorithms. The modified algorithms are shown to give better quality pictures than PBR algorithms. The PBR algorithm and the modified PBR algorithms are highly compute-intensive. Not many attempts have been made to reconstruct objects in the true 3-D sense because of the high computational overhead. In this study, we have developed parallel two-dimensional (2-D) and 3-D reconstruction algorithms based on modified PBR. We attempt to solve the two problems encountered by the PBR and modified PBR algorithms, i.e., the long computational time and the large memory requirements, by parallelizing the algorithm on a multiprocessor system. We investigate the possible task and data partitioning schemes by exploiting the potential parallelism in the PBR algorithm, subject to minimizing the memory requirement. We have implemented an extended hypercube (EH) architecture for the high-speed execution of the 3-D reconstruction algorithm, using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs) and dual-port random access memories (DPR) as channels between the PEs. We discuss and compare the performances of the PBR algorithm on an IBM 6000 RISC workstation, on a Silicon
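
    Since PBR is described as SIRT-like, the toy iteration below shows the general simultaneous-update structure that makes these reconstructions so compute-intensive; the paper's PBR and modified PBR updates differ in detail.

```python
# Minimal SIRT-style iteration on a tiny toy system (illustrative only; the paper's
# PBR / modified-PBR updates differ in detail).
import numpy as np

A = np.array([[1.0, 1.0, 0.0],        # each row: one ray's weights over 3 pixels
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true                        # "measured" projections

x = np.zeros(3)
row_norms = (A * A).sum(axis=1)       # per-ray normalization
for _ in range(200):
    residual = b - A @ x
    x += 0.5 * A.T @ (residual / row_norms)   # relaxed simultaneous update over all rays
print(np.round(x, 3))                 # approaches [1. 2. 3.]
```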

  18. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes the work of an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and it should be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.

  19. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in case of nucleobases Y and Z a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one...... of the phosphate groups in Watson-Crick strand. Also, it was shown that the nucleobase Y made a good stacking and binding with the other nucleobases in the TFO and Watson-Crick duplex, respectively. In contrast, the nucleobase Z with LNA moiety was forced to twist out of plane of Watson-Crick base pair which......The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When, the anti...

  20. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
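
    A toy version of the consensus step: the class-probability outputs of several stage networks, each fed a different transform of the input, are weighted and summed to make the final decision. The stage outputs and weights below are invented; the PCNN obtains its weights by optimization on training data.

```python
# Toy consensus step for a PCNN-style classifier; all numbers are invented.
import numpy as np

# Class-probability outputs of 3 stage networks for one sample (4 classes).
stage_outputs = np.array([[0.10, 0.60, 0.20, 0.10],
                          [0.05, 0.55, 0.30, 0.10],
                          [0.40, 0.30, 0.20, 0.10]])
stage_accuracy = np.array([0.85, 0.80, 0.60])      # assumed reliability of each stage

weights = stage_accuracy / stage_accuracy.sum()    # normalize to a convex combination
consensus = weights @ stage_outputs                # weighted sum over stages
print("consensus probabilities:", np.round(consensus, 3))
print("decision: class", int(consensus.argmax()))
```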

  1. A Parallel Particle Swarm Optimizer

    National Research Council Canada - National Science Library

    Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D

    2003-01-01

    .... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...

  2. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming. Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  3. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...... adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas...

  4. Algorithms for parallel flow solvers on message passing architectures

    Science.gov (United States)

    Vanderwijngaart, Rob F.

    1995-01-01

    The purpose of this project has been to identify and test suitable technologies for implementation of fluid flow solvers -- possibly coupled with structures and heat equation solvers -- on MIMD parallel computers. In the course of this investigation much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer. A coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning every processor receives responsibility for exactly one block of grid points instead of several. This necessitates fine-grain pipelined program execution in order to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique that has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for the concentration on improving the performance of pipeline methods is their applicability in other types of flow solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines. From these it can be determined what is the optimal first-processor retardation that leads to the shortest total completion time for the pipeline process. Theoretical predictions of pipeline performance with and without optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of uncareful grid partitioning in flow solvers that employ pipeline algorithms. If grid blocks at boundaries are not at least as large in the wall-normal direction as those

  5. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel import is a pressing one today. Legalizing parallel import in Russia is expedient; this position is based on an analysis of opposing expert opinions. At the same time, the negative consequences of such a decision must be considered and measures applied to minimize them.

  6. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  7. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  8. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  9. Preliminary study of diagnostic workstation with different matrix size for detection of small pulmonary nodules

    International Nuclear Information System (INIS)

    Wu Jie; Wang Xuejian; Wang Bo; Tong Juan; Wei Yuqing; Shen Guiquan; Wang Limei; Cao Jun; Sui He

    2004-01-01

    Objective: To assess the influence of display matrix size on the detection of small pulmonary nodules (SPNs) on soft-copy images. Methods: Seventy-six chest computed radiographs were selected for the study. Of the 76 test images, 36 contained pulmonary nodules smaller than 20 mm in diameter proven by CT, which were further divided into two groups: 1.0-2.0 cm and <1.0 cm in diameter. ROC analysis was used to compare the Az values and standard errors of the three viewing systems for individual observers. Results: For the 1.0-2.0 cm group, the mean Az values were 0.7936 for DRCS with 2-fold magnification and window technique, 0.8225 for the 1 K monitor with 2-fold magnification and window technique, and 0.8367 for the 2 K monitor without magnification; for the <1.0 cm group, the Az values increased slightly as the display matrix size improved, but there were no significant differences among the three sets in the detection of SPNs in the ROC analyses. Conclusion: It is acceptable to detect small pulmonary nodules of 1.0-2.0 cm in diameter on a 1 K monitor or DRCS with magnification. A high-resolution diagnostic workstation is recommended for detecting small pulmonary nodules <1.0 cm in diameter. Reasonable equipment for the detection of subtle abnormalities may result in better cost-efficacy and diagnostic accuracy.

  10. Using sit-stand workstations to decrease sedentary time in office workers: a randomized crossover trial.

    Science.gov (United States)

    Dutta, Nirjhar; Koepp, Gabriel A; Stovitz, Steven D; Levine, James A; Pereira, Mark A

    2014-06-25

    This study was conducted to determine whether installation of sit-stand desks (SSDs) could lead to decreased sitting time during the workday among sedentary office workers. A randomized cross-over trial was conducted from January to April, 2012 at a business in Minneapolis. 28 (nine men, 26 full-time) sedentary office workers took part in a 4 week intervention period which included the use of SSDs to gradually replace 50% of sitting time with standing during the workday. Physical activity was the primary outcome. Mood, energy level, fatigue, appetite, dietary intake, and productivity were explored as secondary outcomes. The intervention reduced sitting time at work by 21% (95% CI 18%-25%) and sedentary time by 4.8 min/work-hr (95% CI 4.1-5.4 min/work-hr). For a 40 h work-week, this translates into replacement of 8 h of sitting time with standing and sedentary time being reduced by 3.2 h. Activity level during non-work hours did not change. The intervention also increased overall sense of well-being, energy, decreased fatigue, had no impact on productivity, and reduced appetite and dietary intake. The workstations were popular with the participants. The SSD intervention was successful in increasing work-time activity level, without changing activity level during non-work hours.

  11. Using Sit-Stand Workstations to Decrease Sedentary Time in Office Workers: A Randomized Crossover Trial

    Directory of Open Access Journals (Sweden)

    Nirjhar Dutta

    2014-06-01

    Full Text Available Objective: This study was conducted to determine whether installation of sit-stand desks (SSDs) could lead to decreased sitting time during the workday among sedentary office workers. Methods: A randomized cross-over trial was conducted from January to April, 2012 at a business in Minneapolis. 28 (nine men, 26 full-time) sedentary office workers took part in a 4 week intervention period which included the use of SSDs to gradually replace 50% of sitting time with standing during the workday. Physical activity was the primary outcome. Mood, energy level, fatigue, appetite, dietary intake, and productivity were explored as secondary outcomes. Results: The intervention reduced sitting time at work by 21% (95% CI 18%–25%) and sedentary time by 4.8 min/work-hr (95% CI 4.1–5.4 min/work-hr). For a 40 h work-week, this translates into replacement of 8 h of sitting time with standing and sedentary time being reduced by 3.2 h. Activity level during non-work hours did not change. The intervention also increased overall sense of well-being, energy, decreased fatigue, had no impact on productivity, and reduced appetite and dietary intake. The workstations were popular with the participants. Conclusion: The SSD intervention was successful in increasing work-time activity level, without changing activity level during non-work hours.

  12. Effect of Standing or Walking at a Workstation on Cognitive Function: A Randomized Counterbalanced Trial.

    Science.gov (United States)

    Bantoft, Christina; Summers, Mathew J; Tranent, Peter J; Palmer, Matthew A; Cooley, P Dean; Pedersen, Scott J

    2016-02-01

    In the present study, we examined the effect of working while seated, while standing, or while walking on measures of short-term memory, working memory, selective and sustained attention, and information-processing speed. The advent of computer-based technology has revolutionized the adult workplace, such that average adult full-time employees spend the majority of their working day seated. Prolonged sitting is associated with increasing obesity and chronic health conditions in children and adults. One possible intervention to reduce the negative health impacts of the modern office environment involves modifying the workplace to increase incidental activity and exercise during the workday. Although modifications, such as sit-stand desks, have been shown to improve physiological function, there is mixed information regarding the impact of such office modification on individual cognitive performance and thereby the efficiency of the work environment. In a fully counterbalanced randomized control trial, we assessed the cognitive performance of 45 undergraduate students for up to a 1-hr period in each condition. The results indicate that there is no significant change in the measures used to assess cognitive performance associated with working while seated, while standing, or while walking at low intensity. These results indicate that cognitive performance is not degraded with short-term use of alternate workstations. © 2015, Human Factors and Ergonomics Society.

  13. PWR [pressurized water reactor] optimal reload configuration with an intelligent workstation

    International Nuclear Information System (INIS)

    Greek, K.J.; Robinson, A.H.

    1990-01-01

    In a previous paper, the implementation of a pressurized water reactor (PWR) refueling expert system that combined object-oriented programming in Smalltalk and a FORTRAN power calculation to evaluate loading patterns was discussed. The expert system applies heuristics and constraints that lead the search toward an optimal configuration. Its rate of improvement depends on the expertise coded for a search and the loading pattern from which the search begins. Due to its complexity, however, the solution normally cannot be obtained by a rule-based expert system alone. A knowledge base may take years of development before final acceptance. Also, the human pattern-matching capability to view a two-dimensional power profile, recognize an imbalance, and select an appropriate response has not yet been surpassed by a rule-based system. The user should be given the ability to take control of the search if he believes the solution needs a new direction, and should be able to configure a loading pattern and resume the search. This paper introduces the workstation features of Shuffle that help the user manipulate the configuration and retain a record of the solution.

  14. Workstation environment supports for startup of YGN 3 and 4 nuclear unit

    International Nuclear Information System (INIS)

    Lee, Won Jae; Kim, Won Bong; Lee, Byung Chae

    1995-07-01

    The light water reactor fuel development division of the Korea Atomic Energy Research Institute participated in the installation of the plant computer system and software, and in the user support activities of Asea Brown Boveri/Combustion Engineering for the Plant Monitoring System, during the startup phase of the YGN-3 nuclear unit. The main purpose of the participation was to acquire self-reliant plant-computer technology for the independent design and startup of subsequent nuclear units. This report describes the activities performed by KAERI with ABB/CE at the plant site. In addition, it describes the direct transfer of data files between the PMS and a workstation, which was carried out independently by KAERI. Since KAERI must support the site in setting up the plant computer environment independently of ABB-CE from the next nuclear units onward, the technical details of the activities provided to the site were reviewed in order to provide a better computing environment for the next nuclear units. In conclusion, this report is expected to provide the technical background for supporting the plant computing environment and the scope of plant-computer support work at the plant site during the Yonggwang 3 and 4 startup for the next nuclear units. 6 refs. (Author)

  15. Optimizing 10-Gigabit Ethernet for Networks of Workstations, Clusters, and Grids: A Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Wu-chun

    2003-10-13

    This paper presents a case study of the 10-Gigabit Ethernet (10GbE) adapter from Intel®. Specifically, with appropriate optimizations to the configurations of the 10GbE adapter and TCP, we demonstrate that the 10GbE adapter can perform well in local-area, storage-area, system-area, and wide-area networks. For local-area, storage-area, and system-area networks in support of networks of workstations, network-attached storage, and clusters, respectively, we can achieve over 7-Gb/s end-to-end throughput and 12-µs end-to-end latency between applications running on Linux-based PCs. For the wide-area network in support of grids, we broke the recently-set Internet2 Land Speed Record by 2.5 times by sustaining an end-to-end TCP/IP throughput of 2.38 Gb/s between Sunnyvale, California and Geneva, Switzerland (i.e., 10,037 kilometers) to move over a terabyte of data in less than an hour. Thus, the above results indicate that 10GbE may be a cost-effective solution across a multitude of computing environments.

  16. Integration of model-based control systems with artificial intelligence and workstations

    International Nuclear Information System (INIS)

    Lee, M.; Clearwater, S.

    1987-01-01

    Experience with model-based accelerator control started at SPEAR. Since that time nearly all accelerator beam lines have been controlled using model-based application programs, for example, PEP and SLC at SLAC. In order to take advantage of state-of-the-art hardware and software technology, the design and implementation of the accelerator control programs have undergone radical changes with time. Consequently, SPEAR, PEP, and SLC all use different control programs. Since many of these application programs are embedded deep into the control system, they had to be rewritten each time. Each time this rewriting has occurred a great deal of time and effort has been spent on training physicists and programmers to do the job. Now, these application programs have been developed for a fourth time. This time, however, the programs being developed are generic so that they will not have to be done again. An integrated system called GOLD (Generic Orbit and Lattice Debugger) has been developed for debugging and correcting trajectory errors in accelerator lattices. The system consists of a lattice modeling program (COMFORT), a beam simulator (PLUS), a graphical workstation environment (micro-VAX) and an expert system (ABLE). This paper will describe some of the features and applications of our integrated system with emphasis on the automation offered by expert systems. 5 refs., 4 figs

  17. Feasibility evaluation of 3 automated cellular drug screening assays on a robotic workstation.

    Science.gov (United States)

    Soikkeli, Anne; Sempio, Cristina; Kaukonen, Ann Marie; Urtti, Arto; Hirvonen, Jouni; Yliperttula, Marjo

    2010-01-01

    This study presents the implementation and optimization of 3 cell-based assays on a TECAN Genesis workstation-the Caspase-Glo 3/7 and sulforhodamine B (SRB) screening assays and the mechanistic Caco-2 permeability protocol-and evaluates their feasibility for automation. During implementation, the dispensing speed to add drug solutions and fixative trichloroacetic acid and the aspiration speed to remove the supernatant immediately after fixation were optimized. Decontamination steps for cleaning the tips and pipetting tubing were also added. The automated Caspase-Glo 3/7 screen was successfully optimized with Caco-2 cells (Z' 0.7, signal-to-base ratio [S/B] 1.7) but not with DU-145 cells. In contrast, the automated SRB screen was successfully optimized with the DU-145 cells (Z' 0.8, S/B 2.4) but not with the Caco-2 cells (Z' -0.8, S/B 1.4). The automated bidirectional Caco-2 permeability experiments separated successfully low- and high-permeability compounds (Z' 0.8, S/B 84.2) and passive drug permeation from efflux-mediated transport (Z' 0.5, S/B 8.6). Of the assays, the homogeneous Caspase-Glo 3/7 assay benefits the most from automation, but also the heterogeneous SRB assay and Caco-2 permeability experiments gain advantages from automation.
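
    For reference, the Z' factor and signal-to-base ratio (S/B) quoted above are commonly computed as in the snippet below (Zhang's Z' definition); the control readings are made-up numbers, not data from the study.

```python
# Common definitions of the screening statistics quoted above (Z' factor and
# signal-to-base ratio), computed on made-up control-well data.
import statistics as stats

pos = [980, 1010, 1050, 995, 1020]   # hypothetical positive-control readings
neg = [110, 120, 105, 115, 118]      # hypothetical negative-control readings

mu_p, sd_p = stats.mean(pos), stats.stdev(pos)
mu_n, sd_n = stats.mean(neg), stats.stdev(neg)

z_prime = 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)   # Z' factor
s_over_b = mu_p / mu_n                                # signal-to-base ratio

print(f"Z' = {z_prime:.2f}, S/B = {s_over_b:.1f}")
```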

  18. GOLD: Integration of model-based control systems with artificial intelligence and workstations

    International Nuclear Information System (INIS)

    Lee, M.; Clearwater, S.

    1987-08-01

    Our experience with model-based accelerator control started at SPEAR. Since that time nearly all accelerator beamlines have been controlled using model-based application programs, for example, PEP and SLC at SLAC. In order to take advantage of state-of-the-art hardware and software technology, the design and implementation of the accelerator control programs have undergone radical changes with time. Consequently, SPEAR, PEP and SLC all use different control programs. Since many of these application programs are embedded deep into the control system, they had to be rewritten each time. Each time this rewriting has occurred a great deal of time and effort has been spent on training physicists and programmers to do the job. Now, we have developed an integrated system called GOLD (Generic Orbit and Lattice Debugger) for debugging and correcting trajectory errors in accelerator lattices. The system consists of a lattice modeling program (COMFORT), a beam simulator (PLUS), a graphical workstation environment (micro-VAX) and an expert system (ABLE). This paper will describe some of the features and applications of our integrated system with emphasis on the automation offered by expert systems. 5 refs

  19. GOLD: Integration of model-based control systems with artificial intelligence and workstations

    International Nuclear Information System (INIS)

    Lee, M.; Clearwater, S.

    1987-08-01

    Our experience with model based accelerator control started at SPEAR. Since that time nearly all accelerator beam lines have been controlled using model-based application programs, for example, PEP and SLC at SLAC. In order to take advantage of state-of-the-art hardware and software technology, the design and implementation of the accelerator control programs have undergone radical change with time. Consequently, SPEAR, PEP, and SLC all use different control programs. Since many of these application programs are imbedded deep into the control system, they had to be rewritten each time. Each time this rewriting has occurred a great deal of time and effort has been spent on training physicists and programmers to do the job. Now, we have developed these application programs for a fourth time. This time, however, the programs we are developing are generic so that we will not have to do it again. We have developed an integrated system called GOLD (Generic Orbit and Lattice Debugger) for debugging and correcting trajectory errors in accelerator lattices. The system consists of a lattice modeling program (COMFORT), a beam simulator (PLUS), a graphical workstation environment (micro-VAX) and an expert system (ABLE). This paper will describe some of the features and applications of our integrated system with emphasis on the automation offered by expert systems. 5 refs

  20. The integrated workstation, a realtime data acquisition, analysis and display system

    International Nuclear Information System (INIS)

    Treadway, T.R. III.

    1991-05-01

    The Integrated Workstation was developed at Lawrence Livermore National Laboratory to consolidate the data from many widely dispersed systems in order to provide an overall indication of the enrichment performance of the Atomic Vapor Laser Isotope Separation experiments. In order to accomplish this task a Hewlett Packard 9000/835 turboSRX was employed to acquire over 150 analog input signals. Following the data acquisition, a spreadsheet-type analysis package and interpreter was used to derive 300 additional values. These values were the results of applying physics models to the raw data. Following the calculations, the results were plotted and archived for post-run analysis and report generation. Both the modeling calculations and the real-time plot configurations can be dynamically reconfigured as needed. The typical sustained data acquisition and display rate of the system was 1 Hz. However, rates exceeding 2.5 Hz have been obtained. This paper will discuss the instrumentation, architecture, implementation, usage, and results of this system in a set of experiments that occurred in 1989. 2 figs

  1. Workstation environment supports for startup of YGN 3 and 4 nuclear unit

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Won Jae; Kim, Won Bong; Lee, Byung Chae [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    The light water reactor fuel development division of the Korea Atomic Energy Research Institute participated in the installation of the plant computer system and software, and in the user support activities of Asea Brown Boveri/Combustion Engineering for the Plant Monitoring System, during the startup phase of the YGN-3 nuclear unit. The main purpose of the participation was to acquire self-reliant plant-computer technology for the independent design and startup of subsequent nuclear units. This report describes the activities performed by KAERI with ABB/CE at the plant site. In addition, it describes the direct transfer of data files between the PMS and a workstation, which was carried out independently by KAERI. Since KAERI must support the site in setting up the plant computer environment independently of ABB-CE from the next nuclear units onward, the technical details of the activities provided to the site were reviewed in order to provide a better computing environment for the next nuclear units. In conclusion, this report is expected to provide the technical background for supporting the plant computing environment and the scope of plant-computer support work at the plant site during the Yonggwang 3 and 4 startup for the next nuclear units. 6 refs. (Author)

  2. Development of a low-cost virtual reality workstation for training and education

    Science.gov (United States)

    Phillips, James A.

    1996-01-01

    Virtual Reality (VR) is a set of breakthrough technologies that allow a human being to enter and fully experience a 3-dimensional, computer simulated environment. A true virtual reality experience meets three criteria: (1) it involves 3-dimensional computer graphics; (2) it includes real-time feedback and response to user actions; and (3) it must provide a sense of immersion. Good examples of a virtual reality simulator are the flight simulators used by all branches of the military to train pilots for combat in high performance jet fighters. The fidelity of such simulators is extremely high -- but so is the price tag, typically millions of dollars. Virtual reality teaching and training methods are manifestly effective, but the high cost of VR technology has limited its practical application to fields with big budgets, such as military combat simulation, commercial pilot training, and certain projects within the space program. However, in the last year there has been a revolution in the cost of VR technology. The speed of inexpensive personal computers has increased dramatically, especially with the introduction of the Pentium processor and the PCI bus for IBM-compatibles, and the cost of high-quality virtual reality peripherals has plummeted. The result is that many public schools, colleges, and universities can afford a PC-based workstation capable of running immersive virtual reality applications. My goal this summer was to assemble and evaluate such a system.

  3. Air distribution in office environment with asymmetric workstation layout using chilled beams

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, Hannu; Haeggblom, Henna [Finnish Institute of Occupational Health, Lemminkaeisenkatu 14-18 B, 20520 Turku (Finland); Kosonen, Risto; Ruponen, Mika [Halton Oy, Niittyvillankuja 4, 01510 Vantaa (Finland)

    2010-09-15

    Air flow patterns and mean air speeds were studied under laboratory conditions representing a full scale open-plan office. Three basic conditions were tested: summer, spring/autumn and winter. Chilled beams were used to provide cooling, outdoor air supply and air distribution in the room. The heat sources had a notable influence on the flow pattern in the room, causing large scale circulation and affecting the direction of inlet jets. The maximum air speed in the occupied zone was higher than the recommendations. The mean air speed was also high at the floor level but low at the head level. The air speed was highest in the summer case under high cooling load. Results indicate that, especially with high heat loads, it is difficult to fulfill the targets of the existing standards in practice. Two main sources of draught risk were found: a) downfall of colliding inlet jets causing local maxima of air speed and b) large scale circulation caused by the asymmetric layout of chilled beams and heat sources. The first phenomenon can cause local draught risk when the workstation is located in the downfall area. The flow pattern is not stable and the position of draught risk areas can change in time and also due to changes in room heat sources. The second phenomenon can cause more constant high air speeds at the floor level. CFD-simulation was able to predict the general flow pattern but somewhat overestimated the air speed compared to measurements. (author)

  4. A real-time monitoring/emergency response workstation using a 3-D numerical model initialized with SODAR

    International Nuclear Information System (INIS)

    Lawver, B.S.; Sullivan, T.J.; Baskett, R.L.

    1993-01-01

    Many workstation based emergency response dispersion modeling systems provide simple Gaussian models driven by single meteorological tower inputs to estimate the downwind consequences from accidental spills or stack releases. Complex meteorological or terrain settings demand more sophisticated resolution of the three-dimensional structure of the atmosphere to reliably calculate plume dispersion. Mountain valleys and sea breeze flows are two common examples of such settings. To address these complexities, we have implemented the three-dimensional-diagnostic MATHEW mass-adjusted wind field and ADPIC particle-in-cell dispersion models on a workstation for use in real-time emergency response modeling. Both MATHEW and ADPIC have shown their utility in a variety of complex settings over the last 15 years within the Department of Energy's Atmospheric Release Advisory Capability project

  5. Computer-aided diagnosis workstation and telemedicine network system for chest diagnosis based on multislice CT images

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kakinuma, Ryutaru; Moriyama, Noriyuki

    2009-02-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. Moreover, there is a shortage of doctors in Japan who can interpret these medical images. To overcome these problems, we have provided diagnostic assistance to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using a helical CT scanner for lung cancer mass screening. Functions for observing suspicious shadows in detail are provided in a computer-aided diagnosis workstation together with these screening algorithms. We have also developed a telemedicine network based on a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system and a biometric face authentication system. Biometric face authentication at the telemedicine site supports file encryption and controlled login, so that patients' private information is protected. The screen of the Web medical image conference system can be shared by two or more conference terminals at the same time, and opinions can be exchanged using a camera and a microphone connected to the workstation. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, using the computer-aided diagnosis workstation, and our telemedicine network system can increase diagnostic speed, diagnostic accuracy and

  6. Portfolio: a prototype workstation for development and evaluation of tools for analysis and management of digital portal images

    International Nuclear Information System (INIS)

    Boxwala, Aziz A.; Chaney, Edward L.; Fritsch, Daniel S.; Friedman, Charles P.; Rosenman, Julian G.

    1998-01-01

    Purpose: The purpose of this investigation was to design and implement a prototype physician workstation, called PortFolio, as a platform for developing and evaluating, by means of controlled observer studies, user interfaces and interactive tools for analyzing and managing digital portal images. The first observer study was designed to measure physician acceptance of workstation technology, as an alternative to a view box, for inspection and analysis of portal images for detection of treatment setup errors. Methods and Materials: The observer study was conducted in a controlled experimental setting to evaluate physician acceptance of the prototype workstation technology exemplified by PortFolio. PortFolio incorporates a windows user interface, a compact kit of carefully selected image analysis tools, and an object-oriented data base infrastructure. The kit evaluated in the observer study included tools for contrast enhancement, registration, and multimodal image visualization. Acceptance was measured in the context of performing portal image analysis in a structured protocol designed to simulate clinical practice. The acceptability and usage patterns were measured from semistructured questionnaires and logs of user interactions. Results: Radiation oncologists, the subjects for this study, perceived the tools in PortFolio to be acceptable clinical aids. Concerns were expressed regarding user efficiency, particularly with respect to the image registration tools. Conclusions: The results of our observer study indicate that workstation technology is acceptable to radiation oncologists as an alternative to a view box for clinical detection of setup errors from digital portal images. Improvements in implementation, including more tools and a greater degree of automation in the image analysis tasks, are needed to make PortFolio more clinically practical

  7. Test-retest reliability and concurrent validity of a web-based questionnaire measuring workstation and individual correlates of work postures during computer work

    NARCIS (Netherlands)

    IJmker, S.; Mikkers, J.; Blatter, B.M.; Beek, A.J. van der; Mechelen, W. van; Bongers, P.M.

    2008-01-01

    Introduction: "Ergonomic" questionnaires are widely used in epidemiological field studies to study the association between workstation characteristics, work posture and musculoskeletal disorders among office workers. Findings have been inconsistent regarding the putative adverse effect of work

  8. Increasing physical activity in office workers – the Inphact Treadmill study; a study protocol for a 13-month randomized controlled trial of treadmill workstations

    OpenAIRE

    Bergman, Frida; Boraxbekk, Carl-Johan; Wennberg, Patrik; Sörlin, Ann; Olsson, Tommy

    2015-01-01

    Background Sedentary behaviour is an independent risk factor for mortality and morbidity, especially for type 2 diabetes. Since office work is related to long periods that are largely sedentary, it is of major importance to find ways for office workers to engage in light intensity physical activity (LPA). The Inphact Treadmill study aims to investigate the effects of installing treadmill workstations in offices compared to conventional workstations. Methods/Design A two-arm, 13-month, randomi...

  9. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  10. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
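
    The per-cycle rendez-vous described above can be illustrated with a short sketch: every rank must join a collective reduction before the next cycle can start. mpi4py is assumed and the transport physics is replaced by a dummy tally.

```python
# Sketch of the per-cycle synchronization point described above: each rank
# "simulates" its share of histories, then all ranks must meet in a collective
# to form the global tally and k_eff estimate before the next cycle can begin.
# mpi4py is assumed; the per-rank work is a dummy stand-in, not real transport.
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

histories_per_rank = 10000
for cycle in range(5):
    random.seed(1000 * cycle + rank)
    # Dummy per-rank tally standing in for neutron transport in this cycle.
    local_production = sum(random.uniform(0.9, 1.1) for _ in range(histories_per_rank))
    # Rendez-vous: no rank can proceed until the global sum is available.
    global_production = comm.allreduce(local_production, op=MPI.SUM)
    k_eff = global_production / (histories_per_rank * size)
    if rank == 0:
        print(f"cycle {cycle}: k_eff estimate {k_eff:.4f}")
```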

  11. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
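
    Queue-sort and barrel-sort themselves are not reproduced in the record; the sketch below shows only the generic key-range partitioning idea (each rank owns one contiguous key range, keys are exchanged all-to-all, then sorted locally) on which such coarse-grained distributed sorts are built, assuming mpi4py.

```python
# Generic key-range partitioning sketch (bucket-sort style), not the paper's
# queue-sort or barrel-sort: each rank owns one contiguous key range, keys are
# exchanged all-to-all, and each rank sorts its bucket locally. mpi4py assumed.
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

key_max = 1 << 16
random.seed(rank)
local_keys = [random.randrange(key_max) for _ in range(1000)]

# Route each key to the rank that owns its key range.
outgoing = [[] for _ in range(size)]
for k in local_keys:
    outgoing[k * size // key_max].append(k)

# All-to-all exchange, then a local sort of this rank's bucket.
incoming = comm.alltoall(outgoing)
bucket = sorted(k for chunk in incoming for k in chunk)

print(f"rank {rank}: {len(bucket)} keys, range "
      f"[{bucket[0] if bucket else None}, {bucket[-1] if bucket else None}]")
```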

  12. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
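
    The abstract's block-comparison idea can be sketched as follows: split the node's checkpoint into blocks, checksum each block, and keep only the blocks whose checksums differ from the stored template. Block size, data and checksum choice are illustrative assumptions, not the patented implementation.

```python
# Sketch of template-based checkpoint comparison: checksum fixed-size blocks of
# the node's checkpoint and retain only the blocks that differ from a previously
# stored template (those are what would need to be transmitted and stored).
import hashlib

BLOCK = 4096

def checksums(data: bytes):
    return [hashlib.sha1(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

template = bytes(64 * BLOCK)            # previously produced template checkpoint
current = bytearray(template)
current[5 * BLOCK + 7] = 0xFF           # this node changed part of one block

tmpl_sums = checksums(template)
curr_sums = checksums(bytes(current))

changed = [i for i, (a, b) in enumerate(zip(tmpl_sums, curr_sums)) if a != b]
delta = {i: bytes(current[i * BLOCK:(i + 1) * BLOCK]) for i in changed}
print(f"{len(changed)} of {len(curr_sums)} blocks need to be saved:", changed)
```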

  13. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  14. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to +- 20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests

  15. Parallel Access of Out-Of-Core Dense Extendible Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow J; Rotem, Doron

    2007-07-26

    Datasets used in scientific and engineering applications are often modeled as dense multi-dimensional arrays. For very large datasets, the corresponding array models are typically stored out-of-core as array files. The array elements are mapped onto linear consecutive locations that correspond to the linear ordering of the multi-dimensional indices. Two conventional mappings used are the row-major order and the column-major order of multi-dimensional arrays. Such conventional mappings of dense array files highly limit the performance of applications and the extendibility of the dataset. Firstly, an array file that is organized in, say, row-major order causes applications that subsequently access the data in column-major order to have abysmal performance. Secondly, any subsequent expansion of the array file is limited to only one dimension. Expansions of such out-of-core conventional arrays along arbitrary dimensions require storage reorganization that can be very expensive. We present a solution for storing out-of-core dense extendible arrays that resolves the two limitations. The method uses a mapping function F*(), together with information maintained in axial vectors, to compute the linear address of an extendible array element when passed its k-dimensional index. We also give the inverse function, F-1*(), for deriving the k-dimensional index when given the linear address. We show how the mapping function, in combination with MPI-IO and a parallel file system, allows for the growth of the extendible array without reorganization and no significant performance degradation of applications accessing elements in any desired order. We give methods for reading and writing sub-arrays into and out of parallel applications that run on a cluster of workstations. The axial-vectors are replicated and maintained in each node that accesses sub-array elements.
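
    The paper's F*() mapping and axial vectors are not given in the record; as a point of reference, the conventional fixed-shape row-major mapping it generalizes, together with its inverse, looks like this:

```python
# Baseline k-dimensional index <-> linear address mapping (row-major order) and
# its inverse. This is the conventional mapping the abstract contrasts with; the
# paper's F*() / axial-vector scheme, which also supports extension along any
# dimension without reorganization, is not reproduced here.
def linear_address(index, shape):
    addr = 0
    for i, n in zip(index, shape):
        addr = addr * n + i
    return addr

def k_index(addr, shape):
    index = []
    for n in reversed(shape):
        index.append(addr % n)
        addr //= n
    return tuple(reversed(index))

shape = (4, 5, 6)
for idx in [(0, 0, 0), (1, 2, 3), (3, 4, 5)]:
    a = linear_address(idx, shape)
    assert k_index(a, shape) == idx
    print(idx, "->", a)
```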

  16. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    Full Text Available As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, low self-weight/load ratio, good dynamic behavior and easy control; hence its range of application has been extended. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematics solution and the limits of the link lengths is introduced. This paper analyses the position workspace and orientation workspace of a parallel robot with six degrees of freedom. The results show that changing the lengths of the limbs of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but does change its position.
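
    The boundary-searching algorithm itself is not reproduced here; a brute-force stand-in that conveys the idea (sample candidate platform positions, evaluate the inverse kinematics, and keep positions whose leg lengths stay within limits) is sketched below with made-up geometry.

```python
# Sketch of the workspace-search idea for a six-legged parallel platform: sample
# candidate platform positions, solve the inverse kinematics (leg length =
# |p + b_i - a_i| for a fixed platform orientation), and keep positions whose
# leg lengths all stay within limits. Geometry and limits are made up; the
# paper's boundary-searching algorithm is more refined than this sampling.
import numpy as np

rng = np.random.default_rng(0)
n_legs = 6
angles = np.linspace(0, 2 * np.pi, n_legs, endpoint=False)
base = np.array([[np.cos(a), np.sin(a), 0.0] for a in angles])  # base joints a_i
plat = 0.5 * base                                               # platform joints b_i
L_MIN, L_MAX = 0.8, 1.6                                         # leg length limits

def reachable(p):
    legs = np.linalg.norm(p + plat - base, axis=1)  # inverse kinematics
    return np.all((legs >= L_MIN) & (legs <= L_MAX))

samples = rng.uniform([-1, -1, 0.2], [1, 1, 2.0], size=(20000, 3))
inside = np.array([reachable(p) for p in samples])
print(f"{inside.mean():.1%} of sampled positions lie in the position workspace")
```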

  17. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
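
    For reference, the combination rules the activity illustrates: series resistances add directly, while parallel resistances add as reciprocals.

```python
# Series and parallel combination rules for resistances, with a worked example.
def series(*rs):
    return sum(rs)

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

print(series(100, 100))                 # two 100-ohm resistors in series: 200 ohm
print(parallel(100, 100))               # the same two in parallel: 50 ohm
print(series(50, parallel(100, 200)))   # 50 ohm in series with (100 || 200): ~116.7 ohm
```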

  18. Parallel encoders for pixel detectors

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1991-01-01

    A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example of the construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs.

  19. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  20. Event monitoring of parallel computations

    Directory of Open Access Journals (Sweden)

    Gruzlikov Alexander M.

    2015-06-01

    Full Text Available The paper considers the monitoring of parallel computations for detection of abnormal events. It is assumed that computations are organized according to an event model, and monitoring is based on specific test sequences

  1. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  2. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  3. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  4. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  5. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  6. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    International Nuclear Information System (INIS)

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-01-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic, transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion based preconditioner for scattering dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated
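
    DANTE's solver is not shown in the record; the snippet below is a generic preconditioned conjugate gradient iteration (with a simple Jacobi preconditioner) of the kind the abstract names, applied to a small test system.

```python
# Generic preconditioned conjugate gradient sketch (Jacobi preconditioner) for a
# symmetric positive-definite system A x = b; illustrative of the kind of core
# solver named above, not DANTE's implementation.
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=200):
    M_inv = 1.0 / np.diag(A)          # Jacobi (diagonal) preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test problem: a 1-D Laplacian.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
print("residual:", np.linalg.norm(A @ x - b))
```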

  7. PARALLEL INTEGRATION ALGORITHM AND ITS USAGE FOR A PRACTICAL SIMULATION OF SPACECRAFT ATTITUDE MOTION

    Directory of Open Access Journals (Sweden)

    Ravil’ Kudermetov

    2018-02-01

    Full Text Available Nowadays multi-core processors are installed in almost every modern workstation, but the question of how to utilize these computational resources effectively is still a topical one. In this paper the four-point block one-step integration method is considered, a parallel algorithm for this method is proposed, and a Java implementation of this algorithm is discussed. The effectiveness of the proposed algorithm is demonstrated by simulating spacecraft attitude motion. The results of this work can be used for practical simulation of dynamic systems that are described by ordinary differential equations. The results are also applicable to the development and debugging of computer programs that integrate the dynamic and kinematic equations of the angular motion of a rigid body.

  8. An Adaptive Method For Texture Characterization In Medical Images Implemented on a Parallel Virtual Machine

    Directory of Open Access Journals (Sweden)

    Socrates A. Mylonas

    2003-06-01

    Full Text Available This paper describes the application of a new texture characterization algorithm for the segmentation of medical ultrasound images. The morphology of these images poses significant problems for the application of traditional image processing techniques, and their analysis has been the subject of research for several years. The basis of the algorithm is an optimum signal modelling algorithm (Least Mean Squares-based), which estimates a set of parameters from small image regions. The algorithm has been converted to a structure suitable for implementation on a Parallel Virtual Machine (PVM) consisting of a Network of Workstations (NoW), to improve processing speed. Tests were initially carried out on standard textured images. This paper describes preliminary results of the application of the algorithm in texture discrimination and segmentation of medical ultrasound images. The images examined are primarily used in the diagnosis of carotid plaques, which are linked to the risk of stroke.
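
    To make the LMS-based parameter estimation concrete, the following sketch (not the authors' implementation) fits 2-D linear-prediction coefficients to a small image region; the neighbourhood shape, step size and normalization are illustrative assumptions, and the coefficient vector plays the role of the region's texture feature.

```python
import numpy as np

def lms_region_params(region, order=2, mu=0.05, passes=4):
    """Estimate 2-D linear-prediction coefficients for a small image region with LMS.
    Each pixel is predicted from its 'order' left and upper neighbours; the learned
    coefficient vector serves as the texture descriptor for the region."""
    z = np.asarray(region, dtype=float)
    z = (z - z.mean()) / (z.std() + 1e-12)                 # normalize the patch
    taps = [(0, -k) for k in range(1, order + 1)] + [(-k, 0) for k in range(1, order + 1)]
    w = np.zeros(len(taps))
    for _ in range(passes):
        for i in range(order, z.shape[0]):
            for j in range(order, z.shape[1]):
                x = np.array([z[i + di, j + dj] for di, dj in taps])
                e = z[i, j] - w @ x                        # prediction error
                w += mu * e * x                            # LMS update
    return w

rng = np.random.default_rng(0)
patch = rng.normal(size=(16, 16))                          # stand-in for an ultrasound sub-image
print(lms_region_params(patch))
```

    Because each image region is processed independently, regions can be farmed out to the nodes of a PVM-style network of workstations without any coupling between tasks.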

  9. GPAW - massively parallel electronic structure calculations with Python-based software

    DEFF Research Database (Denmark)

    Enkovaara, Jussi; Romero, Nichols A.; Shende, Sameer

    2011-01-01

    of the productivity enhancing features together with a good numerical performance. We have used this approach in implementing the electronic structure simulation software GPAW using the combination of Python and C programming languages. While the chosen approach works well in standard workstations and Unix...... popular choice. While dynamic, interpreted languages, such as Python, can increase the efficiency of the programmer, they cannot compete directly with the raw performance of compiled languages. However, by using an interpreted language together with a compiled language, it is possible to have most...... environments, massively parallel supercomputing systems can present some challenges in porting, debugging and profiling the software. In this paper we describe some details of the implementation and discuss the advantages and challenges of the combined Python/C approach. We show that despite the challenges...
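
    The combined interpreted/compiled approach can be illustrated with a small, self-contained example that is unrelated to GPAW itself: a C kernel is compiled at run time and called from Python through ctypes, so the inner loop runs at compiled speed while the driver logic stays in Python. The sketch assumes a C compiler is available on the PATH as cc.

```python
import ctypes, os, subprocess, tempfile
import numpy as np

C_SRC = r"""
#include <stddef.h>
/* compiled kernel: y[i] += a * x[i] */
void axpy(double a, const double *x, double *y, size_t n) {
    for (size_t i = 0; i < n; ++i) y[i] += a * x[i];
}
"""

def build_kernel():
    d = tempfile.mkdtemp()
    src, lib = os.path.join(d, "kernel.c"), os.path.join(d, "kernel.so")
    with open(src, "w") as fh:
        fh.write(C_SRC)
    subprocess.check_call(["cc", "-O2", "-shared", "-fPIC", "-o", lib, src])
    return ctypes.CDLL(lib)

lib = build_kernel()
lib.axpy.argtypes = [ctypes.c_double,
                     np.ctypeslib.ndpointer(np.float64),
                     np.ctypeslib.ndpointer(np.float64),
                     ctypes.c_size_t]
lib.axpy.restype = None

x = np.random.rand(1_000_000)
y = np.zeros_like(x)
lib.axpy(2.0, x, y, x.size)        # the heavy loop runs in compiled C
assert np.allclose(y, 2.0 * x)     # the high-level checks stay in Python
```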

  10. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  11. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

    Second derivative parallel block backward differentiation type formulas for Stiff ODEs. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...

  12. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms in terms of both the achieved coding and decoding times and the effectiveness of parallelization.

  13. Parallel fabrication of macroporous scaffolds.

    Science.gov (United States)

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  14. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
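
    The most naturally parallel components of the Davidon-Fletcher-Powell method are the independent function evaluations required for finite-difference gradients (and for line searches). The sketch below is only a generic illustration, not the transputer implementation discussed in the paper: the gradient components are evaluated in a process pool, and the objective is a stand-in for an expensive criterion.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def rosenbrock(x):
    """Stand-in objective; in practice this would be an expensive evaluation."""
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

def partial_derivative(args):
    f, x, i, h = args
    e = np.zeros_like(x); e[i] = h
    return (f(x + e) - f(x - e)) / (2.0 * h)

def parallel_gradient(f, x, pool, h=1e-6):
    # each central-difference component is an independent pair of function evaluations
    return np.array(list(pool.map(partial_derivative, [(f, x, i, h) for i in range(x.size)])))

def dfp(f, x0, iters=200, tol=1e-6):
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                                   # inverse-Hessian approximation
    with ProcessPoolExecutor() as pool:
        g = parallel_gradient(f, x, pool)
        for _ in range(iters):
            p = -H @ g
            t, fx = 1.0, f(x)                            # simple backtracking line search
            for _ in range(40):
                if f(x + t * p) <= fx + 1e-4 * t * (g @ p):
                    break
                t *= 0.5
            s = t * p
            x_new = x + s
            g_new = parallel_gradient(f, x_new, pool)
            y = g_new - g
            if s @ y > 1e-12:                            # DFP update of the inverse Hessian
                Hy = H @ y
                H += np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
            x, g = x_new, g_new
            if np.linalg.norm(g) < tol:
                break
    return x

if __name__ == "__main__":
    print(dfp(rosenbrock, np.zeros(4)))
```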

  15. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  16. Instrument workstation for the EGSE of the Near Infrared Spectro-Photometer instrument (NISP) of the EUCLID mission

    Science.gov (United States)

    Trifoglio, M.; Gianotti, F.; Conforti, V.; Franceschi, E.; Stephen, J. B.; Bulgarelli, A.; Fioretti, V.; Maiorano, E.; Nicastro, L.; Valenziano, L.; Zoli, A.; Auricchio, N.; Balestra, A.; Bonino, D.; Bonoli, C.; Bortoletto, F.; Capobianco, V.; Chiarusi, T.; Corcione, L.; Debei, S.; De Rosa, A.; Dusini, S.; Fornari, F.; Giacomini, F.; Guizzo, G. P.; Ligori, S.; Margiotta, A.; Mauri, N.; Medinaceli, E.; Morgante, G.; Patrizii, L.; Sirignano, C.; Sirri, G.; Sortino, F.; Stanco, L.; Tenti, M.

    2016-07-01

    The NISP instrument on board the Euclid ESA mission will be developed and tested at different levels of integration using various test equipment which shall be designed and procured through a collaborative and coordinated effort. The NISP Instrument Workstation (NI-IWS) will be part of the EGSE configuration that will support the NISP AIV/AIT activities from the NISP Warm Electronics level up to the launch of Euclid. One workstation is required for the NISP EQM/AVM, and a second one for the NISP FM. Each workstation will follow the respective NISP model after delivery to ESA for Payload and Satellite AIV/AIT and launch. At these levels the NI-IWS shall be configured as part of the Payload EGSE, the System EGSE, and the Launch EGSE, respectively. After launch, the NI-IWS will be also re-used in the Euclid Ground Segment in order to support the Commissioning and Performance Verification (CPV) phase, and for troubleshooting purposes during the operational phase. The NI-IWS is mainly aimed at the local storage in a suitable format of the NISP instrument data and metadata, at local retrieval, processing and display of the stored data for on-line instrument assessment, and at the remote retrieval of the stored data for off-line analysis on other computers. We describe the design of the IWS software that will create a suitable interface to the external systems in each of the various configurations envisaged at the different levels, and provide the capabilities required to monitor and verify the instrument functionalities and performance throughout all phases of the NISP lifetime.

  17. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  18. Computer-aided diagnosis workstation and data base system for chest diagnosis based on multihelical CT images

    International Nuclear Information System (INIS)

    Satoh, H.; Niki, N.; Eguchi, K.; Masuda, H.; Machida, S.; Moriyama, N.

    2006-01-01

    We have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images and a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification. We have also developed an electronic medical recording system and a prototype internet system for community health in two or more regions, using a Virtual Private Network router, a biometric fingerprint authentication system and a biometric face authentication system to safeguard medical information. The results of this study indicate that our computer-aided diagnosis workstation and network system can increase diagnostic speed, diagnostic accuracy and safety of medical information. (author)

  19. Automated methods for single-stranded DNA isolation and dideoxynucleotide DNA sequencing reactions on a robotic workstation

    International Nuclear Information System (INIS)

    Mardis, E.R.; Roe, B.A.

    1989-01-01

    Automated procedures have been developed both for the simultaneous isolation of 96 single-stranded M13 chimeric template DNAs in less than two hours and for simultaneously pipetting 24 dideoxynucleotide sequencing reactions on a commercially available laboratory workstation. The DNA sequencing results obtained by either radiolabeled or fluorescent methods are consistent with the premise that automation of these portions of DNA sequencing projects will improve the reproducibility of the DNA isolation, and automating the procedures for these normally labor-intensive steps provides an approach for rapid acquisition of large amounts of high quality, reproducible DNA sequence data

  20. Comparison of personal computer with CT workstation in the evaluation of 3-dimensional CT image of the skull

    International Nuclear Information System (INIS)

    Kang, Bok Hee; Kim, Kee Deog; Park, Chang Seo

    2001-01-01

    To evaluate the usefulness of 3-dimensional images reconstructed on a personal computer in comparison with those from a CT workstation, by quantitative comparison and analysis. The spiral CT data obtained from 27 persons were transferred from the CT workstation to a personal computer, and reconstructed as 3-dimensional images on the personal computer using V-works 2.0™. One observer obtained the 14 measurements on the reconstructed 3-dimensional images on both the CT workstation and the personal computer. A paired t-test was used to evaluate the intraobserver difference and the mean value of each measurement on the CT workstation and the personal computer. Pearson correlation analysis and % incongruence were also performed. I-Gn, N-Gn, N-A, N-Ns, B-A and G-Op did not show any statistically significant difference (p>0.05); B-O, B-N, Eu-Eu, Zy-Zy, Biw, D-D, and Orb R and L showed statistically significant differences (p<0.05), but the mean differences of all measurements were below 2 mm, except for D-D. The correlation coefficient r was greater than 0.95 for I-Gn, N-Gn, N-A, N-Ns, B-A, B-N, G-Op, Eu-Eu, Zy-Zy, and Biw, and was 0.75 for B-O, 0.78 for D-D, and 0.82 for both Orb R and L. The % incongruence was below 4% for I-Gn, N-Gn, N-A, N-Ns, B-A, B-N, G-Op, Eu-Eu, Zy-Zy, and Biw, and 7.18%, 10.78%, 4.97% and 5.89% for B-O, D-D, and Orb R and L, respectively. Reconstruction of the 3-dimensional image on a personal computer can be considered highly useful in terms of economics, accessibility and convenience, except for thin bones and landmarks which are difficult to locate

  1. PAW [Physics Analysis Workstation] at Fermilab: CORE based graphics implementation of HIGZ [High Level Interface to Graphics and Zebra

    International Nuclear Information System (INIS)

    Johnstad, H.

    1989-06-01

    The Physics Analysis Workstation system (PAW) is primarily intended to be the last link in the analysis chain of experimental data. The graphical part of PAW is based on HIGZ (High Level Interface to Graphics and Zebra), which is based on the OSI and ANSI standard Graphics Kernel System (GKS). HIGZ is written in the context of PAW. At Fermilab, the CORE based graphics system DI-3000 by Precision Visuals Inc., is widely used in the analysis of experimental data. The graphical part of the PAW routines has been totally rewritten and implemented in the Fermilab environment. 3 refs

  2. Parallelized implicit propagators for the finite-difference Schrödinger equation

    Science.gov (United States)

    Parker, Jonathan; Taylor, K. T.

    1995-08-01

    We describe the application of block Gauss-Seidel and block Jacobi iterative methods to the design of implicit propagators for finite-difference models of the time-dependent Schrödinger equation. The block-wise iterative methods discussed here are mixed direct-iterative methods for solving simultaneous equations, in the sense that direct methods (e.g. LU decomposition) are used to invert certain block sub-matrices, and iterative methods are used to complete the solution. We describe parallel variants of the basic algorithm that are well suited to the medium- to coarse-grained parallelism of workstation clusters and MIMD supercomputers, and we show that under a wide range of conditions, fine-grained parallelism of the computation can be achieved. Numerical tests are conducted on a typical one-electron atom Hamiltonian. The methods converge robustly to machine precision (15 significant figures), in some cases in as few as 6 or 7 iterations. The rate of convergence is nearly independent of the finite-difference grid-point separations.
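
    A small sketch of the block-wise mixed direct-iterative idea (not the authors' code, and showing only the block Jacobi variant): a Crank-Nicolson step for a 1D free-particle Schrödinger equation is solved by block Jacobi iteration, with the diagonal blocks LU-factorized once; the grid size, time step and block count are arbitrary choices, and each block update is independent, which is where the coarse-grained parallelism comes from.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# 1D free-particle Hamiltonian on a uniform grid (hbar = m = 1)
N, dx, dt = 400, 0.1, 0.005
H = (np.diag(np.full(N, 1.0 / dx**2))
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))
A = np.eye(N) + 0.5j * dt * H            # Crank-Nicolson:  A psi_new = B psi_old
B = np.eye(N) - 0.5j * dt * H

def block_jacobi_solve(A, b, nblocks=8, iters=60):
    """Solve A x = b by block Jacobi: each diagonal block is LU-factorized once
    (the direct part), and the blocks are then updated iteratively and
    independently (the part that maps onto coarse-grained parallelism)."""
    n = len(b)
    bounds = np.linspace(0, n, nblocks + 1, dtype=int)
    blocks = [(lo, hi, lu_factor(A[lo:hi, lo:hi]))
              for lo, hi in zip(bounds[:-1], bounds[1:])]
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x
        x_new = x.copy()
        for lo, hi, lu in blocks:         # independent block updates
            x_new[lo:hi] = x[lo:hi] + lu_solve(lu, r[lo:hi])
        x = x_new
    return x

xgrid = dx * (np.arange(N) - N / 2)
psi0 = np.exp(-xgrid**2 + 2.0j * xgrid)   # Gaussian wave packet
psi1 = block_jacobi_solve(A, B @ psi0)    # one implicit time step
print("residual:", np.linalg.norm(A @ psi1 - B @ psi0))
```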

  3. Parallel Evolutionary Optimization of Multibody Systems with Application to Railway Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Eberhard, Peter [University of Erlangen-Nuremberg, Institute of Applied Mechanics (Germany)], E-mail: eberhard@ltm.uni-erlangen.de; Dignath, Florian [University of Stuttgart, Institute B of Mechanics (Germany)], E-mail: fd@mechb.uni-stuttgart.de; Kuebler, Lars [University of Erlangen-Nuremberg, Institute of Applied Mechanics (Germany)], E-mail: kuebler@ltm.uni-erlangen.de

    2003-03-15

    The optimization of multibody systems usually requires many costly criteria computations since the equations of motion must be evaluated by numerical time integration for each considered design. For actively controlled or flexible multibody systems additional difficulties arise as the criteria may contain non-differentiable points or many local minima. Therefore, in this paper a stochastic evolution strategy is used in combination with parallel computing in order to reduce the computation times whilst keeping the inherent robustness. For the parallelization a master-slave approach is used in a heterogeneous workstation/PC cluster. The pool-of-tasks concept is applied in order to deal with the frequently changing workloads of different machines in the cluster. In order to analyze the performance of the parallel optimization method, the suspension of an ICE passenger coach, modeled as an elastic multibody system, is optimized simultaneously with regard to several criteria including vibration damping and a criterion related to safety against derailment. The iterative and interactive nature of a typical optimization process for technical systems is emphasized.
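
    A minimal master-slave, pool-of-tasks sketch in the same spirit (not the authors' software): offspring designs of a simple (mu, lambda) evolution strategy are evaluated by a pool of worker processes, and the criterion function is a placeholder standing in for an expensive multibody simulation run.

```python
import numpy as np
from multiprocessing import Pool

def criterion(design):
    """Placeholder criterion; in the application this would be a full time
    integration of the multibody model for the given design parameters."""
    x = np.asarray(design)
    return float(np.sum(x ** 2) + 0.1 * np.sum(np.sin(5.0 * x) ** 2))

def evolution_strategy(x0, sigma=0.3, mu=5, lam=20, generations=40, workers=4):
    rng = np.random.default_rng(0)
    dim = len(x0)
    parents = [np.asarray(x0, float) + sigma * rng.standard_normal(dim) for _ in range(mu)]
    with Pool(workers) as pool:                       # master process plus a pool of slaves
        for _ in range(generations):
            offspring = [parents[rng.integers(mu)] + sigma * rng.standard_normal(dim)
                         for _ in range(lam)]
            fitness = pool.map(criterion, offspring)  # pool-of-tasks: evaluations farmed out
            order = np.argsort(fitness)
            parents = [offspring[i] for i in order[:mu]]
    return parents[0]

if __name__ == "__main__":
    best = evolution_strategy(np.ones(6))
    print(best, criterion(best))
```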

  4. GROMACS 4.5: A high-throughput and highly parallel open source molecular simulation toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Pronk, Sander [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Pall, Szilard [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Schulz, Roland [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Larsson, Per [Univ. of Virginia, Charlottesville, VA (United States); Bjelkmar, Par [Science for Life Lab., Stockholm (Sweden); Stockholm Univ., Stockholm (Sweden); Apostolov, Rossen [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Shirts, Michael R. [Univ. of Virginia, Charlottesville, VA (United States); Smith, Jeremy C. [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kasson, Peter M. [Univ. of Virginia, Charlottesville, VA (United States); van der Spoel, David [Science for Life Lab., Stockholm (Sweden); Uppsala Univ., Uppsala (Sweden); Hess, Berk [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Lindahl, Erik [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Stockholm Univ., Stockholm (Sweden)

    2013-02-13

    Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on massive scale in clusters, web servers, distributed computing or cloud resources. Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built-in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations.

  5. Parallel Evolutionary Optimization of Multibody Systems with Application to Railway Dynamics

    International Nuclear Information System (INIS)

    Eberhard, Peter; Dignath, Florian; Kuebler, Lars

    2003-01-01

    The optimization of multibody systems usually requires many costly criteria computations since the equations of motion must be evaluated by numerical time integration for each considered design. For actively controlled or flexible multibody systems additional difficulties arise as the criteria may contain non-differentiable points or many local minima. Therefore, in this paper a stochastic evolution strategy is used in combination with parallel computing in order to reduce the computation times whilst keeping the inherent robustness. For the parallelization a master-slave approach is used in a heterogeneous workstation/PC cluster. The pool-of-tasks concept is applied in order to deal with the frequently changing workloads of different machines in the cluster. In order to analyze the performance of the parallel optimization method, the suspension of an ICE passenger coach, modeled as an elastic multibody system, is optimized simultaneously with regard to several criteria including vibration damping and a criterion related to safety against derailment. The iterative and interactive nature of a typical optimization process for technical systems is emphasized

  6. A Combined MPI-CUDA Parallel Solution of Linear and Nonlinear Poisson-Boltzmann Equation

    Directory of Open Access Journals (Sweden)

    José Colmenares

    2014-01-01

    Full Text Available The Poisson-Boltzmann equation models the electrostatic potential generated by fixed charges on a polarizable solute immersed in an ionic solution. This approach is often used in computational structural biology to estimate the electrostatic energetic component of the assembly of molecular biological systems. In the last decades, the amount of data concerning proteins and other biological macromolecules has remarkably increased. To fruitfully exploit these data, a huge computational power is needed as well as software tools capable of exploiting it. It is therefore necessary to move towards high performance computing and to develop proper parallel implementations of already existing and of novel algorithms. Nowadays, workstations can provide an amazing computational power: up to 10 TFLOPS on a single machine equipped with multiple CPUs and accelerators such as Intel Xeon Phi or GPU devices. The actual obstacle to the full exploitation of modern heterogeneous resources is efficient parallel coding and porting of software on such architectures. In this paper, we propose the implementation of a full Poisson-Boltzmann solver based on a finite-difference scheme using different and combined parallel schemes and in particular a mixed MPI-CUDA implementation. Results show great speedups when using the two schemes, achieving an 18.9x speedup using three GPUs.
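
    To make the domain-decomposition part of such a mixed parallel scheme concrete, the sketch below solves a 1D linearized Poisson-Boltzmann model problem with Jacobi iterations and MPI halo exchange via mpi4py. It is a toy illustration rather than the paper's MPI-CUDA solver, it assumes mpi4py is installed and that the number of ranks divides the grid size, and it is run with mpiexec.

```python
# run with:  mpiexec -n 4 python pb_jacobi_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# 1D linearized Poisson-Boltzmann model problem:  -u'' + kappa^2 u = rho,  u(0) = u(1) = 0
N, kappa = 400, 2.0
h = 1.0 / (N + 1)
n_local = N // size                     # assume size divides N for simplicity
rho = np.ones(n_local)                  # placeholder source term

u = np.zeros(n_local + 2)               # local strip plus two halo cells
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for it in range(2000):                  # Jacobi iterations
    # halo exchange with the neighbouring sub-domains
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[:1], source=left)
    u_new = u.copy()
    u_new[1:-1] = (u[:-2] + u[2:] + h * h * rho) / (2.0 + h * h * kappa ** 2)
    u = u_new

if rank == 0:
    print("local solution sample:", u[1:4])
```

    In a combined MPI-CUDA code the same decomposition applies; the per-subdomain update loop would simply be executed by a GPU kernel instead of NumPy.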

  7. Decomposition and parallelization strategies for solving large-scale MDO problems

    Energy Technology Data Exchange (ETDEWEB)

    Grauer, M.; Eschenauer, H.A. [Research Center for Multidisciplinary Analyses and Applied Structural Optimization, FOMAAS, Univ. of Siegen (Germany)

    2007-07-01

    During previous years, structural optimization has been recognized as a useful tool within the disciplines of engineering and economics. However, the optimization of large-scale systems or structures is impeded by an immense solution effort. This was the reason to start a joint research and development (R and D) project between the Institute of Mechanics and Control Engineering and the Information and Decision Sciences Institute within the Research Center for Multidisciplinary Analyses and Applied Structural Optimization (FOMAAS) on cluster computing for parallel and distributed solution of multidisciplinary optimization (MDO) problems based on the OpTiX-Workbench. Here the focus of attention will be put on coarse-grained parallelization and its implementation on clusters of workstations. A further point of emphasis was laid on the development of a parallel decomposition strategy called PARDEC, for the solution of very complex optimization problems which cannot be solved efficiently by sequential integrated optimization. The use of the OpTiX-Workbench together with the FEM ground water simulation system FEFLOW is shown for a special water management problem. (orig.)

  8. GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit.

    Science.gov (United States)

    Pronk, Sander; Páll, Szilárd; Schulz, Roland; Larsson, Per; Bjelkmar, Pär; Apostolov, Rossen; Shirts, Michael R; Smith, Jeremy C; Kasson, Peter M; van der Spoel, David; Hess, Berk; Lindahl, Erik

    2013-04-01

    Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on massive scale in clusters, web servers, distributed computing or cloud resources. Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built-in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. GROMACS is an open source and free software available from http://www.gromacs.org. Supplementary data are available at Bioinformatics online.

  9. A parallel 3D particle-in-cell code with dynamic load balancing

    International Nuclear Information System (INIS)

    Wolfheimer, Felix; Gjonaj, Erion; Weiland, Thomas

    2006-01-01

    A parallel 3D electrostatic Particle-In-Cell (PIC) code including an algorithm for modelling Space Charge Limited (SCL) emission [E. Gjonaj, T. Weiland, 3D-modeling of space-charge-limited electron emission. A charge conserving algorithm, Proceedings of the 11th Biennial IEEE Conference on Electromagnetic Field Computation, 2004] is presented. A domain decomposition technique based on orthogonal recursive bisection is used to parallelize the computation on a distributed memory environment of clustered workstations. For problems with a highly nonuniform and time dependent distribution of particles, e.g., bunch dynamics, a dynamic load balancing between the processes is needed to preserve the parallel performance. The algorithm for the detection of a load imbalance and the redistribution of the tasks among the processes is based on a weight function criterion, where the weight of a cell measures the computational load associated with it. The algorithm is studied with two examples. In the first example, multiple electron bunches as occurring in the S-DALINAC [A. Richter, Operational experience at the S-DALINAC, Proceedings of the Fifth European Particle Accelerator Conference, 1996] accelerator are simulated in the absence of space charge fields. In the second example, the SCL emission and electron trajectories in an electron gun are simulated
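
    The orthogonal-recursive-bisection decomposition mentioned above can be sketched in a few lines (a generic illustration, not the code described in the paper): particle positions are split recursively at the appropriate quantile of the longest axis so that each domain receives a roughly equal share of the particle load, which is exactly what a rebalancing step needs for highly nonuniform bunch distributions.

```python
import numpy as np

def orthogonal_recursive_bisection(points, n_domains):
    """Recursively split particle positions along the longest axis so that the
    requested number of domains ends up with roughly equal particle counts."""
    def split(idx, k):
        if k == 1:
            return [idx]
        pts = points[idx]
        axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))   # longest spatial extent
        order = idx[np.argsort(pts[:, axis])]
        k_left = k // 2
        cut = int(round(len(order) * k_left / k))              # load-proportional cut
        return split(order[:cut], k_left) + split(order[cut:], k - k_left)
    return split(np.arange(len(points)), n_domains)

rng = np.random.default_rng(1)
particles = rng.normal(size=(10_000, 3)) * [1.0, 0.2, 0.2]     # elongated, bunch-like cloud
domains = orthogonal_recursive_bisection(particles, 8)
print([len(d) for d in domains])                               # balanced particle counts
```

    A weight-function criterion as described in the abstract amounts to replacing the raw particle count with per-cell computational weights when choosing the cut positions.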

  10. A parallel 3D particle-in-cell code with dynamic load balancing

    Energy Technology Data Exchange (ETDEWEB)

    Wolfheimer, Felix [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany)]. E-mail: wolfheimer@temf.de; Gjonaj, Erion [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany); Weiland, Thomas [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany)

    2006-03-01

    A parallel 3D electrostatic Particle-In-Cell (PIC) code including an algorithm for modelling Space Charge Limited (SCL) emission [E. Gjonaj, T. Weiland, 3D-modeling of space-charge-limited electron emission. A charge conserving algorithm, Proceedings of the 11th Biennial IEEE Conference on Electromagnetic Field Computation, 2004] is presented. A domain decomposition technique based on orthogonal recursive bisection is used to parallelize the computation on a distributed memory environment of clustered workstations. For problems with a highly nonuniform and time dependent distribution of particles, e.g., bunch dynamics, a dynamic load balancing between the processes is needed to preserve the parallel performance. The algorithm for the detection of a load imbalance and the redistribution of the tasks among the processes is based on a weight function criterion, where the weight of a cell measures the computational load associated with it. The algorithm is studied with two examples. In the first example, multiple electron bunches as occurring in the S-DALINAC [A. Richter, Operational experience at the S-DALINAC, Proceedings of the Fifth European Particle Accelerator Conference, 1996] accelerator are simulated in the absence of space charge fields. In the second example, the SCL emission and electron trajectories in an electron gun are simulated.

  11. PFLOTRAN User Manual: A Massively Parallel Reactive Flow and Transport Model for Describing Surface and Subsurface Processes

    Energy Technology Data Exchange (ETDEWEB)

    Lichtner, Peter C. [OFM Research, Redmond, WA (United States); Hammond, Glenn E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lu, Chuan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Karra, Satish [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bisht, Gautam [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Andre, Benjamin [National Center for Atmospheric Research, Boulder, CO (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mills, Richard [Intel Corporation, Portland, OR (United States); Univ. of Tennessee, Knoxville, TN (United States); Kumar, Jitendra [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-20

    PFLOTRAN solves a system of generally nonlinear partial differential equations describing multi-phase, multicomponent and multiscale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Parallelization is achieved through domain decomposition using the PETSc (Portable Extensible Toolkit for Scientific Computation) libraries for the parallelization framework (Balay et al., 1997). PFLOTRAN has been developed from the ground up for parallel scalability and has been run on up to 2^18 processor cores with problem sizes up to 2 billion degrees of freedom. Written in object oriented Fortran 90, the code requires the latest compilers compatible with Fortran 2003. At the time of this writing this requires gcc 4.7.x, Intel 12.1.x and PGI compilers. As a requirement of running problems with a large number of degrees of freedom, PFLOTRAN allows reading input data that is too large to fit into memory allotted to a single processor core. The current limitation to the problem size PFLOTRAN can handle is the limitation of the HDF5 file format used for parallel IO to 32-bit integers. Noting that 2^32 = 4,294,967,296, this gives an estimate of the maximum problem size that can currently be run with PFLOTRAN. Hopefully this limitation will be remedied in the near future.

  12. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
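
    A minimal sketch of one such mapping problem (not reproducing the paper's improved algorithms): a straightforward dynamic program that assigns m pipeline modules contiguously to n processors so as to minimize the bottleneck load. The module weights in the example are arbitrary, and the complexity of this naive version is O(nm^2).

```python
import numpy as np

def map_chain(weights, n_procs):
    """Optimal contiguous assignment of m pipeline modules to n processors,
    minimizing the bottleneck (maximum per-processor load)."""
    w = np.asarray(weights, dtype=float)
    m = len(w)
    prefix = np.concatenate(([0.0], np.cumsum(w)))
    load = lambda j, i: prefix[i] - prefix[j]          # total weight of modules j..i-1
    INF = float("inf")
    cost = np.full((m + 1, n_procs + 1), INF)
    cut = np.zeros((m + 1, n_procs + 1), dtype=int)
    cost[0, 0] = 0.0
    for k in range(1, n_procs + 1):
        for i in range(1, m + 1):
            for j in range(k - 1, i):                  # last processor takes modules j..i-1
                c = max(cost[j, k - 1], load(j, i))
                if c < cost[i, k]:
                    cost[i, k], cut[i, k] = c, j
    bounds, i = [], m                                  # recover the partition boundaries
    for k in range(n_procs, 0, -1):
        bounds.append((cut[i, k], i))
        i = cut[i, k]
    return cost[m, n_procs], bounds[::-1]

print(map_chain([4, 2, 7, 1, 3, 6, 2, 5], 3))
```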

  13. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  14. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed and the report shows how these classes can be used...
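
    A small sketch of the underlying kernel (not the C++ classes of the report): a CSR sparse matrix-vector product in which row blocks are computed independently, here distributed over a thread pool; the example matrix is arbitrary.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def csr_matvec(indptr, indices, data, x, row_range):
    """y[lo:hi] = A[lo:hi, :] @ x for a CSR matrix; row blocks are independent."""
    lo, hi = row_range
    y = np.empty(hi - lo)
    for r in range(lo, hi):
        s, e = indptr[r], indptr[r + 1]
        y[r - lo] = np.dot(data[s:e], x[indices[s:e]])
    return y

# small CSR example: [[4, 0, 1], [0, 2, 0], [3, 0, 5]]
indptr = np.array([0, 2, 3, 5])
indices = np.array([0, 2, 1, 0, 2])
data = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
x = np.array([1.0, 2.0, 3.0])

n_rows, n_chunks = len(indptr) - 1, 2
bounds = np.linspace(0, n_rows, n_chunks + 1, dtype=int)
with ThreadPoolExecutor(n_chunks) as pool:
    parts = pool.map(lambda rr: csr_matvec(indptr, indices, data, x, rr),
                     zip(bounds[:-1], bounds[1:]))
y = np.concatenate(list(parts))
print(y)    # -> [ 7.  4. 18.]
```

    In the MPI and hybrid MPI-OpenMP settings of the report the same row-block split applies; the blocks simply become MPI ranks or OpenMP threads instead of Python threads.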

  15. [Falsified medicines in parallel trade].

    Science.gov (United States)

    Muckenfuß, Heide

    2017-11-01

    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e. g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  16. The parallel adult education system

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne

    2015-01-01

    for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...

  17. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  18. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x(2) - y(2) shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.

  19. Default Parallels Plesk Panel Page

    Science.gov (United States)

    Default placeholder page of a Parallels Plesk Panel web server, displayed because no Web site has been configured at this address; Parallels provides hosting and cloud-computing automation software for service providers.

  20. Parallel plate transmission line transformer

    NARCIS (Netherlands)

    Voeten, S.J.; Brussaard, G.J.H.; Pemen, A.J.M.

    2011-01-01

    A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the