WorldWideScience

Sample records for beowulf parallel workstation

  1. Implementing parallel elliptic solver on a Beowulf cluster

    Directory of Open Access Journals (Sweden)

    Marcin Paprzycki

    1999-12-01

    In a recent paper [zara], a parallel direct solver for the linear systems arising from elliptic partial differential equations was proposed. The aim of this note is to present an initial evaluation of the performance characteristics of this algorithm on a Beowulf-type cluster. In this context the performance of PVM- and MPI-based implementations is compared.

  2. CTEx Beowulf cluster for MCNP performance

    International Nuclear Information System (INIS)

    Gonzaga, Roberto N.; Amorim, Aneuri S. de; Balthar, Mario Cesar V.

    2011-01-01

    This work is an introduction to the CTEx Nuclear Defense Department's Beowulf Cluster. Building a Beowulf Cluster is a complex learning process that depends greatly upon the hardware and software requirements. The feasibility and efficiency of performing MCNP5 calculations with a small, heterogeneous computing cluster built from personal computers (PCs) running Red Hat's Fedora Linux operating system are explored. The performance increases that may be expected with such clusters are estimated for cases that typify general radiation transport calculations. Our results show that the speed increase from additional slave PCs is nearly linear up to 10 processors. The precompiled parallel binary version of MCNP5 uses the Message-Passing Interface (MPI) protocol, and its use on this small, heterogeneous cluster is the subject of this work. (author)
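
    The near-linear speed-up quoted above follows from the fact that particle histories are independent: they can be split across the MPI ranks and the tallies combined at the end of the run. The abstract gives no implementation detail, so the C/MPI fragment below is only a minimal sketch of that pattern, not MCNP5 source; the history budget, the toy "tally" and the sampling kernel are invented for illustration.

        /* Illustrative sketch only: split independent particle histories across
         * MPI ranks and combine the tallies with MPI_Reduce.  This is not MCNP5
         * code; the "tally" is a toy average of a random transport-like score. */
        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            int rank, size;
            long total_histories = 1000000;               /* assumed particle budget */

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            long my_histories = total_histories / size;   /* static split over ranks */
            srand(12345 + rank);                          /* independent stream per rank */

            double local_tally = 0.0;
            for (long i = 0; i < my_histories; ++i)
                local_tally += (double)rand() / RAND_MAX; /* stand-in for one history */

            double global_tally = 0.0;
            MPI_Reduce(&local_tally, &global_tally, 1, MPI_DOUBLE, MPI_SUM,
                       0, MPI_COMM_WORLD);
            if (rank == 0)
                printf("mean score %f over %ld histories on %d ranks\n",
                       global_tally / (double)(my_histories * size),
                       my_histories * (long)size, size);
            MPI_Finalize();
            return 0;
        }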

  3. Processing large remote sensing image data sets on Beowulf clusters

    Science.gov (United States)

    Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Schmidt, Gail

    2003-01-01

    High-performance computing is often concerned with the speed at which floating-point calculations can be performed. The architectures of many parallel computers and/or their network topologies are based on these investigations. Often, benchmarks resulting from these investigations are compiled with little regard to how a large dataset would move about in these systems. This part of the Beowulf study addresses that concern by looking at specific applications software and system-level modifications. Applications include an implementation of a smoothing filter for time-series data, a parallel implementation of the decision tree algorithm used in the Landcover Characterization project, a parallel Kriging algorithm used to fit point data collected in the field on invasive species to a regular grid, and modifications to the Beowulf project's resampling algorithm to handle larger, higher resolution datasets at a national scale. Systems-level investigations include a feasibility study on Flat Neighborhood Networks and modifications of that concept with Parallel File Systems.

  4. Implementation of a cluster Beowulf

    International Nuclear Information System (INIS)

    Victorino Guzman, Jorge Enrique

    2001-01-01

    Climate models are among the simulation systems that place the greatest stress on computational resources and performance, and their high implementation cost makes them difficult to acquire. An alternative that offers good performance at a reasonable cost is the construction of a Beowulf cluster, which emulates the behaviour of a computer with several processors. In the present article we discuss the hardware requirements for building the Beowulf cluster, the software resources for the implementation of the CCM3.6 model, and the performance of the Beowulf cluster of the Meteorology Research Group at the National University of Colombia with different numbers of processors.

  5. The Beowulf manuscript reconsidered: Reading Beowulf in late Anglo-Saxon England

    Directory of Open Access Journals (Sweden)

    L. Viljoen

    2003-08-01

    This article defines a hypothetical late Anglo-Saxon audience: a multi-layered Christian community with competing ideologies, dialects and mythologies. It discusses how that audience might have received the Anglo-Saxon poem Beowulf. The immediate textual context of the poem constitutes an intertextual microcosm for Beowulf. The five texts in the codex provide interesting clues to the common concerns, conflicts and interests of its audience. The organizing principle for the grouping of this disparate mixture of Christian and secular texts with Beowulf was not a sense of canonicity or the collating of monuments with an aesthetic autonomy from cultural conditions or social production. They were part of the so-called “popular culture” and provide one key to the “meanings” that interested the late Anglo-Saxon audience, who would delight in the poet’s alliteration, rhythms, word-play, irony and understatement, descriptions, aphorisms and evocation of loss and transience. The poem provided cultural, historical and spiritual data and evoked a debate about pertinent moral issues. The monsters, for instance, are symbolic of problems of identity construction and establish a polarity between “us” and the “Other”, but at the same time question such binary thinking. Finally, the poem works towards an audience identity whose values emerge from the struggle within the poem and therefore also encompass the monstrous, the potentially disruptive, the darkness within - that which the poem attempts to repress.

  6. DSN Beowulf Cluster-Based VLBI Correlator

    Science.gov (United States)

    Rogstad, Stephen P.; Jongeling, Andre P.; Finley, Susan G.; White, Leslie A.; Lanyi, Gabor E.; Clark, John E.; Goodhart, Charles E.

    2009-01-01

    The NASA Deep Space Network (DSN) requires a broadband VLBI (very long baseline interferometry) correlator to process data routinely taken as part of the VLBI source Catalogue Maintenance and Enhancement task (CAT M&E) and the Time and Earth Motion Precision Observations task (TEMPO). The data provided by these measurements are a crucial ingredient in the formation of precision deep-space navigation models. In addition, a VLBI correlator is needed to provide support for other VLBI-related activities for both internal and external customers. The JPL VLBI Correlator (JVC) was designed, developed, and delivered to the DSN as a successor to the legacy Block II Correlator. The JVC is a full-capability VLBI correlator that uses software processes running on multiple computers to cross-correlate two-antenna broadband noise data. Components of this new system (see Figure 1) consist of Linux PCs integrated into a Beowulf cluster, an existing Mark5 data storage system, a RAID array, an existing software correlator package (SoftC) originally developed for Delta DOR Navigation processing, and various custom-developed software processes and scripts. Parallel processing on the JVC is achieved by assigning slave nodes of the Beowulf cluster to process separate scans in parallel until all scans have been processed. Due to the single-stream sequential playback of the Mark5 data, some ramp-up time is required before all nodes can have access to the required scan data. Core functions of each processing step are accomplished using optimized C programs. The coordination and execution of these programs across the cluster is accomplished using Perl scripts, PostgreSQL commands, and a handful of miscellaneous system utilities. Mark5 data modules are loaded on the Mark5 data system playback units, one per station. Data processing is started when the operator scans the Mark5 systems and runs a script that reads various configuration files and then creates an experiment-dependent status database.
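
    The scan-level parallelism described above, with slave nodes picking up separate scans until every scan has been processed, is a classic master/worker dispatch pattern. Purely as a hedged illustration (the JVC itself coordinates SoftC runs with Perl scripts and PostgreSQL, not with the code below), a minimal MPI version of such a dispatch loop might look as follows; the scan count and the do_scan() placeholder are invented.

        /* Minimal master/worker dispatch sketch (illustration only, not JVC code):
         * rank 0 hands out scan indices; workers "process" a scan and report back. */
        #include <mpi.h>
        #include <unistd.h>

        #define NSCANS   12          /* assumed number of scans in the experiment */
        #define TAG_WORK 1
        #define TAG_DONE 2

        static int do_scan(int scan)                /* placeholder for one scan's correlation */
        {
            usleep(1000 * (scan % 5 + 1));          /* pretend work */
            return 0;
        }

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            if (rank == 0) {                        /* master */
                int next = 0, active = 0, result;
                MPI_Status st;
                for (int w = 1; w < size && next < NSCANS; ++w, ++next, ++active)
                    MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
                while (active > 0) {
                    MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, TAG_DONE,
                             MPI_COMM_WORLD, &st);
                    --active;
                    if (next < NSCANS) {            /* keep that worker busy */
                        MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                                 MPI_COMM_WORLD);
                        ++next;
                        ++active;
                    }
                }
                int stop = -1;                      /* shut the workers down */
                for (int w = 1; w < size; ++w)
                    MPI_Send(&stop, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
            } else {                                /* worker */
                for (;;) {
                    int scan, status;
                    MPI_Recv(&scan, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                    if (scan < 0)
                        break;
                    status = do_scan(scan);
                    MPI_Send(&status, 1, MPI_INT, 0, TAG_DONE, MPI_COMM_WORLD);
                }
            }
            MPI_Finalize();
            return 0;
        }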

  7. Efficient heterogeneous execution of Monte Carlo shielding calculations on a Beowulf cluster

    International Nuclear Information System (INIS)

    Dewar, D.; Hulse, P.; Cooper, A.; Smith, N.

    2005-01-01

    Recent work has been done in using a high-performance 'Beowulf' cluster computer system for the efficient distribution of Monte Carlo shielding calculations. This has enabled the rapid solution of complex shielding problems at low cost and with greater modularity and scalability than traditional platforms. The work has shown that a simple approach to distributing the workload is as efficient as using more traditional techniques such as PVM (Parallel Virtual Machine). In addition, when used in an operational setting this technique makes fairer use of resources than traditional methods, in that it does not tie up a single computing resource but instead shares the capacity with other tasks. These developments in computing technology have enabled shielding problems to be solved that would have taken an unacceptably long time to run on traditional platforms. This paper discusses the BNFL Beowulf cluster and a number of tests that have recently been run to demonstrate the efficiency of the asynchronous technique in running the MCBEND program. The BNFL Beowulf currently consists of 84 standard PCs running RedHat Linux. The current performance of the machine has been estimated to be between 40 and 100 Gflop/s. When the whole system is employed on one problem, up to four million particles can be tracked per second. There are plans to review its size in line with future business needs. (authors)
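
    The "simple approach" mentioned above amounts to farming out independent Monte Carlo runs, each with its own random-number seed and a share of the particle budget, and merging the answers after the fact; no message passing is needed while the jobs run. The fragment below sketches only the merging arithmetic (particle-weighted means, with standard errors combined in quadrature); it is an illustration of the idea rather than BNFL's MCBEND tooling, and the per-run numbers are invented.

        /* Sketch of merging N independent Monte Carlo runs after the fact
         * (illustration only).  Each run reports how many particles it tracked,
         * its mean score and the standard error of that mean; the combined mean
         * is the particle-weighted average and the errors combine in quadrature. */
        #include <stdio.h>
        #include <math.h>

        struct run { long particles; double mean; double std_err; };

        int main(void)
        {
            /* invented per-node results, e.g. parsed from each node's output file */
            struct run runs[] = {
                { 2500000, 1.042e-6, 0.012e-6 },
                { 2500000, 1.037e-6, 0.011e-6 },
                { 2500000, 1.051e-6, 0.013e-6 },
                { 2500000, 1.045e-6, 0.012e-6 },
            };
            int n = sizeof runs / sizeof runs[0];

            long total = 0;
            double mean = 0.0, var = 0.0;
            for (int i = 0; i < n; ++i)
                total += runs[i].particles;
            for (int i = 0; i < n; ++i) {
                double w = (double)runs[i].particles / (double)total;
                mean += w * runs[i].mean;
                var  += w * w * runs[i].std_err * runs[i].std_err;
            }
            printf("combined: %ld particles, mean %.4e +/- %.1e\n",
                   total, mean, sqrt(var));
            return 0;
        }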

  8. Parallel Computation of Unsteady Flows on a Network of Workstations

    Science.gov (United States)

    1997-01-01

    Parallel computation of unsteady flows requires significant computational resources. The utilization of a network of workstations seems an efficient solution, allowing large problems to be treated at a reasonable cost. This approach requires the solution of several problems: 1) the partitioning and distribution of the problem over a network of workstations, 2) efficient communication tools, 3) managing the system efficiently for a given problem. There is also the question of how efficient any given numerical algorithm is on such a computing system. The NPARC code was chosen as a sample application. For the explicit version of the NPARC code both two- and three-dimensional problems were studied, and both steady and unsteady problems were investigated. The issues studied as part of the research program were: 1) how to distribute the data between the workstations, 2) how to compute and how to communicate at each node efficiently, 3) how to balance the load distribution. In the following, a summary of these activities is presented. Details of the work have been presented and published as referenced.

  9. Efficient Parallel Engineering Computing on Linux Workstations

    Science.gov (United States)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).
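
    The abstract does not reproduce the module's interface, so the following is only a rough POSIX-threads sketch of the kind of call it describes: a helper that farms a loop body out to dynamically created lightweight processes and joins them before returning. The names parallel_for and worker, and the example loop body, are invented and are not the NASA module's actual API.

        /* Minimal sketch of a "lightweight process" parallel-loop helper using
         * POSIX threads (illustration only; not the module's actual interface). */
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        typedef void (*loop_body)(long i, void *ctx);

        struct chunk { long lo, hi; loop_body body; void *ctx; };

        static void *worker(void *arg)
        {
            struct chunk *c = arg;
            for (long i = c->lo; i < c->hi; ++i)
                c->body(i, c->ctx);
            return NULL;
        }

        /* Run body(i, ctx) for i in [0, n) using nthreads threads. */
        static void parallel_for(long n, int nthreads, loop_body body, void *ctx)
        {
            pthread_t tid[64];
            struct chunk ck[64];
            if (nthreads > 64)
                nthreads = 64;
            for (int t = 0; t < nthreads; ++t) {
                ck[t].lo = n * t / nthreads;
                ck[t].hi = n * (t + 1) / nthreads;
                ck[t].body = body;
                ck[t].ctx = ctx;
                pthread_create(&tid[t], NULL, worker, &ck[t]);
            }
            for (int t = 0; t < nthreads; ++t)
                pthread_join(tid[t], NULL);
        }

        static void square(long i, void *ctx)       /* example loop body */
        {
            double *out = ctx;
            out[i] = (double)i * (double)i;
        }

        int main(void)
        {
            enum { N = 1000000 };
            double *out = malloc(N * sizeof *out);
            parallel_for(N, 4, square, out);
            printf("out[%d] = %.0f\n", N - 1, out[N - 1]);
            free(out);
            return 0;
        }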

  10. Climate Ocean Modeling on a Beowulf Class System

    Science.gov (United States)

    Cheng, B. N.; Chao, Y.; Wang, P.; Bondarenko, M.

    2000-01-01

    With the growing power and shrinking cost of personal computers, the availability of fast Ethernet interconnections, and public domain software packages, it is now possible to combine them to build desktop parallel computers (named Beowulf or PC clusters) at a fraction of what it would cost to buy systems of comparable power from supercomputer companies. This led us to build and assemble our own system, specifically for climate ocean modeling. In this article, we present our experience with such a system, discuss its network performance, and provide some performance comparison data with both the HP SPP2000 and Cray T3E for an ocean model used in present-day oceanographic research.

  11. Accelerated 3D-OSEM image reconstruction using a Beowulf PC cluster for pinhole SPECT

    International Nuclear Information System (INIS)

    Zeniya, Tsutomu; Watabe, Hiroshi; Sohlberg, Antti; Iida, Hidehiro

    2007-01-01

    A conventional pinhole single-photon emission computed tomography (SPECT) system with a single circular orbit has limitations associated with non-uniform spatial resolution or axial blurring. Recently, we demonstrated that three-dimensional (3D) images with uniform spatial resolution and no blurring can be obtained from complete data acquired using two circular orbits, combined with the 3D ordered subsets expectation maximization (OSEM) reconstruction method. However, a long computation time is required to obtain the reconstructed image, because 3D-OSEM is an iterative method and two-orbit acquisition doubles the size of the projection data. To reduce the long reconstruction time, we parallelized the two-orbit pinhole 3D-OSEM reconstruction process by using a Beowulf personal computer (PC) cluster. The Beowulf PC cluster consists of seven PCs connected to Gbit Ethernet switches. The Message Passing Interface protocol was utilized for parallelizing the reconstruction process. The projection data in a subset are distributed to each PC. The partial image forward- and back-projected in each PC is transferred to all PCs. The current image estimate on each PC is updated after summing the partial images. The performance of parallelization on the PC cluster was evaluated using two independent projection data sets acquired by a pinhole SPECT system with two different circular orbits. Parallelization using the PC cluster reduced the reconstruction time as the number of PCs increased. The reconstruction time of 54 min on a single PC was decreased to 10 min when six or seven PCs were used. The speed-up factor was 5.4. The reconstruction image produced by the PC cluster was virtually identical to that produced by the single PC. Parallelization of 3D-OSEM reconstruction for pinhole SPECT using a PC cluster can significantly reduce the computation time, while its implementation is simple and inexpensive. (author)
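
    The parallel step described above, in which each PC forward- and back-projects its share of a subset and the partial images are summed on every node before the image update, maps naturally onto an MPI all-reduce. The skeleton below shows only that communication pattern on a toy one-dimensional "image"; the geometry, the projector and all sizes are invented, and it is not the authors' reconstruction code.

        /* Communication skeleton of one OSEM subset update across MPI ranks
         * (illustration only).  Each rank back-projects its own share of the
         * subset into a partial image; MPI_Allreduce sums the partial images on
         * every rank so that all nodes apply the same multiplicative update. */
        #include <mpi.h>
        #include <stdio.h>
        #include <string.h>

        #define NVOX 64            /* toy image size; real images are 3D volumes */

        int main(int argc, char **argv)
        {
            int rank, size;
            double image[NVOX], partial[NVOX], update[NVOX];

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            for (int v = 0; v < NVOX; ++v)
                image[v] = 1.0;                     /* uniform initial estimate */

            /* Stand-in for forward/back-projection of this rank's projections:
             * each rank contributes some correction to every voxel. */
            memset(partial, 0, sizeof partial);
            for (int v = 0; v < NVOX; ++v)
                partial[v] = (v % (rank + 2)) * 0.01;

            /* Sum the partial back-projections from all ranks onto all ranks. */
            MPI_Allreduce(partial, update, NVOX, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

            for (int v = 0; v < NVOX; ++v)          /* toy multiplicative update */
                image[v] *= 1.0 + update[v] / size;

            if (rank == 0)
                printf("voxel 0 after one subset update: %f\n", image[0]);
            MPI_Finalize();
            return 0;
        }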

  12. A parallel solution to the cutting stock problem for a cluster of workstations

    Energy Technology Data Exchange (ETDEWEB)

    Nicklas, L.D.; Atkins, R.W.; Setia, S.V.; Wang, P.Y. [George Mason Univ., Fairfax, VA (United States)

    1996-12-31

    This paper describes the design and implementation of a solution to the constrained 2-D cutting stock problem on a cluster of workstations. The constrained 2-D cutting stock problem is an irregular problem with a dynamically modified global data set and irregular amounts and patterns of communication. A replicated data structure is used for the parallel solution since the ratio of reads to writes is known to be large. Mutual exclusion and consistency are maintained using a token-based lazy consistency mechanism, and a randomized protocol for dynamically balancing the distributed work queue is employed. Speedups are reported for three benchmark problems executed on a cluster of workstations interconnected by a 10 Mbps Ethernet.

  13. "Beowulf" : Hollywood seikleb pimedas keskajas / Riho Laurisaar

    Index Scriptorium Estoniae

    Laurisaar, Riho

    2007-01-01

    On the Anglo-Saxon epic "Beowulf", on which the American adventure film is based, much of it created with computer graphics (screenwriter Neil Gaiman, director Robert Zemeckis, starring Anthony Hopkins, Angelina Jolie, Ray Winstone).

  14. A Contemporary Voice Revisits the Past: Seamus Heaney’s Beowulf

    Directory of Open Access Journals (Sweden)

    Silvia Geremia

    2007-03-01

    Heaney’s controversial translation of Beowulf shows characteristics that make it look like an original work: in particular, the presence of Hiberno-English words and some unexpected structural features such as the use of italics, notes and running titles. Some of Heaney’s artistic choices have been brought into question by Germanic philologists, who reproached him for his lack of fidelity to the original text. Moreover, the insertion of Hiberno-English words, which produce an effect of estrangement in Standard English speakers, was considered by some critics not only an aesthetic choice but a provocative act, a linguistic and political claim recalling the ancient antagonism between the Irish and the English. Yet, from the point of view of Heaney’s theoretical and cultural background, his innovations in his translation of Beowulf appear consistent with his personal notions of poetry and translation. Therefore, his Beowulf can be considered the result of a necessary interaction between translator and original text and be acclaimed in spite of all the criticism.

  15. Multi-objective optimization algorithms for mixed model assembly line balancing problem with parallel workstations

    Directory of Open Access Journals (Sweden)

    Masoud Rabbani

    2016-12-01

    This paper deals with the mixed model assembly line (MMAL) balancing problem of type-I. In an MMAL, several highly similar products are made on the same assembly line; as a result, it is possible to assemble several types of products simultaneously without any additional setup times. The problem has some particular features, such as parallel workstations and precedence constraints in dynamic periods, in which each period also affects the next period. The research intends to reduce the number of workstations and maximize the workload smoothness between workstations. Dynamic periods are used to determine all variables in different periods to achieve efficient solutions. A non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO) are used to solve the problem. The proposed model is validated with GAMS software for a small-sized problem, and the performance of the foregoing algorithms is compared based on several comparison metrics. NSGA-II outperforms MOPSO with respect to some of the comparison metrics used in this paper, but in other metrics MOPSO is better than NSGA-II. Finally, conclusions and future research directions are provided.

  16. The Roots of Beowulf

    Science.gov (United States)

    Fischer, James R.

    2014-01-01

    The first Beowulf Linux commodity cluster was constructed at NASA's Goddard Space Flight Center in 1994 and its origins are a part of the folklore of high-end computing. In fact, the conditions within Goddard that brought the idea into being were shaped by rich historical roots, strategic pressures brought on by the ramp up of the Federal High-Performance Computing and Communications Program, growth of the open software movement, microprocessor performance trends, and the vision of key technologists. This multifaceted story is told here for the first time from the point of view of NASA project management.

  17. Saving the “Undoomed Man” in Beowulf (572b-573)

    Directory of Open Access Journals (Sweden)

    Anderson Salena Sampson

    2015-01-01

    The maxim Wyrd oft nereð // unfӕgne eorl, / þonne his ellen deah “Fate often spares an undoomed man when his courage avails” (Beowulf 572b-573) has been likened to “Fortune favors the brave,” with little attention to the word unfӕgne, which is often translated “undoomed”. This comparison between proverbs emphasizes personal agency and suggests a contrast between the proverb in 572b-573 and the maxim Gӕð a wyrd swa hio scel “Goes always fate as it must” (Beowulf 455b), which depicts an inexorable wyrd. This paper presents the history of this view and argues that linguistic analysis and further attention to Germanic cognates of (un)fӕge reveal a proverb that harmonizes with 455b. (Un)fӕge and its cognates have meanings related to being brave or cowardly, blessed or accursed, and doomed or undoomed. A similar Old Norse proverb also speaks to the significance of the status of unfӕge men. Furthermore, the prenominal position of unfӕgne is argued to represent a characterizing property of the man. The word unfӕgne is essential to the meaning of this proverb as it indicates not the simple absence of being doomed but the presence of a more complex quality. This interpretive point is significant in that it provides more information about the portrayal of wyrd in Beowulf by clarifying a well-known proverb in the text; it also has implications for future translations of these verses.

  18. ANL statement of site strategy for computing workstations

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R. (ed.); Boxberger, L.M.; Amiot, L.W.; Bretscher, M.E.; Engert, D.E.; Moszur, F.M.; Mueller, C.J.; O'Brien, D.E.; Schlesselman, C.G.; Troyer, L.J.

    1991-11-01

    This Statement of Site Strategy describes the procedure at Argonne National Laboratory for defining, acquiring, using, and evaluating scientific and office workstations and related equipment and software in accord with DOE Order 1360.1A (5-30-85), and Laboratory policy. It is Laboratory policy to promote the installation and use of computing workstations to improve productivity and communications for both programmatic and support personnel, to ensure that computing workstations acquisitions meet the expressed need in a cost-effective manner, and to ensure that acquisitions of computing workstations are in accord with Laboratory and DOE policies. The overall computing site strategy at ANL is to develop a hierarchy of integrated computing system resources to address the current and future computing needs of the laboratory. The major system components of this hierarchical strategy are: Supercomputers, Parallel computers, Centralized general purpose computers, Distributed multipurpose minicomputers, and Computing workstations and office automation support systems. Computing workstations include personal computers, scientific and engineering workstations, computer terminals, microcomputers, word processing and office automation electronic workstations, and associated software and peripheral devices costing less than $25,000 per item.

  19. Incontrare Grendel al cinema. Riscrivere il Beowulf in un altro luogo e in un altro tempo

    Directory of Open Access Journals (Sweden)

    Francesco Giusti

    2011-05-01

    As an epic poem Beowulf is a literary space of encounters, but in comparison to its Classical models, the Iliad and the Odyssey, it does not require the extraneousness of the place where the meeting or the clash happens. The threat is at the door. The Other, the monstrous, is close, very close to the human community and is genetically tied to it. This is a point of particular interest on which twentieth-century attempts to rewrite the Anglo-Saxon poem focus so as to create new possibilities and patterns for the clash, and to investigate the limits of the human. The paper focuses on two recent movies: Beowulf & Grendel (Gunnarsson, Iceland 2005) and Beowulf (Zemeckis, USA 2007). These movies, notwithstanding the differences in technique, genre and intention, clearly show some shared trends in the background, beyond the more general influence of the former on the latter, for they both re-read the old poem as the story of the infraction of a boundary and the cultural encounter between the human community, championed by Beowulf, and that Otherness represented by the monstrous Grendel. If the religious aspect, privileged by the medieval narrator, blurs in the movies, they bring to the surface some inner fears which are latent in the poem and are tied to two dangerous spaces of possible intersection and intermingling: the ethno-anthropological and the psycho-genetic aspects of human life and story. Two fears that belong definitely to contemporary men more than to the medieval world.

  20. UWGSP6: a diagnostic radiology workstation of the future

    Science.gov (United States)

    Milton, Stuart W.; Han, Sang; Choi, Hyung-Sik; Kim, Yongmin

    1993-06-01

    The Univ. of Washington's Image Computing Systems Lab. (ICSL) has been involved in research into the development of a series of PACS workstations since the middle 1980's. The most recent research, a joint UW-IBM project, attempted to create a diagnostic radiology workstation using an IBM RISC System 6000 (RS6000) computer workstation and the X-Window system. While the results are encouraging, there are inherent limitations in the workstation hardware which prevent it from providing an acceptable level of functionality for diagnostic radiology. Realizing the RS6000 workstation's limitations, a parallel effort was initiated to design a workstation, UWGSP6 (Univ. of Washington Graphics System Processor #6), that provides the required functionality. This paper documents the design of UWGSP6, which not only addresses the requirements for a diagnostic radiology workstation in terms of display resolution, response time, etc., but also includes the processing performance necessary to support key functions needed in the implementation of algorithms for computer-aided diagnosis. The paper includes a description of the workstation architecture, and specifically its image processing subsystem. Verification of the design through hardware simulation is then discussed, and finally, performance of selected algorithms based on detailed simulation is provided.

  1. Differences Between Distributed and Parallel Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brightwell, R.; Maccabe, A.B.; Rissen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  2. MEDUSA - An overset grid flow solver for network-based parallel computer systems

    Science.gov (United States)

    Smith, Merritt H.; Pallis, Jani M.

    1993-01-01

    Continuing improvement in processing speed has made it feasible to solve the Reynolds-Averaged Navier-Stokes equations for simple three-dimensional flows on advanced workstations. Combining multiple workstations into a network-based heterogeneous parallel computer allows the application of programming principles learned on MIMD (Multiple Instruction Multiple Data) distributed memory parallel computers to the solution of larger problems. An overset-grid flow solution code has been developed which uses a cluster of workstations as a network-based parallel computer. Inter-process communication is provided by the Parallel Virtual Machine (PVM) software. Solution speed equivalent to one-third of a Cray-YMP processor has been achieved from a cluster of nine commonly used engineering workstation processors. Load imbalance and communication overhead are the principal impediments to parallel efficiency in this application.

  3. Applications of the parallel computing system using network

    International Nuclear Information System (INIS)

    Ido, Shunji; Hasebe, Hiroki

    1994-01-01

    Parallel programming is applied to multiple processors connected by Ethernet. Data exchanges between tasks located in each processing element are realized in two ways. One is the socket interface, which is a standard library on recent UNIX operating systems. The other is Parallel Virtual Machine (PVM), free network-connecting software developed by ORNL that allows many workstations connected to a network to be used as a parallel computer. This paper discusses the feasibility of parallel computing using a network of UNIX workstations and compares it with specialized parallel systems (Transputer and iPSC/860) for a Monte Carlo simulation, which generally shows a high parallelization ratio. (author)

  4. A Massively Parallel Code for Polarization Calculations

    Science.gov (United States)

    Akiyama, Shizuka; Höflich, Peter

    2001-03-01

    We present an implementation of our Monte-Carlo radiation transport method for rapidly expanding, NLTE atmospheres for massively parallel computers which utilizes both the distributed and shared memory models. This allows us to take full advantage of the fast communication and low latency inherent to nodes with multiple CPUs, and to stretch the limits of scalability with the number of nodes compared to a version based on the shared memory model. Test calculations on a local 20-node Beowulf cluster with dual CPUs showed improved scalability of about 40%.
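
    The hybrid scheme described above, distributed memory between nodes and shared memory within a dual-CPU node, is commonly expressed as MPI across the nodes with threads inside each rank. The skeleton below is a hedged illustration of that combination only, not the authors' code; the sampling kernel, thread count and sample counts are invented.

        /* Hybrid skeleton (illustration only): MPI between nodes, POSIX threads
         * within a node.  Each rank starts one thread per local CPU, each thread
         * accumulates a private score, the rank sums its threads' scores, and
         * MPI_Reduce combines the per-rank totals on rank 0. */
        #include <mpi.h>
        #include <pthread.h>
        #include <stdio.h>

        #define NTHREADS 2                    /* e.g. dual-CPU nodes */
        #define SAMPLES_PER_THREAD 500000L

        struct job { unsigned long long state; double score; };

        static void *sample(void *arg)        /* toy Monte Carlo sampling kernel */
        {
            struct job *j = arg;
            unsigned long long s = j->state;
            double sum = 0.0;
            for (long i = 0; i < SAMPLES_PER_THREAD; ++i) {
                s = s * 6364136223846793005ULL + 1442695040888963407ULL;  /* LCG */
                sum += (double)(s >> 11) / 9007199254740992.0;            /* to [0,1) */
            }
            j->score = sum;
            return NULL;
        }

        int main(int argc, char **argv)
        {
            int rank, provided;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            pthread_t tid[NTHREADS];
            struct job jobs[NTHREADS];
            for (int t = 0; t < NTHREADS; ++t) {
                jobs[t].state = 1000ULL * rank + t + 1;   /* distinct streams */
                jobs[t].score = 0.0;
                pthread_create(&tid[t], NULL, sample, &jobs[t]);
            }
            double node_score = 0.0;
            for (int t = 0; t < NTHREADS; ++t) {
                pthread_join(tid[t], NULL);
                node_score += jobs[t].score;
            }

            double total = 0.0;               /* only the main thread touches MPI */
            MPI_Reduce(&node_score, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0)
                printf("total score: %f (MPI thread level granted: %d)\n", total, provided);
            MPI_Finalize();
            return 0;
        }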

  5. [PVFS 2000: An operational parallel file system for Beowulf

    Science.gov (United States)

    Ligon, Walt

    2004-01-01

    The approach has been to develop Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code. The architecture comprises server and client components. BMI is the network abstraction layer. It is designed with a common driver and modules for each protocol supported. The interface is non-blocking, and provides mechanisms for optimizations including pinning user buffers. Currently TCP/IP and GM (Myrinet) modules have been implemented. Trove is the storage abstraction layer. It provides for storing both data spaces and name/value pairs. Trove can also be implemented using different underlying storage mechanisms including native files, raw disk partitions, SQL and other databases. The current implementation uses native files for data spaces and Berkeley DB for name/value pairs.

  6. Optimisation of a parallel ocean general circulation model

    Science.gov (United States)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  9. VMware workstation

    CERN Document Server

    van Vugt, Sander

    2013-01-01

    This book is a practical, step-by-step guide to creating and managing virtual machines using VMware Workstation. VMware Workstation: No Experience Necessary is for developers as well as system administrators who want to efficiently set up a test environment. You should have basic networking knowledge, and prior experience with virtual machines and VMware Player would be beneficial.

  10. Parallelization of ITOUGH2 using PVM

    International Nuclear Information System (INIS)

    Finsterle, Stefan

    1998-01-01

    ITOUGH2 inversions are computationally intensive because the forward problem must be solved many times to evaluate the objective function for different parameter combinations or to numerically calculate sensitivity coefficients. Most of these forward runs are independent of each other and can therefore be performed in parallel. Message passing based on the Parallel Virtual Machine (PVM) system has been implemented in ITOUGH2 to enable parallel processing of ITOUGH2 jobs on a heterogeneous network of Unix workstations. This report describes the PVM system and its implementation in ITOUGH2. Instructions are given for installing PVM, compiling ITOUGH2-PVM for use on a workstation cluster, the preparation of an ITOUGH2 input file under PVM, and the execution of an ITOUGH2-PVM application. Examples are discussed, demonstrating the use of ITOUGH2-PVM.
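
    Because the forward runs are independent, the PVM layer essentially acts as a job dispatcher. The report itself documents the real implementation; purely as an illustration of the PVM calls involved, a stripped-down master that spawns worker tasks, hands each the index of a parameter set and collects one result value back might look like the sketch below. The task name forward_worker, the message tags and the counts are invented, and a matching worker would simply mirror the receive/send calls.

        /* Illustrative PVM master (not the ITOUGH2-PVM source): spawn worker
         * tasks on the virtual machine, send each the index of a parameter set,
         * and collect one objective-function value back from each worker. */
        #include <pvm3.h>
        #include <stdio.h>

        #define NWORKERS   4
        #define TAG_JOB    10
        #define TAG_RESULT 11

        int main(void)
        {
            int tids[NWORKERS];

            pvm_mytid();                          /* enrol this process in PVM */

            /* "forward_worker" is an assumed worker executable that receives an
             * index, runs one forward simulation and packs back (index, value). */
            int started = pvm_spawn("forward_worker", (char **)0, PvmTaskDefault,
                                    "", NWORKERS, tids);

            for (int i = 0; i < started; ++i) {   /* one parameter set per worker */
                pvm_initsend(PvmDataDefault);
                pvm_pkint(&i, 1, 1);
                pvm_send(tids[i], TAG_JOB);
            }
            for (int i = 0; i < started; ++i) {   /* gather objective-function values */
                int idx;
                double value;
                pvm_recv(-1, TAG_RESULT);
                pvm_upkint(&idx, 1, 1);
                pvm_upkdouble(&value, 1, 1);
                printf("parameter set %d -> objective %g\n", idx, value);
            }
            pvm_exit();
            return 0;
        }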

  11. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  12. Parallel Evolutionary Optimization for Neuromorphic Network Training

    Energy Technology Data Exchange (ETDEWEB)

    Schuman, Catherine D [ORNL; Disney, Adam [University of Tennessee (UT); Singh, Susheela [North Carolina State University (NCSU), Raleigh; Bruer, Grant [University of Tennessee (UT); Mitchell, John Parker [University of Tennessee (UT); Klibisz, Aleksander [University of Tennessee (UT); Plank, James [University of Tennessee (UT)

    2016-01-01

    One of the key impediments to the success of current neuromorphic computing architectures is the issue of how best to program them. Evolutionary optimization (EO) is one promising programming technique; in particular, its wide applicability makes it especially attractive for neuromorphic architectures, which can have many different characteristics. In this paper, we explore different facets of EO on a spiking neuromorphic computing model called DANNA. We focus on the performance of EO in the design of our DANNA simulator, and on how to structure EO on both multicore and massively parallel computing systems. We evaluate how our parallel methods impact the performance of EO on Titan, the U.S.'s largest open science supercomputer, and BOB, a Beowulf-style cluster of Raspberry Pi's. We also focus on how to improve the EO by evaluating commonality in higher performing neural networks, and present the result of a study that evaluates the EO performed by Titan.

  13. Speed up of MCACE, a Monte Carlo code for evaluation of shielding safety, by parallel computer, (3)

    International Nuclear Information System (INIS)

    Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka; Onodera, Emi; Imawaka, Tsuneyuki; Yoda, Yoshihisa.

    1993-07-01

    The parallel computing of the MCACE code has been studied on two platforms: 1) the shared-memory vector-parallel computer Monte-4 and 2) a network of several workstations. On the Monte-4, a disk file was allocated to collect all results computed in parallel by 4 CPUs, each executing a copy of the MCACE code. On the workstations under a network environment, two parallel models have been evaluated: 1) a host-node model and 2) the model used on the Monte-4, in which no parallelization software was employed, only standard FORTRAN. The measurement of computing times has shown that a speed-up of about 3 times was achieved by using 4 CPUs of the Monte-4. Further, with 4 workstations connected by a network, the parallelized computation was faster than our scalar mainframe computer, FACOM M-780. (author)

  14. NET remote workstation

    International Nuclear Information System (INIS)

    Leinemann, K.

    1990-10-01

    The goal of this NET study was to define the functionality of a remote handling workstation and its hardware and software architecture. The remote handling workstation has to fulfill two basic functions: (1) to provide the man-machine interface (MMI), that is, the interface to the control system of the maintenance equipment and to the working environment (telepresence), and (2) to provide high-level (task-level) supporting functions (software tools) during the maintenance work and in the preparation phase. Concerning the man-machine interface, an important module of the remote handling workstation besides the standard components of man-machine interfacing is a module for graphical scene presentation supplementing viewing by TV. The technique of integrated viewing is well known from JET BOOM and TARM control using the GBsim and KISMET software. For integration of equipment-dependent MMI functions the remote handling workstation provides a special software module interface. Task-level support of the operator is based on (1) spatial (geometric/kinematic) models, (2) remote handling procedure models, and (3) functional models of the equipment. These models and the related simulation modules are used for planning, programming, execution monitoring, and training. The workstation provides an intelligent handbook guiding the operator through planned procedures illustrated by animated graphical sequences. For unplanned situations decision aids are available. A central point of the architectural design was to guarantee a high flexibility with respect to hardware and software. Therefore the remote handling workstation is designed as an open system based on widely accepted standards allowing the stepwise integration of the various modules, starting with the basic MMI and the spatial simulation as standard components. (orig./HP)

  15. Iterative solution of general sparse linear systems on clusters of workstations

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Gen-Ching; Saad, Y. [Univ. of Minnesota, Minneapolis, MN (United States)

    1996-12-31

    Solving sparse irregularly structured linear systems on parallel platforms poses several challenges. First, sparsity makes it difficult to exploit data locality, whether in a distributed or shared memory environment. A second, perhaps more serious, challenge is to find efficient ways to precondition the system. Preconditioning techniques which have a large degree of parallelism, such as multicolor SSOR, often have a slower rate of convergence than their sequential counterparts. Finally, a number of other computational kernels such as inner products could erode any gains from parallel speed-ups, and this is especially true on workstation clusters where start-up times may be high. In this paper we discuss these issues and report on our experience with PSPARSLIB, an on-going project for building a library of parallel iterative sparse matrix solvers.
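
    The remark about inner products is worth spelling out: in a distributed Krylov solver every dot product is a global reduction, and on an Ethernet-connected workstation cluster the fixed message start-up cost of that reduction can dominate. The fragment below is a hedged sketch of the kernel, not PSPARSLIB code; the local slice length is arbitrary.

        /* Distributed inner product, the small-but-global kernel discussed above
         * (illustration only, not PSPARSLIB code).  Each rank owns a slice of the
         * vectors; one MPI_Allreduce makes the scalar available everywhere, and on
         * a slow LAN its latency is paid once per dot product, every iteration. */
        #include <mpi.h>
        #include <stdio.h>

        #define NLOCAL 100000                  /* assumed local slice length */

        int main(int argc, char **argv)
        {
            static double x[NLOCAL], y[NLOCAL];
            int rank;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            for (int i = 0; i < NLOCAL; ++i) { /* fill the local slices */
                x[i] = 1.0;
                y[i] = 0.5;
            }

            double local = 0.0, global = 0.0;
            for (int i = 0; i < NLOCAL; ++i)
                local += x[i] * y[i];
            MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

            if (rank == 0)
                printf("global dot product = %f\n", global);
            MPI_Finalize();
            return 0;
        }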

  16. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review of the parallel programming package CPS (Cooperative Processes Software) developed and used at Fermilab for offline reconstruction of Terabytes of data requiring the delivery of hundreds of Vax-Years per experiment is given. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing starting with the ACP (Advanced Computer Project) Farms in 1986. The Fermilab UNIX Farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for management, control and monitoring these large systems will be described. Possible future directions for parallel computing in High Energy Physics will be given

  18. Engineering workstation: Sensor modeling

    Science.gov (United States)

    Pavel, M; Sweet, B.

    1993-01-01

    The purpose of the engineering workstation is to provide an environment for rapid prototyping and evaluation of fusion and image processing algorithms. Ideally, the algorithms are designed to optimize the extraction of information that is useful to a pilot for all phases of flight operations. Successful design of effective fusion algorithms depends on the ability to characterize both the information available from the sensors and the information useful to a pilot. The workstation is comprised of subsystems for simulation of sensor-generated images, image processing, image enhancement, and fusion algorithms. As such, the workstation can be used to implement and evaluate both short-term solutions and long-term solutions. The short-term solutions are being developed to enhance a pilot's situational awareness by providing information in addition to his direct vision. The long term solutions are aimed at the development of complete synthetic vision systems. One of the important functions of the engineering workstation is to simulate the images that would be generated by the sensors. The simulation system is designed to use the graphics modeling and rendering capabilities of various workstations manufactured by Silicon Graphics Inc. The workstation simulates various aspects of the sensor-generated images arising from phenomenology of the sensors. In addition, the workstation can be used to simulate a variety of impairments due to mechanical limitations of the sensor placement and due to the motion of the airplane. Although the simulation is currently not performed in real-time, sequences of individual frames can be processed, stored, and recorded in a video format. In that way, it is possible to examine the appearance of different dynamic sensor-generated and fused images.

  19. Workstations studies and radiation protection

    International Nuclear Information System (INIS)

    Lahaye, T.; Donadille, L.; Rehel, J.L.; Paquet, F.; Beneli, C.; Cordoliani, Y.S.; Vrigneaud, J.M.; Gauron, C.; Petrequin, A.; Frison, D.; Jeannin, B.; Charles, D.; Carballeda, G.; Crouail, P.; Valot, C.

    2006-01-01

    This meeting on workstation studies for the follow-up of workers was organised by the research and health section. Intended for company doctors, persons competent in radiation protection and safety engineers, it presented examples of methodologies and applications in the medical, industrial and research domains, thus contributing to a better understanding and application of regulatory measures. The analysis of a workstation should allow a reduction of exposures and risks and lead to the optimization of the medical follow-up. The agenda of this day included the following subjects: evolution of the regulation concerning the demarcation of regulated zones where measures for worker protection are strengthened; presentation of the I.R.S.N. guide to carrying out a workstation study; implementation of a workstation study: the case of radiology; workstation studies in the research area; Is it necessary to impose operational dosimetry in radiodiagnostic services? The experience feedback of a person competent in radiation protection (P.C.R.) in a hospital environment; radiation protection: elaboration of a good practices guide in the medical field; the activities file in nuclear power plants: a risk evaluation tool for prevention, with methodological presentation and examples; isolated workstation study; the experience feedback of a service provider; contribution of ergonomics to the characterization of determinants in ionizing radiation exposure situations; workstation studies for internal contamination in fuel cycle facilities and the consideration of the results in the medical follow-up; R.E.L.I.R.: the necessity of workstation studies; the consideration of the human factor. (N.C.)

  20. Accelerating epistasis analysis in human genetics with consumer graphics hardware

    Directory of Open Access Journals (Sweden)

    Cancare Fabio

    2009-07-01

    Background: Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. Findings: We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective performance.

  1. Accelerating epistasis analysis in human genetics with consumer graphics hardware.

    Science.gov (United States)

    Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H

    2009-07-24

    Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective performance while leaving the CPU available for other

  2. Parallelization of MCNP4 code by using simple FORTRAN algorithms

    International Nuclear Information System (INIS)

    Yazid, P.I.; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka.

    1993-12-01

    Simple FORTRAN algorithms, relying only on open, close, read and write statements, together with disk files and some UNIX commands, have been applied to the parallelization of MCNP4. The code, named MCNPNFS, maintains almost all capabilities of MCNP4 in solving shielding problems. It is able to perform parallel computing on any set of UNIX workstations connected by a network, regardless of hardware heterogeneity, provided that all processors produce binary files in the same format. Further, it is confirmed that MCNPNFS can also be executed on the Monte-4 vector-parallel computer. MCNPNFS has been tested intensively by executing 5 photon-neutron benchmark problems, a spent fuel cask problem and 17 sample problems included in the original code package of MCNP4. Three different workstations, connected by a network, have been used to execute MCNPNFS in parallel. By measuring CPU time, the parallel efficiency is determined to be 58% to 99%, with an average of 86%. On the Monte-4, MCNPNFS has been executed using 4 processors concurrently and achieved a parallel efficiency of 79% on average. (author)
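
    The scheme is deliberately low-tech: each workstation runs its own copy of the code, writes its results to a binary file in a shared (NFS) directory, and a final step reads the files back and sums them, the only portability requirement being, as noted above, a common binary format. The fragment below sketches that merge step; it is shown in C rather than the code's FORTRAN, and the file names, tally size and node count are invented.

        /* Sketch of the file-based merge step behind disk-file parallelization
         * (illustration only, in C rather than the original FORTRAN).  Each node
         * is assumed to have written its tallies as NTALLY doubles to a file
         * named node_<n>.bin in a shared NFS directory; this program reads the
         * files back and sums them into a combined tally. */
        #include <stdio.h>

        #define NTALLY 256          /* assumed number of tally bins */
        #define NNODES 3            /* assumed number of worker workstations */

        int main(void)
        {
            double total[NTALLY] = { 0.0 };
            double buf[NTALLY];
            char name[64];

            for (int n = 0; n < NNODES; ++n) {
                snprintf(name, sizeof name, "node_%d.bin", n);
                FILE *fp = fopen(name, "rb");
                if (fp == NULL) {
                    fprintf(stderr, "missing result file %s\n", name);
                    return 1;
                }
                if (fread(buf, sizeof(double), NTALLY, fp) != NTALLY) {
                    fprintf(stderr, "short read from %s\n", name);
                    fclose(fp);
                    return 1;
                }
                fclose(fp);
                for (int i = 0; i < NTALLY; ++i)
                    total[i] += buf[i];
            }
            printf("combined tally bin 0: %f\n", total[0]);
            return 0;
        }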

  3. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

    We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  4. Massively parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Krasheninnikov, S.I.; Craddock, G.G.; Djordjevic, V.

    1996-01-01

    The Fokker-Planck code ALLA, recently developed for workstations, simulates the temporal evolution of 1V, 2V and 1D2V collisional edge plasmas. In this work we present the results of parallelizing the code on the CRI T3D massively parallel platform (the ALLAp version). Simultaneously we benchmark the 1D2V parallel version against an analytic self-similar solution of the collisional kinetic equation. This test is not trivial as it demands a very strong spatial temperature and density variation within the simulation domain. (orig.)

  5. Performance assessment of advanced engineering workstations for fuel management applications

    International Nuclear Information System (INIS)

    Turinsky, P.J.

    1989-07-01

    The purpose of this project was to assess the performance of an advanced engineering workstation [AEW] with regard to applications in incore fuel management for LWRs. The attributes of most interest to us that define an AEW are parallel computational hardware and graphics capabilities. The AEWs employed were super-microcomputers manufactured by MASSCOMP, Inc. These computers utilize a 32-bit architecture, a graphics co-processor, multiple CPUs [up to six] attached to common memory, and multiple vector accelerators. 7 refs., 33 figs., 4 tabs

  6. Control of a pulse height analyzer using an RDX workstation

    International Nuclear Information System (INIS)

    Montelongo, S.; Hunt, D.N.

    1984-12-01

    The Nuclear Chemistry Division of Lawrence Livermore National Laboratory is in the midst of upgrading its radiation counting facilities to automate data acquisition and quality control. This upgrade requires control of a pulse height analyzer (PHA) from an interactive LSI-11/23 workstation running RSX-11M. The PHA is a microcomputer-based multichannel analyzer system providing data acquisition, storage, display, manipulation and input/output from up to four independent acquisition interfaces. Control of the analyzer includes reading and writing energy spectra, issuing commands, and servicing device interrupts. The analyzer communicates with the host system over a 9600-baud serial line using the Digital Data Communications link level Protocol (DDCMP). We relieved the RSX workstation CPU of the DDCMP overhead by implementing a DEC-compatible, in-house-designed DMA serial line board (the ISL-11) to communicate with the analyzer. An RSX I/O device driver was written to complete the path between the analyzer and the RSX system by providing the link between the communication board and an application task. The I/O driver is written to handle several ISL-11 cards all operating in parallel, thus providing support for control of multiple analyzers from a single workstation. The RSX device driver, its design and use by application code controlling the analyzer, and its operating environment will be discussed.

  7. Parallel solution of the time-dependent Ginzburg-Landau equations and other experiences using BlockComm-Chameleon and PCN on the IBM SP, Intel iPSC/860, and clusters of workstations

    International Nuclear Information System (INIS)

    Coskun, E.

    1995-09-01

    Time-dependent Ginzburg-Landau (TDGL) equations are considered for modeling a finite-size thin-film superconductor placed in a magnetic field. The problem then leads to the use of so-called natural boundary conditions. The computational domain is partitioned into subdomains and bond variables are used in obtaining the corresponding discrete system of equations. An efficient time-differencing method based on the Forward Euler method is developed. Finally, a variable-strength magnetic field resulting in vortex motion in Type II high-Tc superconducting films is introduced. The authors tackled the problem using two different state-of-the-art parallel computing tools: BlockComm/Chameleon and PCN. They had access to two high-performance distributed memory supercomputers: the Intel iPSC/860 and IBM SP1. They also tested the codes using, as a parallel computing environment, a cluster of Sun Sparc workstations

  8. From parallel to distributed computing for reactive scattering calculations

    International Nuclear Information System (INIS)

    Lagana, A.; Gervasi, O.; Baraglia, R.

    1994-01-01

    Some reactive scattering codes have been ported to different innovative computer architectures ranging from massively parallel machines to clustered workstations. The porting has required a drastic restructuring of the codes to single out computationally decoupled, CPU-intensive subsections. The suitability of different theoretical approaches for parallel and distributed computing restructuring is discussed and the efficiency of related algorithms evaluated

  9. Implementations of BLAST for parallel computers.

    Science.gov (United States)

    Jülich, A

    1995-02-01

    The BLAST sequence comparison programs have been ported to a variety of parallel computers: the shared memory machine Cray Y-MP 8/864 and the distributed memory architectures Intel iPSC/860 and nCUBE. Additionally, the programs were ported to run on workstation clusters. We explain the parallelization techniques and consider the pros and cons of these methods. The BLAST programs are very well suited for parallelization for a moderate number of processors. We illustrate our results using the program blastp as an example. As input data for blastp, a 799 residue protein query sequence and the protein database PIR were used.

  10. Advanced Satellite Workstation - An integrated workstation environment for operational support of satellite system planning and analysis

    Science.gov (United States)

    Hamilton, Marvin J.; Sutton, Stewart A.

    A prototype integrated environment, the Advanced Satellite Workstation (ASW), which was developed and delivered for evaluation and operator feedback in an operational satellite control center, is described. The current ASW hardware consists of a Sun workstation and a Macintosh II workstation connected via Ethernet, network hardware and software, a laser disk system, an optical storage system, and a telemetry data file interface. The central objective of ASW is to provide an intelligent decision support and training environment for operators/analysts of complex systems such as satellites. Compared to the many recent workstation implementations that incorporate graphical telemetry displays and expert systems, ASW provides a considerably broader look at intelligent, integrated environments for decision support, based on the premise that the central features of such an environment are intelligent data access and integrated toolsets.

  11. VAX Professional Workstation goes graphic

    International Nuclear Information System (INIS)

    Downward, J.G.

    1984-01-01

    The VAX Professional Workstation (VPW) is a collection of programs and procedures designed to provide an integrated workstation environment for the staff at KMS Fusion's research laboratories. During the past year numerous capabilities have been added to VPW, including support for VT125/VT240/4014 graphic workstations, editing windows, and additional desk utilities. Graphics workstation support allows users to create, edit, and modify graph data files, enter the data via a graphic tablet, create simple plots with DATATRIEVE or DECgraph on ReGIS terminals, or elaborate plots with TEKGRAPH on ReGIS or Tektronix terminals. Users may assign and display error bars for the data and interactively plot it in a variety of ways. Users also can create and display viewgraphs. Hard copy output for a large network of office terminals is obtained by multiplexing each terminal's video output into a recently developed video multiplexer front-ending a single-channel video hard copy unit

  12. Performance of the coupled thermalhydraulics/neutron kinetics code R/P/C on workstation clusters and multiprocessor systems

    International Nuclear Information System (INIS)

    Hammer, C.; Paffrath, M.; Boeer, R.; Finnemann, H.; Jackson, C.J.

    1996-01-01

    The light water reactor core simulation code PANBOX has been coupled with the transient analysis code RELAP5 for the purpose of performing plant safety analyses with a three-dimensional (3-D) neutron kinetics model. The system has been parallelized to improve the computational efficiency. The paper describes the features of this system with emphasis on performance aspects. Performance results are given for different types of parallelization, i.e., for using an automatic parallelizing compiler, using the portable PVM platform on a workstation cluster, using PVM on a shared memory multiprocessor, and for using machine-dependent interfaces. (author)

  13. "Leodum Lidost on Lofgeornost". La poesía épica de "Beowulf" en nuevos formatos gráficos y visuales

    OpenAIRE

    Bueno Alonso, Jorge Luis

    2007-01-01

    The new formats we have nowadays for the transmission of knowledge are heavily modifying our relationship with the products of our culture. They also give us new possibilities to deal with literary texts, which constitutes a very important step in the transmission of medieval literature through popular culture. In its age, Beowulf entertained the audience of the meadhall. It was the best-seller of the day, the successful potboiler movie of Anglo-Saxon England. In our time its story of men an...

  14. Parallelising a molecular dynamics algorithm on a multi-processor workstation

    Science.gov (United States)

    Müller-Plathe, Florian

    1990-12-01

    The Verlet neighbour-list algorithm is parallelised for a multi-processor Hewlett-Packard/Apollo DN10000 workstation. The implementation makes use of memory shared between the processors. It is a genuine master-slave approach by which most of the computational tasks are kept in the master process and the slaves are only called to do part of the nonbonded forces calculation. The implementation features elements of both fine-grain and coarse-grain parallelism. Apart from three calls to library routines, two of which are standard UNIX calls, and two machine-specific language extensions, the whole code is written in standard Fortran 77. Hence, it may be expected that this parallelisation concept can be transferred in parts or as a whole to other multi-processor shared-memory computers. The parallel code is routinely used in production work.
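
    For illustration only, here is a minimal sketch in C of the same master-slave idea, with OpenMP standing in for the machine-specific shared-memory extensions the abstract mentions; it is not the paper's code. Only the nonbonded force loop over the Verlet neighbour list is handed to the slave threads, and all names (natoms, nbr_start, nbr_list, pos, force) are illustrative.

```c
/* Hedged sketch: OpenMP stand-in for a shared-memory master-slave force loop.
 * The neighbour list is assumed to store, for each atom i, all of its
 * neighbours, so every thread writes only to force[i] and no races occur. */
void nonbonded_forces(int natoms, const int *nbr_start, const int *nbr_list,
                      const double *pos, double *force)
{
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < natoms; i++) {
        double fi = 0.0;
        for (int k = nbr_start[i]; k < nbr_start[i + 1]; k++) {
            int j = nbr_list[k];
            double dx = pos[i] - pos[j];
            fi += -dx;            /* placeholder pair force, not a real potential */
        }
        force[i] += fi;           /* the rest of the MD step stays in the master */
    }
}
```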

  15. [PACS-based endoscope image acquisition workstation].

    Science.gov (United States)

    Liu, J B; Zhuang, T G

    2001-01-01

    A practical PACS-based Endoscope Image Acquisition Workstation is here introduced. Using a multimedia video card, the endoscope video is digitized and captured dynamically or statically into the computer. This workstation realizes a variety of functions, such as acquisition and display of the endoscope video, as well as the editing, processing, management, storage, printing, and communication of related information. Together with other medical image workstations, it can make up the image sources of a PACS for hospitals. In addition, it can also act as an independent endoscopy diagnostic system.

  16. Parallelization of simulation code for liquid-gas model of lattice-gas fluid

    International Nuclear Information System (INIS)

    Kawai, Wataru; Ebihara, Kenichi; Kume, Etsuo; Watanabe, Tadashi

    2000-03-01

    A simulation code for hydrodynamical phenomena which is based on the liquid-gas model of lattice-gas fluid is parallelized by using the MPI (Message Passing Interface) library. The parallelized code can be applied to larger simulations than the non-parallelized code. The calculation times of the parallelized code on VPP500 (a vector-parallel supercomputer with distributed memory units), AP3000 (a scalar-parallel server with distributed memory units), and a workstation cluster decreased in inverse proportion to the number of processors. (author)
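
    As a rough illustration of how such an MPI parallelization is usually organized (assumptions only, not the authors' code), the sketch below shows a one-dimensional domain decomposition with ghost-cell exchange between neighbouring processes; the field layout, neighbour logic and the local update are placeholders.

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, nlocal = 1000;             /* lattice sites per process (illustrative) */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *f = calloc(nlocal + 2, sizeof *f); /* one ghost cell at each end */
    int left  = (rank - 1 + size) % size;      /* periodic neighbours */
    int right = (rank + 1) % size;

    for (int step = 0; step < 100; step++) {
        /* exchange boundary sites with neighbours before the local update */
        MPI_Sendrecv(&f[nlocal], 1, MPI_DOUBLE, right, 0,
                     &f[0],      1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&f[1],          1, MPI_DOUBLE, left,  1,
                     &f[nlocal + 1], 1, MPI_DOUBLE, right, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* ... local collision/propagation update on f[1..nlocal] would go here ... */
    }

    free(f);
    MPI_Finalize();
    return 0;
}
```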

  17. Office ergonomics: deficiencies in computer workstation design.

    Science.gov (United States)

    Shikdar, Ashraf A; Al-Kindi, Mahmoud A

    2007-01-01

    The objective of this research was to study and identify ergonomic deficiencies in computer workstation design in typical offices. Physical measurements and a questionnaire were used to study 40 workstations. Major ergonomic deficiencies were found in physical design and layout of the workstations, employee postures, work practices, and training. The consequences in terms of user health and other problems were significant. Forty-five percent of the employees used nonadjustable chairs, 48% of computers faced windows, 90% of the employees used computers more than 4 hrs/day, 45% of the employees adopted bent and unsupported back postures, and 20% used office tables for computers. Major problems reported were eyestrain (58%), shoulder pain (45%), back pain (43%), arm pain (35%), wrist pain (30%), and neck pain (30%). These results indicated serious ergonomic deficiencies in office computer workstation design, layout, and usage. Strategies to reduce or eliminate ergonomic deficiencies in computer workstation design were suggested.

  18. Imaging workstations for computer-aided primatology: promises and pitfalls.

    Science.gov (United States)

    Vannier, M W; Conroy, G C

    1989-01-01

    In this paper, the application of biomedical imaging workstations to primatology will be explained and evaluated. The technological basis, computer hardware and software aspects, and the various uses of several types of workstations will all be discussed. The types of workstations include: (1) Simple - these display-only workstations, which function as electronic light boxes, have applications as terminals to picture archiving and communication (PAC) systems. (2) Diagnostic reporting - image-processing workstations that include the ability to perform straightforward manipulations of gray scale and raw data values will be considered for operations such as histogram equalization (whether adaptive or global), gradient edge finders, contour generation, and region of interest, as well as other related functions. (3) Manipulation systems - three-dimensional modeling and computer graphics with application to radiation therapy treatment planning, and surgical planning and evaluation will be considered. A technology of prime importance in the function of these workstations lies in communications and networking. The hierarchical organization of an electronic computer network and workstation environment with the interrelationship of simple, diagnostic reporting, and manipulation workstations to a coaxial or fiber optic network will be analyzed.

  19. The concepts and functions of a FEM workstation

    International Nuclear Information System (INIS)

    Brown, R.R.; Gloudeman, J.F.

    1982-01-01

    Recent advances in microprocessor-based computer hardware and associated software provide a basis for the development of a FEM workstation. The key requirements for such a workstation are reviewed and the recent hardware and software developments are discussed that make such a workstation both technically and economically feasible at this time. (orig.)

  20. A RISC/UNIX workstation second stage trigger

    International Nuclear Information System (INIS)

    Foreman, W.M.; Amann, J.F.; Fu, S.; Kozlowski, T.; Naivar, F.J.; Oothoudt, M.A.; Shelley, F.

    1992-01-01

    Recent advances in Reduced Instruction Set Computer (RISC) workstations have greatly altered the economics of processing power available for experiments. In addition VME interfaces available for many of these workstations make it possible to use them in experiment frontends for filtering and compressing data. Such a second stage trigger has been implemented at LAMPF using a commercially available workstation and VME interface. The implementation is described and measurements of data transfer speeds are presented in this paper

  1. Integrated telemedicine workstation for intercontinental grand rounds

    Science.gov (United States)

    Willis, Charles E.; Leckie, Robert G.; Brink, Linda; Goeringer, Fred

    1995-04-01

    The Telemedicine Spacebridge to Moscow was a series of intercontinental sessions sponsored jointly by NASA and the Moscow Academy of Medicine. To improve the quality of medical images presented, the MDIS Project developed a workstation for acquisition, storage, and interactive display of radiology and pathology images. The workstation was based on a Macintosh IIfx platform with a laser digitizer for radiographs and video capture capability for microscope images. Images were transmitted via the Russian Lyoutch satellite, which had only a single video channel available and no high speed data channels. Two workstations were configured: one for use at the Uniformed Services University of Health Sciences in Bethesda, MD, and the other for use at the Hospital of the Interior in Moscow, Russia. The two workstations were used many times during 16 sessions. As clinicians used the systems, we modified the original configuration to improve interactive use. This project demonstrated that numerous acquisition and output devices could be brought together in a single interactive workstation. The video images were satisfactory for remote consultation in a grand rounds format.

  2. EPRI engineering workstation software - Discussion and demonstration

    International Nuclear Information System (INIS)

    Stewart, R.P.; Peterson, C.E.; Agee, L.J.

    1992-01-01

    Computing technology is undergoing significant changes with respect to engineering applications in the electric utility industry. These changes result mainly from the introduction of several UNIX workstations that provide mainframe calculational capability at much lower costs. The workstations are being coupled with microcomputers through local area networks to provide engineering groups with a powerful and versatile analysis capability. PEGASYS, the Professional Engineering Graphic Analysis System, is a software package for use with engineering analysis codes executing in a workstation environment. PEGASYS has a menu-driven, user-friendly interface to provide pre-execution support for preparing input and graphical packages for post-execution analysis and on-line monitoring capability for engineering codes. The initial application of this software is for use with RETRAN-02 operating on an IBM RS/6000 workstation using X-Windows/UNIX and a personal computer under DOS

  3. Zoning and workstation analysis in interventional cardiology

    International Nuclear Information System (INIS)

    Degrange, J.P.

    2009-01-01

    As interventional cardiology can induce high doses not only for patients but also for the personnel, the delimitation of regulated areas (or zoning) and workstation analysis (dosimetry) are very important in terms of radioprotection. This paper briefly recalls methods and tools for the different steps to perform zoning and workstation analysis. It outlines the peculiarities of interventional cardiology, presents methods and tools adapted to interventional cardiology, and then discusses the same issues but for workstation analysis. It also outlines specific problems which can be met, and their possible adapted solutions

  4. Parallel preconditioned conjugate gradient algorithm applied to neutron diffusion problem

    International Nuclear Information System (INIS)

    Majumdar, A.; Martin, W.R.

    1992-01-01

    Numerical solution of the neutron diffusion problem requires solving a linear system of equations such as Ax = b, where A is an n x n symmetric positive definite (SPD) matrix; x and b are vectors with n components. The preconditioned conjugate gradient (PCG) algorithm is an efficient iterative method for solving such a linear system of equations. In this paper, the authors describe the implementation of a parallel PCG algorithm on a shared memory machine (BBN TC2000) and in a distributed workstation (IBM RS6000) environment created with the Parallel Virtual Machine (PVM) software
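
    To make the iteration concrete, here is a minimal serial sketch in C of PCG with a simple Jacobi (diagonal) preconditioner; it is illustrative only, not the authors' implementation. In the parallel versions described in the abstract, the matrix-vector product and the dot products are the operations distributed across processors.

```c
#include <math.h>
#include <stdlib.h>

static double dot(const double *x, const double *y, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) s += x[i] * y[i];
    return s;
}

static void matvec(const double *A, const double *x, double *y, int n)
{
    for (int i = 0; i < n; i++) {
        y[i] = 0.0;
        for (int j = 0; j < n; j++) y[i] += A[i * n + j] * x[j];
    }
}

/* Solve A x = b for a dense SPD matrix A with Jacobi-preconditioned CG. */
void pcg(const double *A, const double *b, double *x, int n, int maxit, double tol)
{
    double *r = malloc(n * sizeof *r), *z = malloc(n * sizeof *z);
    double *p = malloc(n * sizeof *p), *q = malloc(n * sizeof *q);

    matvec(A, x, q, n);
    for (int i = 0; i < n; i++) r[i] = b[i] - q[i];           /* r = b - A x */
    for (int i = 0; i < n; i++) z[i] = r[i] / A[i * n + i];   /* z = M^-1 r  */
    for (int i = 0; i < n; i++) p[i] = z[i];
    double rz = dot(r, z, n);

    for (int k = 0; k < maxit && sqrt(dot(r, r, n)) > tol; k++) {
        matvec(A, p, q, n);                                   /* q = A p     */
        double alpha = rz / dot(p, q, n);
        for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * q[i]; }
        for (int i = 0; i < n; i++) z[i] = r[i] / A[i * n + i];
        double rz_new = dot(r, z, n);
        double beta = rz_new / rz;
        rz = rz_new;
        for (int i = 0; i < n; i++) p[i] = z[i] + beta * p[i];
    }
    free(r); free(z); free(p); free(q);
}
```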

  5. Insulation coordination workstation for AC and DC substations

    International Nuclear Information System (INIS)

    Booth, R.R.; Hileman, A.R.

    1990-01-01

    The Insulation Coordination Workstation was designed to aid the substation design engineer in the insulation coordination process. The workstation utilizes state of the art computer technology to present a set of tools necessary for substation insulation coordination, and to support the decision making process for all aspects of insulation coordination. The workstation is currently being developed for personal computers supporting OS/2 Presentation Manager. Modern Computer-Aided Software Engineering (CASE) technology was utilized to create an easily expandable framework which currently consists of four modules, each accessing a central application database. The heart of the workstation is a library of user-friendly application programs for the calculation of important voltage stresses used for the evaluation of insulation coordination. The Oneline Diagram is a graphic interface for data entry into the EPRI distributed EMTP program, which allows the creation of complex systems on the CRT screen using simple mouse clicks and keyboard entries. Station shielding is graphically represented in the Geographic Viewport using a three-dimensional substation model, and the interactive plotting package allows plotting of EPRI EMTP output results on the CRT screen, printer, or pen plotter. The Insulation Coordination Workstation was designed by Advanced Systems Technology (AST), a division of ABB Power Systems, Inc., and sponsored by the Electric Power Research Institute under RP 2323-5, AC/DC Insulation Coordination Workstation

  6. The role of the mainframe terminated : mainframe versus workstation

    CERN Document Server

    Williams, D O

    1991-01-01

    I. What mainframes? - The surgeon-general has determined that you shall treat all costs with care (continental effects, discounts assumed, next month's or last month's prices, optimism of the reporter). II. Typical mainframe hardware. III. Typical mainframe software. IV. What workstations? VI. Typical workstation hardware. VII. Typical workstation software. VIII. Titan vs PDP-7s. IX. Historic answer. X. Amdahl's Law....

  7. Fast 2D FWI on a multi and many-cores workstation.

    Science.gov (United States)

    Thierry, Philippe; Donno, Daniela; Noble, Mark

    2014-05-01

    Following the introduction of x86 co-processors (Xeon Phi) and the performance increase of standard 2-socket workstations using the latest 12-core E5-v2 x86-64 CPUs, we present here an MPI + OpenMP implementation of an acoustic 2D FWI (full waveform inversion) code which simultaneously runs on the CPUs and on the co-processors installed in a workstation. The main advantage of running a 2D FWI on a workstation is to be able to quickly evaluate new features such as more complicated wave equations, new cost functions, finite-difference stencils or boundary conditions. Since the co-processor is made of 61 in-order x86 cores, each of them having up to 4 threads, this many-core can be seen as a shared memory SMP (symmetric multiprocessing) machine with its own IP address. Depending on the vendor, a single workstation can handle several co-processors, making the workstation a personal cluster under the desk. The original Fortran 90 CPU version of the 2D FWI code is just recompiled to get a Xeon Phi x86 binary. This multi and many-core configuration uses standard compilers and associated MPI as well as math libraries under Linux; therefore, the cost of code development remains constant, while improving computation time. We choose to implement the code with the so-called symmetric mode to fully use the capacity of the workstation, but we also evaluate the scalability of the code in native mode (i.e. running only on the co-processor) thanks to the Linux ssh and NFS capabilities. Usual care of optimization and SIMD vectorization is used to ensure optimal performance, and to analyze the application's performance and bottlenecks on both platforms. The 2D FWI implementation uses finite-difference time-domain forward modeling and a quasi-Newton (with L-BFGS algorithm) optimization scheme for the model parameters update. Parallelization is achieved through standard MPI distribution of shot gathers and OpenMP for domain decomposition within the co-processor. Taking advantage of the 16
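
    As a rough sketch of the parallel layout just described (assumed structure, not the authors' code), MPI distributes shot gathers across ranks, whether they run on the host CPUs or on the co-processor in symmetric mode, while the OpenMP domain decomposition lives inside the per-shot modelling routine; the model size, shot count and the stub forward_model below are placeholders.

```c
#include <mpi.h>
#include <stdlib.h>

#define N 1000                     /* illustrative model size */

/* Stub for the per-shot forward modelling and gradient computation; in the
 * real code this is where the OpenMP threads share the finite-difference grid. */
static void forward_model(int shot, double *local_gradient)
{
    for (int i = 0; i < N; i++) local_gradient[i] += 1e-3 * (shot + 1);
}

int main(int argc, char **argv)
{
    int rank, size, nshots = 64;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *local    = calloc(N, sizeof *local);
    double *gradient = calloc(N, sizeof *gradient);

    /* round-robin shot distribution over MPI ranks (host and co-processor alike) */
    for (int shot = rank; shot < nshots; shot += size)
        forward_model(shot, local);

    /* sum the per-rank gradient contributions before the L-BFGS model update */
    MPI_Allreduce(local, gradient, N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    free(local);
    free(gradient);
    MPI_Finalize();
    return 0;
}
```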

  8. Evaluating biomechanics of user-selected sitting and standing computer workstation.

    Science.gov (United States)

    Lin, Michael Y; Barbir, Ana; Dennerlein, Jack T

    2017-11-01

    A standing computer workstation has now become a popular modern workplace intervention to reduce sedentary behavior at work. However, users' interaction with a standing computer workstation and its differences from a sitting workstation need to be understood to assist in developing recommendations for use and setup. The study compared the differences in upper extremity posture and muscle activity between user-selected sitting and standing workstation setups. Twenty participants (10 females, 10 males) volunteered for the study. 3-D posture, surface electromyography, and user-reported discomfort were measured while completing simulated tasks with each participant's self-selected workstation setups. The sitting computer workstation was associated with more non-neutral shoulder postures and greater shoulder muscle activity, while the standing computer workstation induced a greater wrist adduction angle and greater extensor carpi radialis muscle activity. The sitting computer workstation was also associated with greater shoulder abduction postural variation (90th-10th percentile), while the standing computer workstation was associated with greater variation in shoulder rotation and wrist extension. Users reported similar overall discomfort levels within the first 10 min of work but had more than twice as much discomfort while standing than sitting after 45 min, with most discomfort reported in the low back for standing and the shoulder for sitting. These measures provide insight into users' different interactions with sitting and standing workstations; alternating between the two configurations in short bouts may be a way of changing the loading pattern on the upper extremity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Development of PSA workstation KIRAP

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong

    1997-01-01

    The Advanced Research Group of the Korea Atomic Energy Research Institute has been developing the Probabilistic Safety Assessment (PSA) workstation KIRAP since 1992. This report describes the recent development activities of the PSA workstation KIRAP. The first is to develop and improve the methodologies for PSA quantification, namely the incorporation of a fault tree modularization technique, the improvement of the cut set generation method, the development of rule-based recovery, and the development of methodologies to solve fault trees containing logical loops and to handle fault trees with several initiators. These methodologies are incorporated in the PSA quantification software KIRAP-CUT. The second is to convert the PSA modeling software, used in the DOS environment since 1987, to Windows. The developed software comprises the fault tree editor KWTREE, the event tree editor CONPAS, and the data manager KWDBMAN for event data and common cause failure (CCF) data. With the development of the PSA workstation, PSA modeling, quantification, and automation become easier and faster. (author). 8 refs.

  10. Development of PSA workstation KIRAP

    International Nuclear Information System (INIS)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong.

    1997-01-01

    The Advanced Research Group of the Korea Atomic Energy Research Institute has been developing the Probabilistic Safety Assessment (PSA) workstation KIRAP since 1992. This report describes the recent development activities of the PSA workstation KIRAP. The first is to develop and improve the methodologies for PSA quantification, namely the incorporation of a fault tree modularization technique, the improvement of the cut set generation method, the development of rule-based recovery, and the development of methodologies to solve fault trees containing logical loops and to handle fault trees with several initiators. These methodologies are incorporated in the PSA quantification software KIRAP-CUT. The second is to convert the PSA modeling software, used in the DOS environment since 1987, to Windows. The developed software comprises the fault tree editor KWTREE, the event tree editor CONPAS, and the data manager KWDBMAN for event data and common cause failure (CCF) data. With the development of the PSA workstation, PSA modeling, quantification, and automation become easier and faster. (author). 8 refs

  11. A Next Generation BioPhotonics Workstation

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Palima, Darwin; Tauro, Sandeep

    2011-01-01

    We are developing a Next Generation BioPhotonics Workstation to be applied in research on regulated microbial cell growth including their underlying physiological mechanisms, in vivo characterization of cell constituents and manufacturing of nanostructures and meta-materials.

  12. Radiology workstation for mammography: preliminary observations, eyetracker studies, and design

    Science.gov (United States)

    Beard, David V.; Johnston, Richard E.; Pisano, Etta D.; Hemminger, Bradley M.; Pizer, Stephen M.

    1991-07-01

    For the last four years, the UNC FilmPlane project has focused on constructing a radiology workstation facilitating CT interpretations equivalent to those with film and viewbox. Interpretation of multiple CT studies was originally chosen because handling such large numbers of images was considered to be one of the most difficult tasks that could be performed with a workstation. The authors extend the FilmPlane design to address mammography. The high resolution and contrast demands coupled with the number of images often cross-compared make mammography a difficult challenge for the workstation designer. This paper presents the results of preliminary work with workstation interpretation of mammography. Background material is presented to justify why the authors believe electronic mammographic workstations could improve health care delivery. The results of several observation sessions and a preliminary eyetracker study of multiple-study mammography interpretations are described. Finally, tentative conclusions about what a mammographic workstation might look like and how it would have to perform to meet clinical demands are presented.

  13. The Impact of Ergonomically Designed Workstations on Shoulder EMG Activity during Carpet Weaving

    Directory of Open Access Journals (Sweden)

    Majid Motamedzade

    2014-12-01

    Full Text Available Background: The present study aimed to evaluate the biomechanical exposure of the trapezius muscle in female weavers over a prolonged period at workstation A (suggested by previous studies) and workstation B (proposed by the present study). Methods: Electromyography data were collected from nine females during four hours at each ergonomically designed workstation at the Ergonomics Laboratory, Hamadan, Iran. The design criteria for the ergonomically designed workstations were: (1) weaving height (20 and 3 cm above elbow height for workstations A and B, respectively), and (2) seat type (10° and 0° forward-sloping seats for workstations A and B, respectively). Results: The amplitude probability distribution function (APDF) analysis showed that the left and right upper trapezius muscle activity was almost similar at each workstation. Trapezius muscle activity at workstation A was significantly greater than at workstation B (P<0.001). Conclusion: In general, use of workstation B leads to significantly reduced muscle activity levels in the upper trapezius as compared to workstation A in weavers. Despite the positive impact of workstation B in reducing trapezius muscle activity, it seems that constrained postures of the upper arm during weaving may be associated with musculoskeletal symptoms.

  14. A Parallel Multigrid Solver for Viscous Flows on Anisotropic Structured Grids

    Science.gov (United States)

    Prieto, Manuel; Montero, Ruben S.; Llorente, Ignacio M.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    This paper presents an efficient parallel multigrid solver for speeding up the computation of a 3-D model that treats the flow of a viscous fluid over a flat plate. The main interest of this simulation lies in exhibiting some basic difficulties that prevent optimal multigrid efficiencies from being achieved. As the computing platform, we have used Coral, a Beowulf-class system based on Intel Pentium processors and equipped with GigaNet cLAN and switched Fast Ethernet networks. Our study not only examines the scalability of the solver but also includes a performance evaluation of Coral where the investigated solver has been used to compare several of its design choices, namely, the interconnection network (GigaNet versus switched Fast-Ethernet) and the node configuration (dual nodes versus single nodes). As a reference, the performance results have been compared with those obtained with the NAS-MG benchmark.

  15. Run-Time and Compiler Support for Programming in Adaptive Parallel Environments

    Directory of Open Access Journals (Sweden)

    Guy Edjlali

    1997-01-01

    Full Text Available For better utilization of computing resources, it is important to consider parallel programming environments in which the number of available processors varies at run-time. In this article, we discuss run-time support for data-parallel programming in such an adaptive environment. Executing programs in an adaptive environment requires redistributing data when the number of processors changes, and also requires determining new loop bounds and communication patterns for the new set of processors. We have developed a run-time library to provide this support. We discuss how the run-time library can be used by compilers of High Performance Fortran (HPF)-like languages to generate code for an adaptive environment. We present performance results for a Navier-Stokes solver and a multigrid template run on a network of workstations and an IBM SP-2. Our experiments show that if the number of processors is not varied frequently, the cost of data redistribution is not significant compared to the time required for the actual computation. Overall, our work establishes the feasibility of compiling HPF for a network of nondedicated workstations, which are likely to be an important resource for parallel programming in the future.
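
    As a small illustration of one ingredient of such support (an assumption about the general approach, not the library's actual interface), the helper below recomputes block loop bounds for a one-dimensional data-parallel array when the number of available processes changes; after calling it, each process would exchange the elements that moved out of or into its block.

```c
/* Hedged sketch: new half-open block bounds [lo, hi) over element indices
 * 0..n-1 when nprocs processes are available; the first n % nprocs blocks
 * receive one extra element. */
void block_bounds(long n, int nprocs, int rank, long *lo, long *hi)
{
    long base = n / nprocs;
    long rem  = n % nprocs;
    *lo = rank * base + (rank < rem ? rank : rem);
    *hi = *lo + base + (rank < rem ? 1 : 0);
}
```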

  16. Post-deployment usability evaluation of a radiology workstation

    NARCIS (Netherlands)

    Jorritsma, Wiard; Cnossen, Fokie; Dierckx, Rudi A.; Oudkerk, Matthijs; Van Ooijen, Peter M. A.

    Objectives: To determine the number, nature and severity of usability issues radiologists encounter while using a commercially available radiology workstation in clinical practice, and to assess how well the results of a pre-deployment usability evaluation of this workstation generalize to clinical

  17. Parallel implementations of 2D explicit Euler solvers

    International Nuclear Information System (INIS)

    Giraud, L.; Manzini, G.

    1996-01-01

    In this work we present a subdomain partitioning strategy applied to an explicit high-resolution Euler solver. We describe the design of a portable parallel multi-domain code suitable for parallel environments. We present several implementations on a representative range of MIMD computers that include shared memory multiprocessors, distributed virtual shared memory computers, as well as networks of workstations. Computational results are given to illustrate the efficiency, the scalability, and the limitations of the different approaches. We discuss also the effect of the communication protocol on the optimal domain partitioning strategy for the distributed memory computers

  18. Temporal fringe pattern analysis with parallel computing

    International Nuclear Information System (INIS)

    Tuck Wah Ng; Kar Tien Ang; Argentini, Gianluca

    2005-01-01

    Temporal fringe pattern analysis is invaluable in transient phenomena studies but necessitates long processing times. Here we describe a parallel computing strategy based on the single-program multiple-data model and hyperthreading processor technology to reduce the execution time. In a two-node cluster workstation configuration we found that execution periods were reduced by 1.6 times when four virtual processors were used. To allow even lower execution times with an increasing number of processors, the time allocated for data transfer, data read, and waiting should be minimized. Parallel computing is found here to present a feasible approach to reduce execution times in temporal fringe pattern analysis

  19. [Design and development of the DSA digital subtraction workstation].

    Science.gov (United States)

    Peng, Wen-Xian; Peng, Tian-Zhou; Xia, Shun-Ren; Jin, Guang-Bo

    2008-05-01

    According to the patient examination criteria and the demands of all related departments, the DSA digital subtraction workstation has been successfully designed; it is introduced in this paper through an analysis of the characteristics of the video source of a DSA system manufactured by GE, which has no standard DICOM interface. The workstation includes an image-capturing gateway and post-processing software. With the developed workstation, all images from this early DSA equipment are transformed into DICOM format and can then be shared among different machines.

  20. Stampi: a message passing library for distributed parallel computing. User's guide

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Koide, Hiroshi; Takemiya, Hiroshi

    1998-11-01

    A new message passing library, Stampi, has been developed to realize computations that span different kinds of parallel computers, with MPI (Message Passing Interface) as the single communication interface. Stampi is based on the MPI-2 specification. It realizes dynamic process creation on different machines and communication with the spawned processes within the scope of MPI semantics. Vendor MPI implementations are closed systems confined to one parallel machine and support neither function: process creation on, nor communication with, external machines. Stampi supports both functions and enables distributed parallel computing. Currently Stampi has been implemented on COMPACS (COMplex PArallel Computer System) introduced at CCSE, comprising five parallel computers and one graphics workstation, and any communication among them can be handled. (author)
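
    For orientation, the sketch below shows the standard MPI-2 dynamic process creation that Stampi's semantics follow; it uses only generic MPI calls, not Stampi's own configuration, and the spawned command name "worker" is a placeholder.

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm children;
    MPI_Init(&argc, &argv);

    /* Spawn 4 worker processes; in a Stampi-style setup the same MPI-2 call
     * could start them on a different machine of the coupled configuration. */
    MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL, 0,
                   MPI_COMM_SELF, &children, MPI_ERRCODES_IGNORE);

    /* Communicate with the spawned processes through the intercommunicator. */
    int task = 42;
    MPI_Send(&task, 1, MPI_INT, 0, 0, children);

    MPI_Finalize();
    return 0;
}
```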

  1. Workstations studies and radiation protection; Etudes de postes et radioprotection

    Energy Technology Data Exchange (ETDEWEB)

    Lahaye, T. [Direction des relations du travail, 75 - Paris (France); Donadille, L.; Rehel, J.L.; Paquet, F. [Institut de Radioprotection et de Surete Nucleaire, 92 - Fontenay-aux-Roses (France); Beneli, C. [Paris-5 Univ., 75 (France); Cordoliani, Y.S. [Societe Francaise de Radioprotection, 92 - Fontenay-aux-Roses (France); Vrigneaud, J.M. [Assistance Publique - Hopitaux de Paris, 75 (France); Gauron, C. [Institut National de Recherche et de Securite, 75 - Paris (France); Petrequin, A.; Frison, D. [Association des Medecins du Travail des Salaries du Nucleaire (France); Jeannin, B. [Electricite de France (EDF), 75 - Paris (France); Charles, D. [Polinorsud (France); Carballeda, G. [cabinet Indigo Ergonomie, 33 - Merignac (France); Crouail, P. [Centre d' Etude sur l' Evaluation de la Protection dans le Domaine Nucleaire, 92 - Fontenay-aux-Roses (France); Valot, C. [IMASSA, 91 - Bretigny-sur-Orge (France)

    2006-07-01

    This day on workstation studies for worker follow-up was organised by the research and health section. Intended for company doctors, persons competent in radiation protection, and safety engineers, it presented examples of methodologies and applications in the medical, industrial and research domains, thus contributing to a better understanding and application of regulatory measures. The analysis of the workstation should allow a reduction of exposures and risks and lead to the optimization of medical follow-up. The agenda of this day included the following subjects: evolution of the regulations concerning the delimitation of regulated zones where worker protection measures are strengthened; presentation of the I.R.S.N. guide to help carry out a workstation study; implementation of a workstation study: the case of radiology; workstation studies in the research area; should operational dosimetry be imposed in radiodiagnostic services? The experience feedback of a person competent in radiation protection (P.C.R.) in a hospital environment; radiation protection: elaboration of a good-practices guide in the medical field; the activities file in nuclear power plants: a risk evaluation tool for prevention, with methodological presentation and examples; isolated workstation study; the experience feedback of a service provider; contribution of ergonomics to the characterization of the determinants in ionizing radiation exposure situations; workstation studies for internal contamination in fuel cycle facilities and the consideration of the results in the medical follow-up; R.E.L.I.R.: the necessity of workstation studies; the consideration of the human factor. (N.C.)

  2. Evaluation of DEC's GIGAswitch for distributed parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H.; Hutchins, J.; Brandt, J.

    1993-10-01

    One of Sandia's research efforts is to reduce the end-to-end communication delay in a parallel-distributed computing environment. GIGAswitch is DEC's implementation of a gigabit local area network based on switched FDDI technology. Using the GIGAswitch, the authors intend to minimize the medium access latency suffered by shared-medium FDDI technology. Experimental results show that the GIGAswitch adds 16.5 microseconds of switching and bridging delay to an end-to-end communication. Although the added latency causes a 1.8% throughput degradation and a 5% line efficiency degradation, the availability of dedicated bandwidth is much more than what is available to a workstation on a shared medium. For example, ten directly connected workstations each would have a dedicated bandwidth of 95 Mbps, but if they were sharing the FDDI bandwidth, each would have 10% of the total bandwidth, i.e., less than 10 Mbps. In addition, they have found that when there is no output port contention, the switch's aggregate bandwidth will scale up to multiples of its port bandwidth. However, with output port contention, the throughput and latency performance suffered significantly. Their mathematical and simulation models indicate that the GIGAswitch line efficiency could be as low as 63% when there are nine input ports contending for the same output port. The data indicate that the delay introduced by contention at the server workstation is 50 times that introduced by the GIGAswitch. The authors conclude that the GIGAswitch meets the performance requirements of today's high-end workstations and that the switched FDDI technology provides an alternative that utilizes existing workstation interfaces while increasing the aggregate bandwidth. However, because the speed of workstations is increasing by a factor of 2 every 1.5 years, the switched FDDI technology is only good as an interim solution.

  3. Modular high-temperature gas-cooled reactor simulation using parallel processors

    International Nuclear Information System (INIS)

    Ball, S.J.; Conklin, J.C.

    1989-01-01

    The MHPP (Modular HTGR Parallel Processor) code has been developed to simulate modular high-temperature gas-cooled reactor (MHTGR) transients and accidents. MHPP incorporates a very detailed model for predicting the dynamics of the reactor core, vessel, and cooling systems over a wide variety of scenarios ranging from expected transients to very-low-probability severe accidents. The simulation routines, which had originally been developed entirely as serial code, were readily adapted to parallel-processing Fortran. The speed of the resulting parallelized simulation was enhanced significantly. Workstation interfaces are being developed to provide for user (operator) interaction. In this paper the benefits realized by adapting previous MHTGR codes to run on a parallel processor are discussed, along with results of typical accident analyses

  4. The Temple Translator's Workstation Project

    National Research Council Canada - National Science Library

    Vanni, Michelle; Zajac, Remi

    1996-01-01

    .... The Temple Translator's Workstation is incorporated into a Tipster document management architecture and it allows both translator/analysts and monolingual analysts to use the machine-translation...

  5. Portable parallel programming in a Fortran environment

    International Nuclear Information System (INIS)

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network-based MIMD processors. The parallelism is implemented using standard UNIX (tm) tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at from 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a "nearly realistic" lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs

  6. The specification of Stampi, a message passing library for distributed parallel computing

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Takemiya, Hiroshi; Koide, Hiroshi

    2000-03-01

    At CCSE, the Center for Promotion of Computational Science and Engineering, a new message passing library for heterogeneous and distributed parallel computing has been developed; it is called Stampi. Stampi enables us to communicate between any combination of parallel computers as well as workstations. Currently, a Stampi system is constructed from the Stampi library and Stampi/Java. It provides functions to connect a Stampi application not only with those on COMPACS (the COMplex Parallel Computer System), but also with applets which run in WWW browsers. This report summarizes the specifications of Stampi and details the development of its system. (author)

  7. A High-Performance Parallel FDTD Method Enhanced by Using SSE Instruction Set

    Directory of Open Access Journals (Sweden)

    Dau-Chyrh Chang

    2012-01-01

    Full Text Available We introduce a hardware acceleration technique for the parallel finite difference time domain (FDTD) method using the SSE (Streaming SIMD (single instruction, multiple data) Extensions) instruction set. The implementation of the SSE instruction set in the parallel FDTD method has achieved a significant improvement in simulation performance. The benchmarks of the SSE acceleration on both a multi-CPU workstation and a computer cluster have demonstrated the advantages of VALU (vector arithmetic logic unit) acceleration over GPU acceleration. Several engineering applications are employed to demonstrate the performance of the parallel FDTD method enhanced by the SSE instruction set.
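
    To illustrate the kind of vectorization involved (assumptions only; the array names and stencil are not taken from the paper), the kernel below updates a one-dimensional FDTD-style E field four single-precision values at a time with SSE intrinsics; n is assumed to be a multiple of 4 and the arrays 16-byte aligned.

```c
#include <xmmintrin.h>   /* SSE intrinsics */

/* e[i] += c[i] * (h[i] - h[i-1]) for i = 4 .. n-1, four floats per iteration. */
void update_e(float *e, const float *h, const float *c, int n)
{
    for (int i = 4; i < n; i += 4) {
        __m128 hv = _mm_load_ps(&h[i]);        /* h[i..i+3], aligned load     */
        __m128 hm = _mm_loadu_ps(&h[i - 1]);   /* h[i-1..i+2], unaligned load */
        __m128 cv = _mm_load_ps(&c[i]);
        __m128 ev = _mm_load_ps(&e[i]);
        ev = _mm_add_ps(ev, _mm_mul_ps(cv, _mm_sub_ps(hv, hm)));
        _mm_store_ps(&e[i], ev);
    }
}
```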

  8. Parallel Calculations in LS-DYNA

    Science.gov (United States)

    Vartanovich Mkrtychev, Oleg; Aleksandrovich Reshetov, Andrey

    2017-11-01

    Nowadays, structural mechanics exhibits a trend towards numerical solutions of increasingly extensive and detailed tasks, which requires that the capacities of computing systems be enhanced. Such enhancement can be achieved by different means. For example, if the computing system is a workstation, its components can be replaced and/or extended (CPU, memory etc.). In essence, such modification eventually entails replacement of the entire workstation, i.e. replacement of certain components necessitates exchange of others (faster CPUs and memory devices require buses with higher throughput etc.). Special consideration must be given to the capabilities of modern video cards. They constitute powerful computing systems capable of running data processing in parallel. Interestingly, the tools originally designed to render high-performance graphics can be applied for solving problems not immediately related to graphics (CUDA, OpenCL, Shaders etc.). However, not all software suites utilize video cards' capacities. Another way to increase the capacity of a computing system is to implement a cluster architecture: to add cluster nodes (workstations) and to increase the network communication speed between the nodes. The advantage of this approach is its capacity for extensive growth: a quite powerful system can be obtained by combining nodes that are individually not particularly powerful. Moreover, separate nodes may possess different capacities. This paper considers the use of a clustered computing system for solving problems of structural mechanics with LS-DYNA software. To establish a range of dependencies, a mere 2-node cluster has proven sufficient.

  9. Iteration schemes for parallelizing models of superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)

    1996-12-31

    The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-T{sub c} superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.

  10. The biomechanical and physiological effect of two dynamic workstations

    NARCIS (Netherlands)

    Botter, J.; Burford, E.M.; Commissaris, D.; Könemann, R.; Mastrigt, S.H.V.; Ellegast, R.P.

    2013-01-01

    The aim of this research paper was to investigate the effect, both biomechanically and physiologically, of two dynamic workstations currently available on the commercial market. The dynamic workstations tested, namely the Treadmill Desk by LifeSpan and the LifeBalance Station by RightAngle, were

  11. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  12. Design of a tritium decontamination workstation based on plasma cleaning

    International Nuclear Information System (INIS)

    Antoniazzi, A.B.; Shmayda, W.T.; Fishbien, B.F.

    1993-01-01

    A design for a tritium decontamination workstation based on plasma cleaning is presented. The activity of tritiated surfaces is significantly reduced through plasma-surface interactions within the workstation. Such a workstation in a tritium environment can routinely be used to decontaminate tritiated tools and components. The main advantage of such a station is the lack of low-level tritiated liquid waste. Gaseous tritiated species are the waste products, which can, with present technology, be separated and contained

  13. The scheme and implementing of workstation configuration for medical imaging information system

    International Nuclear Information System (INIS)

    Tao Yonghao; Miao Jingtao

    2002-01-01

    Objective: To discuss the scheme and implementation of the workstation configuration for a medical imaging information system adapted to the practical situation in China. Methods: The workstations were logically divided into PACS workstations and RIS workstations; the former were applied to three kinds of diagnostic practice (small matrix images, large matrix images, and high-resolution gray scale display), while the latter consisted of many different models depending upon usage and function. Results: A dual-screen configuration for the image diagnostic workstation physically integrated the image viewing and reporting procedures. Small matrix images such as CT or MR were handled on 17 in (1 in = 2.54 cm) color monitors, while the conventional X-ray diagnostic procedure was implemented on 21 in color monitors or portrait-format gray scale 2K by 2.5K monitors. All other RIS workstations not involved in image processing were set up with a common PC configuration. Conclusion: The essential principle for designing a workstation scheme for a medical imaging information system is that it should satisfy the basic requirements of medical image diagnosis and fit the available investment situation

  14. High-performance mass storage system for workstations

    Science.gov (United States)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PC) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when using Input/Output (I/O) intensive applications, the RISC workstations and PC's are often overburdened with the tasks of collecting, staging, storing, and distributing data. Also, even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload I/O-related functions from a RISC workstation and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost, while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on the magnetic disks for fast retrieval. The optical disks are used as archive

  15. Benchmarking of SIMULATE-3 on engineering workstations

    International Nuclear Information System (INIS)

    Karlson, C.F.; Reed, M.L.; Webb, J.R.; Elzea, J.D.

    1990-01-01

    The nuclear fuel management department of Arizona Public Service Company (APS) has evaluated various computer platforms for a departmental engineering and business workstation local area network (LAN). Historically, centralized mainframe computer systems have been utilized for engineering calculations. Increasing usage and the resulting longer response times on the company mainframe system and the relative cost differential between a mainframe upgrade and workstation technology justified the examination of current workstations. A primary concern was the time necessary to turn around routine reactor physics reload and analysis calculations. Computers ranging from a Definicon 68020 processing board in an AT-compatible personal computer up to an IBM 3090 mainframe were benchmarked. The SIMULATE-3 advanced nodal code was selected for benchmarking based on its extensive use in nuclear fuel management. SIMULATE-3 is used at APS for reload scoping, design verification, core follow, and providing predictions of reactor behavior under nominal conditions and planned reactor maneuvering, such as axial shape control during start-up and shutdown

  16. Modelling of Energy Expenditure at Welding Workstations: Effect of ...

    African Journals Online (AJOL)

    The welding workstation usually generates intense heat during operations, which may affect the welder's health if not properly controlled, and can also affect the performance of the welder at work. Consequently, effort to control the conditions of the welding workstation is essential, and is therefore pursued in this paper.

  17. Visualization of biomedical image data and irradiation planning using a parallel computing system

    International Nuclear Information System (INIS)

    Lehrig, R.

    1991-01-01

    The contribution explains the development of a novel, low-cost workstation for the processing of biomedical tomographic data sequences. The workstation was to allow both graphical display of the data and implementation of modelling software for irradiation planning, especially for calculation of dose distributions on the basis of the measured tomogram data. The system developed according to these criteria is a parallel computing system which performs secondary, two-dimensional image reconstructions irrespective of the imaging direction of the original tomographic scans. Three-dimensional image reconstructions can be generated from any direction of view, with random selection of sections of the scanned object. (orig./MM) With 69 figs., 2 tabs [de

  18. Workout at work: laboratory test of psychological and performance outcomes of active workstations.

    Science.gov (United States)

    Sliter, Michael; Yuan, Zhenyu

    2015-04-01

    With growing concerns over the obesity epidemic in the United States and other developed countries, many organizations have taken steps to incorporate healthy workplace practices. However, most workers are still sedentary throughout the day--a major contributor to individual weight gain. The current study sought to gather preliminary evidence of the efficacy of active workstations, which are a possible intervention that could increase employees' physical activity while they are working. We conducted an experimental study, in which boredom, task satisfaction, stress, arousal, and performance were evaluated and compared across 4 randomly assigned conditions: seated workstation, standing workstation, cycling workstation, and walking workstation. Additionally, body mass index (BMI) and exercise habits were examined as moderators to determine whether differences in these variables would relate to increased benefits in active conditions. The results (n = 180) showed general support for the benefits of walking workstations, whereby participants in the walking condition had higher satisfaction and arousal and experienced less boredom and stress than those in the passive conditions. Cycling workstations, on the other hand, tended to relate to reduced satisfaction and performance when compared with other conditions. The moderators did not impact these relationships, indicating that walking workstations might have psychological benefits to individuals, regardless of BMI and exercise habits. The results of this study are a preliminary step in understanding the work implications of active workstations. (c) 2015 APA, all rights reserved).

  19. Parallel computers and three-dimensional computational electromagnetics

    International Nuclear Information System (INIS)

    Madsen, N.K.

    1994-01-01

    The authors have continued to enhance their ability to use new massively parallel processing computers to solve time-domain electromagnetic problems. New vectorization techniques have improved the performance of their code DSI3D by factors of 5 to 15, depending on the computer used. New radiation boundary conditions and far-field transformations now allow the computation of radar cross-section values for complex objects. A new parallel-data extraction code has been developed that allows the extraction of data subsets from large problems, which have been run on parallel computers, for subsequent post-processing on workstations with enhanced graphics capabilities. A new charged-particle-pushing version of DSI3D is under development. Finally, DSI3D has become a focal point for several new Cooperative Research and Development Agreement activities with industrial companies such as Lockheed Advanced Development Company, Varian, Hughes Electron Dynamics Division, General Atomic, and Cray

  20. The Impact of Active Workstations on Workplace Productivity and Performance: A Systematic Review.

    Science.gov (United States)

    Ojo, Samson O; Bailey, Daniel P; Chater, Angel M; Hewson, David J

    2018-02-27

    Active workstations have been recommended for reducing sedentary behavior in the workplace. It is important to understand if the use of these workstations has an impact on worker productivity. The aim of this systematic review was to examine the effect of active workstations on workplace productivity and performance. A total of 3303 articles were initially identified by a systematic search and seven articles met eligibility criteria for inclusion. A quality appraisal was conducted to assess risk of bias, confounding, internal and external validity, and reporting. Most of the studies reported cognitive performance as opposed to productivity. Five studies assessed cognitive performance during use of an active workstation, usually in a single session. Sit-stand desks had no detrimental effect on performance; however, some studies with treadmill and cycling workstations identified potential decreases in performance. Many of the studies lacked the power required to achieve statistical significance. Three studies assessed workplace productivity after prolonged use of an active workstation for between 12 and 52 weeks. These studies reported no significant effect on productivity. Active workstations do not appear to decrease workplace performance.

  1. Compiler and Runtime Support for Programming in Adaptive Parallel Environments

    Science.gov (United States)

    1998-10-15

    This work addresses adaptive parallel environments in which an application uses a larger number of processors when no other job is waiting for resources, and a smaller number of processors when other jobs need resources. Setia et al. [15, 20] have shown that such adaptive scheduling policies are beneficial in parallel supercomputing environments, including networks of heterogeneous workstations.

  2. SCWEB, Scientific Workstation Evaluation Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Raffenetti, R C [Computing Services-Support Services Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439 (United States)

    1988-06-16

    1 - Description of program or function: The SCWEB (Scientific Workstation Evaluation Benchmark) software includes 16 programs which are executed in a well-defined scenario to measure the following performance capabilities of a scientific workstation: implementation of FORTRAN77, processor speed, memory management, disk I/O, monitor (or display) output, scheduling of processing (multiprocessing), and scheduling of print tasks (spooling). 2 - Method of solution: The benchmark programs are: DK1, DK2, and DK3, which do Fourier series fitting based on spline techniques; JC1, which checks the FORTRAN function routines which produce numerical results; JD1 and JD2, which solve dense systems of linear equations in double- and single-precision, respectively; JD3 and JD4, which perform matrix multiplication in single- and double-precision, respectively; RB1, RB2, and RB3, which perform substantial amounts of I/O processing on files other than the input and output files; RR1, which does intense single-precision floating-point multiplication in a tight loop, RR2, which initializes a 512x512 integer matrix in a manner which skips around in the address space rather than initializing each consecutive memory cell in turn; RR3, which writes alternating text buffers to the output file; RR4, which evaluates the timer routines and demonstrates that they conform to the specification; and RR5, which determines whether the workstation is capable of executing a 4-megabyte program
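
    The flavor of the simpler SCWEB kernels can be conveyed with a minimal sketch. The following C fragment is not the SCWEB source; the loop count and operation are assumptions, but it illustrates the kind of tight-loop single-precision multiplication benchmark that RR1 represents.

      /* Illustrative only: a minimal timing kernel in the spirit of RR1
       * (intense single-precision multiplication in a tight loop). Not the
       * SCWEB source; the iteration count is an assumption for the sketch. */
      #include <stdio.h>
      #include <time.h>

      int main(void)
      {
          const long n = 100000000L;   /* assumed iteration count */
          float x = 1.0000001f;
          float acc = 1.0f;

          clock_t t0 = clock();
          for (long i = 0; i < n; ++i)
              acc *= x;                /* tight single-precision multiply loop */
          clock_t t1 = clock();

          printf("result %g, elapsed %.2f s\n",
                 (double)acc, (double)(t1 - t0) / CLOCKS_PER_SEC);
          return 0;
      }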

  3. A high-speed linear algebra library with automatic parallelism

    Science.gov (United States)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely small even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  4. Workstations take over conceptual design

    Science.gov (United States)

    Kidwell, George H.

    1987-01-01

    Workstations provide sufficient computing memory and speed for early evaluations of aircraft design alternatives to identify those worthy of further study. It is recommended that the programming of such machines permit integrated calculations of the configuration and performance analysis of new concepts, along with the capability of changing up to 100 variables at a time and swiftly viewing the results. Computations can be augmented through links to mainframes and supercomputers. Programming, particularly debugging, is enhanced by the capability of working with one program line at a time and having on-screen error indices available. Workstation networks permit on-line communication among users and with persons and computers outside the facility. Application of the capabilities is illustrated through a description of NASA-Ames design efforts for an oblique wing for a jet, performed on a MicroVAX network.

  5. ARCIMBOLDO_LITE: single-workstation implementation and use.

    Science.gov (United States)

    Sammito, Massimo; Millán, Claudia; Frieske, Dawid; Rodríguez-Freire, Eloy; Borges, Rafael J; Usón, Isabel

    2015-09-01

    ARCIMBOLDO solves the phase problem at resolutions of around 2 Å or better through massive combination of small fragments and density modification. For complex structures, this imposes a need for a powerful grid where calculations can be distributed, but for structures with up to 200 amino acids in the asymmetric unit a single workstation may suffice. The use and performance of the single-workstation implementation, ARCIMBOLDO_LITE, on a pool of test structures with 40-120 amino acids and resolutions between 0.54 and 2.2 Å is described. Inbuilt polyalanine helices and iron cofactors are used as search fragments. ARCIMBOLDO_BORGES can also run on a single workstation to solve structures in this test set using precomputed libraries of local folds. The results of this study have been incorporated into an automated, resolution- and hardware-dependent parameterization. ARCIMBOLDO has been thoroughly rewritten and three binaries are now available: ARCIMBOLDO_LITE, ARCIMBOLDO_SHREDDER and ARCIMBOLDO_BORGES. The programs and libraries can be downloaded from http://chango.ibmb.csic.es/ARCIMBOLDO_LITE.

  6. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  7. Out-of-core nuclear fuel cycle optimization utilizing an engineering workstation

    International Nuclear Information System (INIS)

    Turinsky, P.J.; Comes, S.A.

    1986-01-01

    Within the past several years, rapid advances in computer technology have resulted in substantial increases in their performance. The net effect is that problems that could previously only be executed on mainframe computers can now be executed on micro- and minicomputers. The authors are interested in developing an engineering workstation for nuclear fuel management applications. An engineering workstation is defined as a microcomputer with enhanced graphics and communication capabilities. Current fuel management applications range from using workstations as front-end/back-end processors for mainframe computers to completing fuel management scoping calculations. More recently, interest in using workstations for final in-core design calculations has appeared. The authors have used the VAX 11/750 minicomputer, which is not truly an engineering workstation but has comparable performance, to complete both in-core and out-of-core fuel management scoping studies. In this paper, the authors concentrate on our out-of-core research. While much previous work in this area has dealt with decisions concerned with equilibrium cycles, the current project addresses the more realistic situation of nonequilibrium cycles

  8. Experiences with installing and benchmarking SCALE 4.0 on workstations

    International Nuclear Information System (INIS)

    Montierth, L.M.; Briggs, J.B.

    1992-01-01

    The advent of economical, high-speed workstations has placed on the criticality engineer's desktop the means to perform computational analysis that was previously possible only on mainframe computers. With this capability comes the need to modify and maintain criticality codes for use on a variety of different workstations. Due to the use of nonstandard coding, compiler differences [in lieu of American National Standards Institute (ANSI) standards], and other machine idiosyncrasies, there is a definite need to systematically test and benchmark all codes ported to workstations. Once benchmarked, a user environment must be maintained to ensure that user code does not become corrupted. The goal in creating a workstation version of the criticality safety analysis sequence (CSAS) codes in SCALE 4.0 was to start with the Cray versions and change as little source code as possible yet produce as generic a code as possible. To date, this code has been ported to the IBM RISC 6000, Data General AViiON 400, Silicon Graphics 4D-35 (all using the same source code), and to the Hewlett Packard Series 700 workstations. The code is maintained under a configuration control procedure. In this paper, the authors address considerations that pertain to the installation and benchmarking of CSAS

  9. The Impact of Active Workstations on Workplace Productivity and Performance: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Samson O. Ojo

    2018-02-01

    Full Text Available Active workstations have been recommended for reducing sedentary behavior in the workplace. It is important to understand if the use of these workstations has an impact on worker productivity. The aim of this systematic review was to examine the effect of active workstations on workplace productivity and performance. A total of 3303 articles were initially identified by a systematic search and seven articles met eligibility criteria for inclusion. A quality appraisal was conducted to assess risk of bias, confounding, internal and external validity, and reporting. Most of the studies reported cognitive performance as opposed to productivity. Five studies assessed cognitive performance during use of an active workstation, usually in a single session. Sit-stand desks had no detrimental effect on performance; however, some studies with treadmill and cycling workstations identified potential decreases in performance. Many of the studies lacked the power required to achieve statistical significance. Three studies assessed workplace productivity after prolonged use of an active workstation for between 12 and 52 weeks. These studies reported no significant effect on productivity. Active workstations do not appear to decrease workplace performance.

  10. The transition of GTDS to the Unix workstation environment

    Science.gov (United States)

    Carter, D.; Metzinger, R.; Proulx, R.; Cefola, P.

    1995-01-01

    Future Flight Dynamics systems should take advantage of the possibilities provided by current and future generations of low-cost, high performance workstation computing environments with Graphical User Interface. The port of the existing mainframe Flight Dynamics systems to the workstation environment offers an economic approach for combining the tremendous engineering heritage that has been encapsulated in these systems with the advantages of the new computing environments. This paper will describe the successful transition of the Draper Laboratory R&D version of GTDS (Goddard Trajectory Determination System) from the IBM Mainframe to the Unix workstation environment. The approach will be a mix of historical timeline notes, descriptions of the technical problems overcome, and descriptions of associated SQA (software quality assurance) issues.

  11. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.
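
    The "primarily neighbor-only exchange of data between the processors" that makes Schwarz preconditioning attractive on distributed memory machines can be illustrated with a minimal MPI sketch in C. This is not the NKS or PETSc code; the 1-D decomposition, array size, and message tags are assumptions chosen for illustration.

      /* Illustrative sketch: the neighbor-only halo exchange that a 1-D
       * domain-decomposed Schwarz preconditioner relies on. Sizes and the
       * communication pattern are assumptions for the sketch. */
      #include <mpi.h>
      #include <stdio.h>

      #define NLOC 1000                     /* assumed local subdomain size */

      int main(int argc, char **argv)
      {
          int rank, size;
          double u[NLOC + 2];               /* local values plus two ghost cells */

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          for (int i = 1; i <= NLOC; ++i) u[i] = (double)rank;
          u[0] = u[NLOC + 1] = 0.0;

          int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
          int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

          /* Exchange subdomain boundary values with the two neighbors only. */
          MPI_Sendrecv(&u[1],        1, MPI_DOUBLE, left,  0,
                       &u[NLOC + 1], 1, MPI_DOUBLE, right, 0,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          MPI_Sendrecv(&u[NLOC],     1, MPI_DOUBLE, right, 1,
                       &u[0],        1, MPI_DOUBLE, left,  1,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);

          printf("rank %d ghost values: %g %g\n", rank, u[0], u[NLOC + 1]);
          MPI_Finalize();
          return 0;
      }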

  12. Initial experience with a nuclear medicine viewing workstation

    Science.gov (United States)

    Witt, Robert M.; Burt, Robert W.

    1992-07-01

    Graphical User Interface (GUI) workstations are now available from commercial vendors. We recently installed a GUI workstation in our nuclear medicine reading room for exclusive use of staff and resident physicians. The system is built upon a Macintosh platform and has been available as a DELTAmanager from MedImage and more recently as an ICON V from Siemens Medical Systems. The workstation provides only display functions and connects to our existing nuclear medicine imaging system via ethernet. The system has some processing capabilities to create oblique, sagittal and coronal views from transverse tomographic views. Hard copy output is via a screen save device and a thermal color printer. The DELTAmanager replaced a MicroDELTA workstation which had both process and view functions. The mouse-activated GUI has made remarkable changes to physicians' use of the nuclear medicine viewing system. Training time to view and review studies has been reduced from hours to about 30 minutes. Generation of oblique views and display of brain and heart tomographic studies has been reduced from about 30 minutes of technician's time to about 5 minutes of physician's time. Overall operator functionality has been increased so that resident physicians with little prior computer experience can access all images on the image server and display pertinent patient images when consulting with other staff.

  13. A scalable approach to modeling groundwater flow on massively parallel computers

    International Nuclear Information System (INIS)

    Ashby, S.F.; Falgout, R.D.; Tompson, A.F.B.

    1995-12-01

    We describe a fully scalable approach to the simulation of groundwater flow on a hierarchy of computing platforms, ranging from workstations to massively parallel computers. Specifically, we advocate the use of scalable conceptual models in which the subsurface model is defined independently of the computational grid on which the simulation takes place. We also describe a scalable multigrid algorithm for computing the groundwater flow velocities. We are thus able to leverage both the engineer's time spent developing the conceptual model and the computing resources used in the numerical simulation. We have successfully employed this approach at the LLNL site, where we have run simulations ranging in size from just a few thousand spatial zones (on workstations) to more than eight million spatial zones (on the CRAY T3D), all using the same conceptual model.

  14. Treatment planning in radiosurgery: parallel Monte Carlo simulation software

    Energy Technology Data Exchange (ETDEWEB)

    Scielzo, G [Galliera Hospitals, Genova (Italy). Dept. of Hospital Physics; Grillo Ruggieri, F [Galliera Hospitals, Genova (Italy) Dept. for Radiation Therapy; Modesti, M; Felici, R [Electronic Data System, Rome (Italy); Surridge, M [University of Southampton (United Kingdom). Parallel Application Centre

    1995-12-01

    The main objective of this research was to evaluate the possibility of direct Monte Carlo simulation for accurate dosimetry with short computation time. We made use of: a graphics workstation, a linear accelerator, water, PMMA and anthropomorphic phantoms, for validation purposes; ionometric, film and thermo-luminescent techniques, for dosimetry; and a treatment planning system for comparison. Benchmarking results suggest that short computing times can be obtained with use of the parallel version of EGS4 that was developed. Parallelism was obtained by assigning the incident photons of the simulation to separate processors, and the development of a parallel random number generator was necessary. Validation consisted of phantom irradiation and comparison of predicted and measured values, which showed good agreement in PDD and dose profiles. Experiments on anthropomorphic phantoms (with inhomogeneities) were carried out, and these values are being compared with results obtained with the conventional treatment planning system.
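
    The parallelization strategy described above (assigning incident photons to separate processors and combining their contributions) can be sketched with MPI as follows. This is an illustration only, not the parallel EGS4 code: simulate_photon is a placeholder, the histories are assumed to divide evenly among ranks, and a production code would use a true parallel random number generator, as the authors note, rather than independently seeded rand() streams.

      /* Illustrative sketch of distributing photon histories over MPI ranks
       * and reducing a dose tally; physics and seeding are placeholders. */
      #include <mpi.h>
      #include <stdio.h>
      #include <stdlib.h>

      static double simulate_photon(void)
      {
          /* Placeholder for a full photon history: returns a dose contribution. */
          return (double)rand() / RAND_MAX;
      }

      int main(int argc, char **argv)
      {
          int rank, size;
          const long total_histories = 1000000L;   /* assumed number of photons */

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          srand(12345u + (unsigned)rank);          /* simple per-rank stream */

          long my_histories = total_histories / size;
          double local_dose = 0.0;
          for (long i = 0; i < my_histories; ++i)
              local_dose += simulate_photon();

          double total_dose = 0.0;
          MPI_Reduce(&local_dose, &total_dose, 1, MPI_DOUBLE, MPI_SUM,
                     0, MPI_COMM_WORLD);

          if (rank == 0)
              printf("mean dose per history: %g\n", total_dose / total_histories);
          MPI_Finalize();
          return 0;
      }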

  15. A portable, parallel, object-oriented Monte Carlo neutron transport code in C++

    International Nuclear Information System (INIS)

    Lee, S.R.; Cummings, J.C.; Nolen, S.D.

    1997-01-01

    We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed

  16. ERGONOMICS IN THE COMPUTER WORKSTATION

    African Journals Online (AJOL)

    2010-09-09

    Sep 9, 2010 ... in relation to their work environment and working surroundings. ... prolonged computer usage and application of ergonomics in the workstation. Design:One hundred and .... Occupational Health and Safety Services should.

  17. Real-time on a standard UNIX workstation?

    International Nuclear Information System (INIS)

    Glanzman, T.

    1992-09-01

    This is a report of an ongoing R&D project which is investigating the use of standard UNIX workstations for real-time data acquisition from a major new experimental initiative, the SLAC B Factory (PEP II). For this work an IBM RS/6000 workstation running the AIX operating system is used. Real-time extensions to the UNIX operating system are explored and performance measured. These extensions comprise a set of AIX-specific and POSIX-compliant system services. Benchmark comparisons are made with embedded processor technologies. Results are presented for a simple prototype on-line system for laboratory testing of a new prototype drift chamber.

  18. Parallel computing in genomic research: advances and applications.

    Science.gov (United States)

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analyses of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities.

  19. Study on High Performance of MPI-Based Parallel FDTD from WorkStation to Super Computer Platform

    Directory of Open Access Journals (Sweden)

    Z. L. He

    2012-01-01

    Full Text Available The parallel FDTD method is applied to analyze electromagnetic problems involving electrically large targets on a supercomputer. It is well known that computing time decreases as the number of processors increases. Nevertheless, with the same number of processors, computing efficiency is affected by the scheme of the MPI virtual topology. The influence of different virtual topology schemes on the parallel performance of parallel FDTD is therefore studied in detail, and general rules are presented on how to obtain the highest efficiency of the parallel FDTD algorithm by optimizing the MPI virtual topology. To show the validity of the presented method, several numerical results are given in the latter part. Various comparisons are made and some useful conclusions are summarized.
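
    A minimal C sketch of the MPI virtual topology machinery at issue is given below; the two-dimensional, non-periodic decomposition is an assumption for illustration, not the topology scheme recommended by the authors. Changing the dims array passed to MPI_Cart_create is precisely the kind of choice whose effect on efficiency the study examines.

      /* Illustrative sketch of building an MPI Cartesian virtual topology and
       * querying neighbor ranks; decomposition choices are assumptions. */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          int rank, size;
          int dims[2] = {0, 0};          /* let MPI choose a balanced 2-D grid */
          int periods[2] = {0, 0};       /* non-periodic boundaries */
          int coords[2];
          MPI_Comm cart;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          MPI_Dims_create(size, 2, dims);
          MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);
          MPI_Cart_coords(cart, rank, 2, coords);

          /* Neighbors along each dimension; MPI_PROC_NULL at the boundaries. */
          int xlo, xhi, ylo, yhi;
          MPI_Cart_shift(cart, 0, 1, &xlo, &xhi);
          MPI_Cart_shift(cart, 1, 1, &ylo, &yhi);

          printf("rank %d -> coords (%d,%d) in a %dx%d grid; "
                 "x-neighbors %d/%d, y-neighbors %d/%d\n",
                 rank, coords[0], coords[1], dims[0], dims[1],
                 xlo, xhi, ylo, yhi);

          MPI_Comm_free(&cart);
          MPI_Finalize();
          return 0;
      }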

  20. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    Science.gov (United States)

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  1. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    [ABSTRACT] Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chip in a high-performance workstation. With their high-performance computation capabilities, graphics processing units deliver far more powerful performance than conventional CPUs by means of parallel processing. In 2007, the birth of Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in the general purpose GPU a...

  2. A massively parallel algorithm for the collision probability calculations in the Apollo-II code using the PVM library

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1995-01-01

    The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time; for a typical 2D assembly calculation, about 90% of the computing time is consumed in the collision probability evaluations. Consequently RZ or 3D calculations became prohibitive. In this paper we present a simple but efficient parallel algorithm based on the message passing host/node programming model. Parallelization was applied to the energy group treatment. Such an approach permits parallelization of the existing code, requiring only limited modifications. Sequential/parallel computer portability is preserved, which is a necessary condition for an industrial code. Sequential performances are also preserved. The algorithm is implemented on a CRAY 90 coupled to a 128 processor T3D computer, a 16 processor IBM SP1 and a network of workstations, using the Public Domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best ones with the IBM SP1. Because of the heterogeneity of the workstation network, we did not expect high performance from this architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) with a massively parallel computer, using several hundreds of processors. (author). 5 refs., 6 figs., 2 tabs

  3. A massively parallel algorithm for the collision probability calculations in the Apollo-II code using the PVM library

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1995-01-01

    The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time; for a typical 2D assembly calculation, about 90% of the computing time is consumed in the collision probability evaluations. Consequently RZ or 3D calculations became prohibitive. In this paper the author presents a simple but efficient parallel algorithm based on the message passing host/node programming model. Parallelization was applied to the energy group treatment. Such an approach permits parallelization of the existing code, requiring only limited modifications. Sequential/parallel computer portability is preserved, which is a necessary condition for an industrial code. Sequential performances are also preserved. The algorithm is implemented on a CRAY 90 coupled to a 128 processor T3D computer, a 16 processor IBM SP1 and a network of workstations, using the Public Domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best ones with the IBM SP1. Because of the heterogeneity of the workstation network, the author did not expect high performance from this architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) with a massively parallel computer, using several hundreds of processors
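
    A minimal sketch of the host/node scheme described above, with energy groups distributed from a host task to node tasks over PVM, might look as follows in C. The executable name, the number of node tasks, and the per-group "work" are placeholders chosen for illustration, not the Apollo-II implementation.

      /* Illustrative PVM host/node sketch: the host spawns node tasks, sends
       * each an energy-group range, and gathers partial results. The program
       * is assumed to be installed in PVM's path under the name "cp_task". */
      #include <stdio.h>
      #include <pvm3.h>

      #define NGROUPS 99   /* standard 99-group library mentioned above */
      #define NNODES  4    /* assumed number of node tasks */

      int main(void)
      {
          int mytid = pvm_mytid();

          if (pvm_parent() == PvmNoParent) {          /* host task */
              int tids[NNODES];
              pvm_spawn("cp_task", NULL, PvmTaskDefault, "", NNODES, tids);

              for (int i = 0; i < NNODES; ++i) {      /* send group ranges */
                  int first = i * NGROUPS / NNODES;
                  int last  = (i + 1) * NGROUPS / NNODES - 1;
                  pvm_initsend(PvmDataDefault);
                  pvm_pkint(&first, 1, 1);
                  pvm_pkint(&last, 1, 1);
                  pvm_send(tids[i], 1);
              }
              double sum = 0.0, part;
              for (int i = 0; i < NNODES; ++i) {      /* gather partial results */
                  pvm_recv(-1, 2);
                  pvm_upkdouble(&part, 1, 1);
                  sum += part;
              }
              printf("host %d: combined result %g\n", mytid, sum);
          } else {                                    /* node task */
              int first, last;
              pvm_recv(pvm_parent(), 1);
              pvm_upkint(&first, 1, 1);
              pvm_upkint(&last, 1, 1);
              double part = (double)(last - first + 1); /* placeholder work */
              pvm_initsend(PvmDataDefault);
              pvm_pkdouble(&part, 1, 1);
              pvm_send(pvm_parent(), 2);
          }
          pvm_exit();
          return 0;
      }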

  4. Development of parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Sigmar, D.J.; Koniges, A.E.

    1996-01-01

    We report on our ongoing development of the 3D Fokker-Planck code ALLA for a highly collisional scrape-off-layer (SOL) plasma. A SOL with strong gradients of density and temperature in the spatial dimension is modeled. Our method is based on a 3-D adaptive grid (in space, magnitude of the velocity, and cosine of the pitch angle) and a second order conservative scheme. Note that the grid size is typically 100 x 257 x 65 nodes. It was shown in our previous work that only these capabilities make it possible to benchmark a 3D code against a spatially-dependent self-similar solution of a kinetic equation with the Landau collision term. In the present work we show results of a more precise benchmarking against the exact solutions of the kinetic equation using a new parallel code ALLAp with an improved method of parallelization and a modified boundary condition at the plasma edge. We also report first results from the code parallelization using Message Passing Interface for a Massively Parallel CRI T3D platform. We evaluate the ALLAp code performance versus the number of T3D processors used and compare its efficiency against a Work/Data Sharing parallelization scheme and a workstation version

  5. A PC/workstation cluster computing environment for reservoir engineering simulation applications

    International Nuclear Information System (INIS)

    Hermes, C.E.; Koo, J.

    1995-01-01

    Like the rest of the petroleum industry, Texaco has been transferring its applications and databases from mainframes to PC's and workstations. This transition has been very positive because it provides an environment for integrating applications, increases end-user productivity, and in general reduces overall computing costs. On the down side, the transition typically results in a dramatic increase in workstation purchases and raises concerns regarding the cost and effective management of computing resources in this new environment. The workstation transition also places the user in a Unix computing environment which, to say the least, can be quite frustrating to learn and to use. This paper describes the approach, philosophy, architecture, and current status of the new reservoir engineering/simulation computing environment developed at Texaco's E and P Technology Dept. (EPTD) in Houston. The environment is representative of those under development at several other large oil companies and is based on a cluster of IBM and Silicon Graphics Intl. (SGI) workstations connected by a fiber-optics communications network and engineering PC's connected to local area networks, or Ethernets. Because computing resources and software licenses are shared among a group of users, the new environment enables the company to get more out of its investments in workstation hardware and software

  6. Real-time monitoring/emergency response modeling workstation for a tritium facility

    International Nuclear Information System (INIS)

    Lawver, B.S.; Sims, J.M.; Baskett, R.L.

    1993-01-01

    At Lawrence Livermore National Laboratory (LLNL) we have developed a real-time system to monitor two stacks on our tritium handling facility. The monitors transmit the stack data to a workstation, which computes a three-dimensional numerical model of atmospheric dispersion. The workstation also collects surface and upper air data from meteorological towers and a sodar. The complex meteorological and terrain setting in the Livermore Valley demands more sophisticated resolution of the three-dimensional structure of the atmosphere to reliably calculate plume dispersion than afforded by Gaussian models. We experience both mountain valley and sea breeze flows. To address these complexities, we have implemented the three-dimensional diagnostic MATHEW mass-adjusted wind field and ADPIC particle-in-cell dispersion models on the workstation for use in real-time emergency response modeling. Both MATHEW and ADPIC have shown their utility in a variety of complex settings over the last 15 yr within the U.S. Department of Energy's Atmospheric Release Advisory Capability (ARAC) project. Faster workstations and real-time instruments allow utilization of more complex three-dimensional models, which provides a foundation for building a real-time monitoring and emergency response workstation for a tritium facility. The stack monitors are two ion chambers per stack

  7. The use of bicycle workstations to increase physical activity in secondary classrooms

    Directory of Open Access Journals (Sweden)

    Alicia Fedewa

    2017-11-01

    Full Text Available Background To date, the majority of interventions have implemented classroom-based physical activity (PA at the elementary level; however, there is both the potential and need to explore student outcomes at high-school level as well, given that very few studies have incorporated classroom-based PA interventions for adolescents. One exception has been the use of bicycle workstations within secondary classrooms. Using bicycle workstations in lieu of traditional chairs in a high school setting shows promise for enhancing adolescents’ physical activity during the school day. Participants and procedure The present study explored the effects of integrating bicycle workstations into a secondary classroom setting for four months in a sample of 115 adolescents using an A-B-A-B withdrawal design. The study took place in one Advanced Placement English classroom across five groups of students. Physical activity outcomes included average heart rate, and caloric expenditure. Behavioural outcomes included percentage of on-task/off-task behaviour and number of teacher prompts in redirecting off-task behaviour. Feasibility and acceptability data of using the bicycle workstations were also collected. Results Findings showed significant improvements in physical activity as measured by heart rate and caloric expenditure, although heart rate percentage remained in the low intensity range when students were on the bicycle workstations. No effects were found on students’ on-task behaviour when using the bicycle workstations. Overall, students found the bikes acceptable to use but noted disadvantages of them as well. Conclusions Using bicycle workstations in high-school settings appears promising for enhancing low-intensity physical activity among adolescents. The limitations of the present study and implications for physical activity interventions in secondary schools are discussed.

  8. Communication System Simulation Workstation

    Science.gov (United States)

    1990-01-30

    Communication System Simulation Workstation, Grant # AFOSR-89-0117, submitted to the Department of the Air Force, Air Force Office of Scientific Research, Bolling Air Force Base, DC. A sub-band decomposition technique, PKX, based on the modulation of a single prototype filter, was developed; this technique was introduced first by Nussbaumer and ...

  9. Design and analysis of wudu’ (ablution) workstation for elderly in Malaysia

    Science.gov (United States)

    Aman, A.; Dawal, S. Z. M.; Rahman, N. I. A.

    2017-06-01

    Wudu’ (Ablution) workstation is one of the facilities used by most Muslims in all categories. At present, there are numbers of design guidelines for praying facilities but still lacking on wudu’ (ablution) area specification especially or elderly. Thus, It is timely to develop an ergonomic wudu’ workstation for elderly to perform ablution independently and confidently. This study was conducted to design an ergonomic ablution unit for the Muslim’s elderly in Malaysia. An ablution workstation was designed based on elderly anthropometric dimensions and was then analyse using CATIA V5R21 for posture investigation using RULAs. The results of the study has identified significant anthropometric dimensions in designing wudu’ (ablution) workstation for elderly people. This study can be considered as preliminary study for the development of an ergonomic ablution design for elderly. This effort will become one of the significant social contributions to our elderly population in developing our nation holistically.

  10. A real-time data-acquisition and analysis system with distributed UNIX workstations

    International Nuclear Information System (INIS)

    Yamashita, H.; Miyamoto, K.; Maruyama, K.; Hirosawa, H.; Nakayoshi, K.; Emura, T.; Sumi, Y.

    1996-01-01

    A compact data-acquisition system using three RISC/UNIX workstations (SUN SPARCstation) with real-time capabilities for monitoring and analysis has been developed for the study of photonuclear reactions with the large-acceptance spectrometer TAGX. One workstation acquires data from memory modules in the front-end electronics (CAMAC and TKO) with a maximum speed of 300 Kbytes/s, where data size times instantaneous rate is 1 Kbyte x 300 Hz. Another workstation, which has real-time capability for run monitoring, gets the data through a buffer manager called NOVA. The third workstation analyzes the data and reconstructs the event. In addition to a general hardware and software description, priority settings and run control by shell scripts are described. This system has recently been used successfully in a two-month-long experiment. (orig.)

  11. Parallelization of the MAAP-A code neutronics/thermal hydraulics coupling

    International Nuclear Information System (INIS)

    Froehle, P.H.; Wei, T.Y.C.; Weber, D.P.; Henry, R.E.

    1998-01-01

    A major new feature, one-dimensional space-time kinetics, has been added to a developmental version of the MAAP code through the introduction of the DIF3D-K module. This code is referred to as MAAP-A. To reduce the overall job time required, a capability has been provided to run the MAAP-A code in parallel. The parallel version of MAAP-A utilizes two machines running in parallel, with the DIF3D-K module executing on one machine and the rest of the MAAP-A code executing on the other machine. Timing results obtained during the development of the capability indicate that reductions in time of 30--40% are possible. The parallel version can be run on two SPARC 20 (SUN OS 5.5) workstations connected through the ethernet. MPI (Message Passing Interface standard) needs to be implemented on the machines. If necessary the parallel version can also be run on only one machine. The results obtained running in this one-machine mode identically match the results obtained from the serial version of the code
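
    The two-machine arrangement described above (the DIF3D-K kinetics module executing on one machine and the rest of MAAP-A on the other, coupled through MPI) can be sketched as a pair of ranks exchanging coupling data every time step. The exchanged quantities and update formulas below are placeholders for illustration, not the actual MAAP-A/DIF3D-K interface; run with two MPI ranks.

      /* Illustrative two-rank coupling sketch: rank 0 stands in for the
       * kinetics module, rank 1 for the thermal-hydraulics side. */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          int rank;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          double power = 1.0, fuel_temp = 900.0;    /* placeholder coupling data */

          for (int step = 0; step < 10; ++step) {
              if (rank == 0) {                      /* kinetics side */
                  power *= 1.01;                    /* placeholder kinetics update */
                  MPI_Send(&power, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
                  MPI_Recv(&fuel_temp, 1, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD,
                           MPI_STATUS_IGNORE);
              } else if (rank == 1) {               /* thermal-hydraulics side */
                  MPI_Recv(&power, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                           MPI_STATUS_IGNORE);
                  fuel_temp = 900.0 + 50.0 * power; /* placeholder feedback model */
                  MPI_Send(&fuel_temp, 1, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD);
              }
          }
          if (rank == 0)
              printf("final power %.3f, fuel temperature %.1f\n", power, fuel_temp);
          MPI_Finalize();
          return 0;
      }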

  12. Parallel processing of two-dimensional Sn transport calculations

    International Nuclear Information System (INIS)

    Uematsu, M.

    1997-01-01

    A parallel processing method for the two-dimensional S n transport code DOT3.5 has been developed to achieve a drastic reduction in computation time. In the proposed method, parallelization is achieved with angular domain decomposition and/or space domain decomposition. The calculational speed of parallel processing by angular domain decomposition is largely influenced by frequent communications between processing elements. To assess parallelization efficiency, sample problems with up to 32 x 32 spatial meshes were solved with a Sun workstation using the PVM message-passing library. As a result, parallel calculation using 16 processing elements, for example, was found to be nine times as fast as that with one processing element. As for parallel processing by geometry segmentation, the influence of processing element communications on computation time is small; however, discontinuity at the segment boundary degrades convergence speed. To accelerate the convergence, an alternate sweep of angular flux in conjunction with space domain decomposition and a two-step rescaling method consisting of segmentwise rescaling and ordinary pointwise rescaling have been developed. By applying the developed method, the number of iterations needed to obtain a converged flux solution was reduced by a factor of 2. As a result, parallel calculation using 16 processing elements was found to be 5.98 times as fast as the original DOT3.5 calculation
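
    A minimal C/MPI sketch of angular domain decomposition conveys why communication matters so much here: each process sweeps only a subset of the discrete ordinates, but the scalar flux must be reassembled globally at every iteration. The mesh, quadrature size, and placeholder "sweep" below are assumptions for illustration, not DOT3.5 internals (which use PVM rather than MPI).

      /* Illustrative angular domain decomposition: each rank handles a subset
       * of ordinates; a global reduction assembles the scalar flux. */
      #include <mpi.h>
      #include <stdio.h>

      #define NX 32
      #define NY 32
      #define NANG 16        /* assumed total number of discrete ordinates */

      int main(int argc, char **argv)
      {
          int rank, size;
          static double phi_local[NX * NY], phi[NX * NY];

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          /* Each process sweeps only its share of the angular directions. */
          for (int m = rank; m < NANG; m += size) {
              double weight = 1.0 / NANG;
              for (int j = 0; j < NX * NY; ++j)
                  phi_local[j] += weight * 1.0;   /* placeholder angular sweep */
          }

          /* Frequent global communication: combine the angular contributions
           * into the scalar flux on every process, each inner iteration. */
          MPI_Allreduce(phi_local, phi, NX * NY, MPI_DOUBLE, MPI_SUM,
                        MPI_COMM_WORLD);

          if (rank == 0)
              printf("scalar flux at first mesh cell: %g\n", phi[0]);
          MPI_Finalize();
          return 0;
      }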

  13. A methodology to emulate and evaluate a productive virtual workstation

    Science.gov (United States)

    Krubsack, David; Haberman, David

    1992-01-01

    The Advanced Display and Computer Augmented Control (ADCACS) Program at ACT is sponsored by NASA Ames to investigate the broad field of technologies which must be combined to design a 'virtual' workstation for the Space Station Freedom. This program is progressing in several areas and resulted in the definition of requirements for a workstation. A unique combination of technologies at the ACT Laboratory have been networked to effectively create an experimental environment. This experimental environment allows the integration of nonconventional input devices with a high power graphics engine within the framework of an expert system shell which coordinates the heterogeneous inputs with the 'virtual' presentation. The flexibility of the workstation is evolved as experiments are designed and conducted to evaluate the condition descriptions and rule sets of the expert system shell and its effectiveness in driving the graphics engine. Workstation productivity has been defined by the achievable performance in the emulator of the calibrated 'sensitivity' of input devices, the graphics presentation, the possible optical enhancements to achieve a wide field of view color image and the flexibility of conditional descriptions in the expert system shell in adapting to prototype problems.

  14. Supervisory Control Technique For An Assembly Workstation As A Dynamic Discrete Event System

    Directory of Open Access Journals (Sweden)

    Daniela Cristina CERNEGA

    2001-12-01

    Full Text Available This paper proposes a control problem statement in the framework of the supervisory control technique for assembly workstations. A desired behaviour of an assembly workstation is analysed. The behaviour of such a workstation is cyclic and some linguistic properties are established. In this paper, an algorithm is proposed for the computation of the supremal controllable language of the desired language of the closed system. Copyright © 2001 IFAC.

  15. Nuclear plant analyzer desktop workstation

    International Nuclear Information System (INIS)

    Beelman, R.J.

    1990-01-01

    In 1983 the U.S. Nuclear Regulatory Commission (USNRC) commissioned the Idaho National Engineering Laboratory (INEL) to develop a Nuclear Plant Analyzer (NPA). The NPA was envisioned as a graphical aid to assist reactor safety analysts in comprehending the results of thermal-hydraulic code calculations. The development was to proceed in three distinct phases culminating in a desktop reactor safety workstation. The desktop NPA is now complete. The desktop NPA is a microcomputer based reactor transient simulation, visualization and analysis tool developed at INEL to assist an analyst in evaluating the transient behavior of nuclear power plants by means of graphic displays. The NPA desktop workstation integrates advanced reactor simulation codes with online computer graphics allowing reactor plant transient simulation and graphical presentation of results. The graphics software, written exclusively in ANSI standard C and FORTRAN 77 and implemented over the UNIX/X-windows operating environment, is modular and is designed to interface to the NRC's suite of advanced thermal-hydraulic codes to the extent allowed by that code. Currently, full, interactive, desktop NPA capabilities are realized only with RELAP5

  16. System engineering workstations - critical tool in addressing waste storage, transportation, or disposal

    International Nuclear Information System (INIS)

    Mar, B.W.

    1987-01-01

    The ability to create, evaluate, operate, and manage waste storage, transportation, and disposal systems (WSTDSs) is greatly enhanced when automated tools are available to support the generation of the voluminous mass of documents and data associated with the system engineering of the program. A system engineering workstation is an optimized set of hardware and software that provides such automated tools to those performing system engineering functions. This paper explores the functions that need to be performed by a WSTDS system engineering workstation. While the latter stages of a major WSTDS may require a mainframe computer and specialized software systems, most of the required system engineering functions can be supported by a system engineering workstation consisting of a personnel computer and commercial software. These findings suggest system engineering workstations for WSTDS applications will cost less than $5000 per unit, and the payback on the investment can be realized in a few months. In most cases the major cost element is not the capital costs of hardware or software, but the cost to train or retrain the system engineers in the use of the workstation and to ensure that the system engineering functions are properly conducted

  17. Performance of MPI parallel processing implemented by MCNP5/ MCNPX for criticality benchmark problems

    International Nuclear Information System (INIS)

    Mark Dennis Usang; Mohd Hairie Rabir; Mohd Amin Sharifuldin Salleh; Mohamad Puad Abu

    2012-01-01

    MPI parallelism is implemented on a SUN workstation for running MCNPX and on the High Performance Computing Facility (HPC) for running MCNP5. Twenty-three input files obtained from the MCNP Criticality Validation Suite are utilized to evaluate the amount of speed-up achievable by using the parallel capabilities of MPI. More importantly, we study the economics of using more processors and the type of problem where the performance gains are obvious. This is important to enable better practices of resource sharing, especially of the HPC facility's processing time. Future endeavours in this direction might even reveal clues for best MCNP5/MCNPX coding practices for optimum performance of MPI parallelism. (author)
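
    The "economics of using more processors" can be framed with Amdahl's law: speedup = 1 / (s + (1 - s)/p) for serial fraction s and p processors, with parallel efficiency equal to speedup/p. The short C program below tabulates both; the 5% serial fraction is an assumed figure for illustration, not a measured MCNP value.

      /* Illustrative Amdahl's-law tabulation of speedup and efficiency versus
       * processor count; the serial fraction is an assumption. */
      #include <stdio.h>

      int main(void)
      {
          const double serial_fraction = 0.05;   /* assumed non-parallel share */

          printf("%6s %10s %12s\n", "procs", "speedup", "efficiency");
          for (int p = 1; p <= 64; p *= 2) {
              double speedup = 1.0 /
                  (serial_fraction + (1.0 - serial_fraction) / p);
              printf("%6d %10.2f %11.0f%%\n", p, speedup, 100.0 * speedup / p);
          }
          return 0;
      }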

  18. Impact of workstations on criticality analyses at ABB combustion engineering

    International Nuclear Information System (INIS)

    Tarko, L.B.; Freeman, R.S.; O'Donnell, P.F.

    1993-01-01

    During 1991, ABB Combustion Engineering (ABB C-E) made the transition from a CDC Cyber 990 mainframe for nuclear criticality safety analyses to Hewlett Packard (HP)/Apollo workstations. The primary motivation for this change was the improved economics of the workstation and maintaining state-of-the-art technology. The Cyber 990 utilized the NOS operating system with a 60-bit word size. The CPU memory size was limited to 131 100 words of directly addressable memory with an extended 250000 words available. The Apollo workstation environment at ABB consists of HP/Apollo-9000/400 series desktop units used by most application engineers, networked with HP/Apollo DN10000 platforms that use a 32-bit word size and function as the computer servers and network administrative CPUs, providing a virtual memory system.

  19. Evaluation of PC-based diagnostic radiology workstations

    International Nuclear Information System (INIS)

    Pollack, T.; Brueggenwerth, G.; Kaulfuss, K.; Niederlag, W.

    2000-01-01

    Material and Methods: Between February 1999 and September 1999, medical users at the Dresden-Friedrichstadt hospital, Germany, tested 7 types of radiology diagnostic workstations. Two types of test methods were used: In test type 1, ergonomic and handling functions were evaluated objectively against 78 selected user requirements. In test type 2, radiologists and radiographers (3+4) performed 23 workflow steps with a subjective evaluation. Results: Using a progressive rating, no product could fully meet the user requirements. As a result of the summary evaluation for test 1 and test 2, the following compliance ratings were calculated for the different products: Rad Works (66%), Magic View (63%), ID-Report (58%), Impax 3000 (53%), Medical Workstation (52%), Pathspeed (46%) and Autorad (39%). (orig.) [de

  20. Files for workstations with ionizing radiation risks: variation in the use of gamma densitometers

    International Nuclear Information System (INIS)

    Tournadre, A.

    2008-01-01

    After a brief presentation of the different gamma densitometers proposed by MLPC to measure roadway density, and having outlined the support role of the provider, the author describes the form and content of workstation files for workstations exhibiting a risk related to ionizing radiation. He gives an analytical overview of dose calculation: analysis of instrument use phases, exposure duration, dose rates, and the way these dose rates are introduced into the workstation file. He sets out how the different procedures are to be followed by the radiation protection expert within the company. He points out that workstation files are very useful as an information feedback tool.

  1. The microcomputer workstation - An alternate hardware architecture for remotely sensed image analysis

    Science.gov (United States)

    Erickson, W. K.; Hofman, L. B.; Donovan, W. E.

    1984-01-01

    Difficulties regarding the digital image analysis of remotely sensed imagery can arise in connection with the extensive calculations required. In the past, an expensive large to medium mainframe computer system was needed for performing these calculations. For image-processing applications smaller minicomputer-based systems are now used by many organizations. The costs for such systems are still in the range from $100K to $300K. Recently, as a result of new developments, the use of low-cost microcomputers for image processing and display systems appeared to have become feasible. These developments are related to the advent of the 16-bit microprocessor and the concept of the microcomputer workstation. Earlier 8-bit microcomputer-based image processing systems are briefly examined, and a computer workstation architecture is discussed. Attention is given to a microcomputer workstation developed by Stanford University, and the design and implementation of a workstation network.

  2. Argo workstation: a key component of operational oceanography

    Science.gov (United States)

    Dong, Mingmei; Xu, Shanshan; Miao, Qingsheng; Yue, Xinyang; Lu, Jiawei; Yang, Yang

    2018-02-01

    Operational oceanography depends on the quantity, quality, and availability of data sets and on the timeliness and effectiveness of data products. Without steady and strong support from an operational system, operational oceanography cannot proceed far. In this paper we describe an integrated platform named Argo Workstation. It operates as a data processing and management system capable of data collection, automatic data quality control, visual data checking, statistical data search, and data service. Once set up, the Argo workstation delivers global, high-quality Argo data to users every day in a timely and effective manner. It has not only played a key role in operational oceanography but has also set an example for other operational systems.

  3. Parallelization and automatic data distribution for nuclear reactor simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  4. Parallelization and automatic data distribution for nuclear reactor simulations

    International Nuclear Information System (INIS)

    Liebrock, L.M.

    1997-01-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed
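
    A minimal sketch of the data-parallel pattern emphasized in the two records above, in which each processor owns a contiguous block of cells and only adjacent subdomains exchange boundary (ghost) data. This is an illustration only, not code from the reports; mpi4py, the cell count, and the diffusion-style update are assumptions made for the example.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_global = 1024                      # total number of cells (assumed)
      n_local = n_global // size           # assumes size divides n_global evenly
      u = np.zeros(n_local + 2)            # local cells plus one ghost cell per side
      u[1:-1] = rank                       # dummy initial condition

      left = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      for step in range(100):
          # exchange ghost cells: only adjacent subdomains communicate
          comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
          comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
          # explicit diffusion-like update on interior cells
          u[1:-1] += 0.1 * (u[:-2] - 2.0 * u[1:-1] + u[2:])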

  5. Migration of nuclear criticality safety software from a mainframe to a workstation environment

    International Nuclear Information System (INIS)

    Bowie, L.J.; Robinson, R.C.; Cain, V.R.

    1993-01-01

    The Nuclear Criticality Safety Department (NCSD), Oak Ridge Y-12 Plant has undergone the transition of executing the Martin Marietta Energy Systems Nuclear Criticality Safety Software (NCSS) on IBM mainframes to a Hewlett-Packard (HP) 9000/730 workstation (NCSSHP). NCSSHP contains the following configuration controlled modules and cross-section libraries: BONAMI, CSAS, GEOMCHY, ICE, KENO IV, KENO Va, MODIIFY, NITAWL SCALE, SLTBLIB, XSDRN, UNIXLIB, albedos library, weights library, 16-Group HANSEN-ROACH master library, 27-Group ENDF/B-IV master library, and standard composition library. This paper will discuss the method used to choose the workstation, the hardware setup of the chosen workstation, an overview of Y-12 software quality assurance and configuration control methodology, code validation, difficulties encountered in migrating the codes, and advantages to migrating to a workstation environment

  6. A compositional reservoir simulator on distributed memory parallel computers

    International Nuclear Information System (INIS)

    Rame, M.; Delshad, M.

    1995-01-01

    This paper presents the application of distributed memory parallel computers to field scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/960 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes the porting to new parallel platforms straightforward. Results of the distributed memory computing performance of the parallel simulator are presented for field scale applications such as tracer flood and polymer flood. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented
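
    The even data layout attempted by the set-up routine described above can be pictured with a small sketch (illustrative only, not UTCHEM code): distribute nx grid cells over nproc processors so that the counts differ by at most one, then extend each piece by one ghost cell on each interior side for stencil computation.

      def layout(nx, nproc):
          """Return (lo, hi) index ranges, including ghost cells, per processor."""
          base, extra = divmod(nx, nproc)
          sizes = [base + (1 if p < extra else 0) for p in range(nproc)]
          pieces, start = [], 0
          for p, n in enumerate(sizes):
              lo = start - (1 if p > 0 else 0)              # left ghost cell
              hi = start + n + (1 if p < nproc - 1 else 0)  # right ghost cell
              pieces.append((lo, hi))
              start += n
          return pieces

      print(layout(103, 4))   # -> [(0, 27), (25, 53), (51, 79), (77, 103)]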

  7. Workstations as consoles for the CERN-PS complex, setting-up the environment

    International Nuclear Information System (INIS)

    Antonsanti, P.; Arruat, M.; Bouche, J.M.; Cons, L.; Deloose, Y.; Di Maio, F.

    1992-01-01

    Within the framework of the rejuvenation project of the CERN control systems, commercial workstations have to replace existing home-designed operator consoles. RISC-based workstations with UNIX, X-window TM and OSF/Motif TM have been introduced for the control of the PS complex. The first versions of general functionalities like synoptic display, program selection and control panels have been implemented and the first large scale application has been realized. This paper describes the different components of the workstation environment for the implementation of the applications. The focus is on the set of tools which have been used, developed or integrated, and on how we plan to make them evolve. (author)

  8. Non-contact methods for NDT of aeronautical structures : An image processing workstation for thermography

    OpenAIRE

    Azzarelli, Luciano; Chimenti, Massimo; Salvetti, Ovidio

    1992-01-01

    The main goals of the Istituto di Elaborazione della Informazione in Task 4., Subtasks 4.3.1 (Image Processing) and 4.3.2 (Workstation Architecture) were the study of thermograms features, the design of the architecture of a customized workstation and the project of specialized algorithms for thermal image analysis. Thermograms features pertain to data acquisition, data archiving and data processing; following general study some basic requirements for the workstation were defined. "Data acqui...

  9. Shoulder girdle muscle activity and fatigue in traditional and improved design carpet weaving workstations.

    Science.gov (United States)

    Allahyari, Teimour; Mortazavi, Narges; Khalkhali, Hamid Reza; Sanjari, Mohammad Ali

    2016-01-01

    Work-related musculoskeletal disorders in the neck and shoulder regions are common among carpet weavers. Working for prolonged hours in a static and awkward posture can result in increased muscle activity and may lead to musculoskeletal disorders. Ergonomic workstation improvements can reduce muscle fatigue and the risk of musculoskeletal disorders. The aim of this study is to assess and compare upper trapezius and middle deltoid muscle activity in two carpet weaving workstations, one of traditional design and one of improved design. These two workstations were simulated in a laboratory and 12 women carpet weavers worked for 3 h. Electromyography (EMG) signals were recorded during work in the bilateral upper trapezius and bilateral middle deltoid. The root mean square (RMS) and median frequency (MF) values were calculated and used to assess muscle load and fatigue. Repeated-measures ANOVA was performed to assess the effect of independent variables on muscular activity and fatigue. The participants were asked to report shoulder region fatigue on Borg's Category-Ratio scale (Borg CR-10). Root mean square values in workstation A are significantly higher than in workstation B. Furthermore, EMG amplitude was higher in the bilateral trapezius than in the bilateral deltoid. However, muscle fatigue was not observed in either of the workstations. The results of the study revealed that muscle load in the traditional workstation was high, but fatigue was not observed. Further studies investigating other muscles involved in carpet weaving tasks are recommended. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  10. Energy-efficiency based classification of the manufacturing workstation

    Science.gov (United States)

    Frumuşanu, G.; Afteni, C.; Badea, N.; Epureanu, A.

    2017-08-01

    EU Directive 92/75/EC established for the first time an energy consumption labelling scheme, further implemented by several other directives. As a consequence, many products nowadays (e.g. home appliances, tyres, light bulbs, houses) carry an EU Energy Label when offered for sale or rent. Several energy consumption models of manufacturing equipment have also been developed. This paper proposes an energy-efficiency-based classification of the manufacturing workstation, aiming to characterize its energetic behaviour. The concept of energy efficiency of the manufacturing workstation is defined. On this basis, a classification methodology has been developed. It refers to specific criteria and their evaluation modalities, together with the definition and delimitation of energy efficiency classes. The position of the energy class is defined by the amount of energy needed by the workstation at the middle point of its operating domain, while its extent is determined by the value of the first coefficient of the Taylor series that approximates the dependence between the energy consumption and the chosen parameter of the working regime. The main domain of interest for this classification appears to be the optimization of manufacturing activity planning and programming. A case study regarding the classification of an actual lathe from the energy efficiency point of view, based on two different approaches (analytical and numerical), is also included.

  11. 40 CFR 86.1312-2007 - Filter stabilization and microbalance workstation environmental conditions, microbalance...

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Filter stabilization and microbalance workstation environmental conditions, microbalance specifications, and particulate matter filter handling and... Particulate Exhaust Test Procedures § 86.1312-2007 Filter stabilization and microbalance workstation...

  12. (Nearly) portable PIC code for parallel computers

    International Nuclear Information System (INIS)

    Decyk, V.K.

    1993-01-01

    As part of the Numerical Tokamak Project, the author has developed a (nearly) portable, one-dimensional version of the GCPIC algorithm for particle-in-cell codes on parallel computers. This algorithm uses a spatial domain decomposition for the fields, and passes particles from one domain to another as the particles move spatially. With only minor changes, the code has been run in parallel on the Intel Delta, the Cray C-90, the IBM ES/9000 and a cluster of workstations. After a line-by-line translation into CM Fortran, the code was also run on the CM-200. Impressive speeds have been achieved, both on the Intel Delta and the Cray C-90, around 30 nanoseconds per particle per time step. In addition, the author was able to isolate the data management modules, so that the physics modules were not changed much from their sequential version, and the data management modules can be used as "black boxes."
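
    The particle-passing step described above (field domains stay fixed; particles migrate between domains as they move) can be sketched as follows. This is an illustration of the idea only, not the GCPIC code; mpi4py, periodic neighbours and the particle arrays are assumptions, and a real code would move the velocity and charge arrays along with the positions.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      x_lo, x_hi = float(rank), float(rank + 1)   # local 1-D spatial domain [x_lo, x_hi)
      x = x_lo + np.random.rand(1000)             # particle positions
      x += np.random.randn(1000) * 0.01           # push: positions drift slightly

      go_left, go_right = x < x_lo, x >= x_hi     # particles that have left the domain
      stay = ~(go_left | go_right)

      left, right = (rank - 1) % size, (rank + 1) % size   # periodic neighbours

      # hand leaving particles to the neighbouring domains (object-based send/recv);
      # a real code would also wrap positions at the periodic boundary
      from_right = comm.sendrecv(x[go_left], dest=left, source=right)
      from_left = comm.sendrecv(x[go_right], dest=right, source=left)

      x = np.concatenate([x[stay], from_left, from_right])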

  13. High-performance floating-point image computing workstation for medical applications

    Science.gov (United States)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), in multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel selectable region of interest display. A 1280 x 1024 pixel screen with 1: 1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e

  14. BioPhotonics Workstation: 3D interactive manipulation, observation and characterization

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    2011-01-01

    In ppo.dk we have invented the BioPhotonics Workstation to be applied in 3D research on regulated microbial cell growth including their underlying physiological mechanisms, in vivo characterization of cell constituents and manufacturing of nanostructures and new materials.

  15. The impact of sit-stand office workstations on worker discomfort and productivity: a review.

    Science.gov (United States)

    Karakolis, Thomas; Callaghan, Jack P

    2014-05-01

    This review examines the effectiveness of sit-stand workstations at reducing worker discomfort without causing a decrease in productivity. Four databases were searched for studies on sit-stand workstations, and five selection criteria were used to identify appropriate articles. Fourteen articles were identified that met at least three of the five selection criteria. Seven of the identified studies reported either local, whole-body, or both local and whole-body subjective discomfort scores. Six of these studies indicated that implementing sit-stand workstations in an office environment led to lower levels of reported subjective discomfort (three of which were statistically significant). Therefore, this review concluded that sit-stand workstations are likely effective in reducing perceived discomfort. Eight of the identified studies reported a productivity outcome. Three of these studies reported an increase in productivity during sit-stand work, four reported no effect on productivity, and one reported mixed productivity results. Therefore, this review concluded that sit-stand workstations do not cause a decrease in productivity. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  16. Direct and iterative algorithms for the parallel solution of the one-dimensional macroscopic Navier-Stokes equations

    International Nuclear Information System (INIS)

    Doster, J.M.; Sills, E.D.

    1986-01-01

    Current efforts are under way to develop and evaluate numerical algorithms for the parallel solution of the large sparse matrix equations associated with the finite difference representation of the macroscopic Navier-Stokes equations. Previous work has shown that these equations can be cast into smaller coupled matrix equations suitable for solution utilizing multiple computer processors operating in parallel. The individual processors themselves may exhibit parallelism through the use of vector pipelines. This work has concentrated on the one-dimensional drift flux form of the Navier-Stokes equations. Direct and iterative algorithms that may be suitable for implementation on parallel computer architectures are evaluated in terms of accuracy and overall execution speed. This work has application to engineering and training simulations, on-line process control systems, and engineering workstations where increased computational speeds are required

  17. Xyce parallel electronic simulator : reference guide.

    Energy Technology Data Exchange (ETDEWEB)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide. The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. It is targeted specifically to run on large-scale parallel computing platforms but also runs well on a variety of architectures including single-processor workstations. It also aims to support a variety of devices and models specific to Sandia needs. This document is intended to complement the Xyce Users Guide. It contains comprehensive, detailed information about a number of topics pertinent to the usage of Xyce. Included in this document are a netlist reference for the input-file commands and elements supported within Xyce; a command line reference, which describes the available command line arguments for Xyce; and quick-references for users of other circuit codes, such as Orcad's PSpice and Sandia's ChileSPICE.

  18. Parallel computing in genomic research: advances and applications

    Directory of Open Access Journals (Sweden)

    Ocaña K

    2015-11-01

    Full Text Available Kary Ocaña,1 Daniel de Oliveira2 1National Laboratory of Scientific Computing, Petrópolis, Rio de Janeiro, 2Institute of Computing, Fluminense Federal University, Niterói, Brazil Abstract: Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. Keywords: high-performance computing, genomic research, cloud computing, grid computing, cluster computing, parallel computing

  19. ISDN communication: Its workstation technology and application system

    Energy Technology Data Exchange (ETDEWEB)

    Sugimura, T; Ogiwara, Y; Saito, T [Hitachi, Ltd., Tokyo (Japan)

    1991-07-01

    This report describes integrated services digital network (ISDN) technology that allows workstations to process multimedia data, and application systems for advanced group teleworking that use such technology. Hitachi has developed workstations that are more powerful, have more functions, and have larger memory capacities. These factors allowed media requiring high-speed processing of large quantities of voice and image data to be integrated into the world of conventional text data processing and communications. In addition, the group teleworking application system has a large impact through improvements in the office environment, changes in the style of office work, and the appearance of new businesses. A prototype of this system was exhibited and demonstrated at TELECOM91. 1 ref., 4 figs., 2 tabs.

  20. Virtual interface environment workstations

    Science.gov (United States)

    Fisher, S. S.; Wenzel, E. M.; Coler, C.; Mcgreevy, M. W.

    1988-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed at NASA's Ames Research Center for use as a multipurpose interface environment. This Virtual Interface Environment Workstation (VIEW) system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, research scenarios, and research directions are described.

  1. Effect of One Carpet Weaving Workstation on Upper Trapezius Fatigue

    Directory of Open Access Journals (Sweden)

    Neda Mahdavi

    2016-03-01

    Full Text Available Introduction: This study aimed to investigate the effect of carpet weaving at a proposed workstation on Upper Trapezius (UTr) fatigue during a task cycle. Fatigue in the shoulder is one of the most important precursors of upper limb musculoskeletal disorders, and one of the most prevalent musculoskeletal disorders among carpet weavers is disorder of the shoulder region. Methods: This cross-sectional study included eight females and three males. During an 80-minute cycle of carpet weaving, Electromyography (EMG) signals of the right and left UTr were recorded continuously by surface EMG. After the raw signals were processed, RMS and MPF were taken as the EMG amplitude and frequency parameters, respectively. Time-series models and the JASA method were used to assess and classify the EMG parameter changes during the working time. Results: According to the JASA method, 58%, 16%, 8% and 8% of the participants experienced fatigue, force increase, force decrease and recovery, respectively, in the right UTr. Also, 50%, 25%, 8% and 16% of the participants experienced fatigue, force increase, force decrease and recovery, respectively, in the left UTr. Conclusions: For the major portion of the weavers, the dominant status in the left and right UTr was fatigue at the proposed workstation during a carpet weaving task cycle. The results of the study provide detailed information for the optimal design of workstations. Further studies should focus on fatigue in various muscles and time periods for designing an appropriate and ergonomic carpet weaving workstation

  2. Helical computed tomography and the workstation: introduction to a symbiosis

    International Nuclear Information System (INIS)

    Garcia-Santos, J.M.

    1997-01-01

    We give a brief introduction to the possibilities of a helical computed tomography system when it is combined with a powerful workstation. The fast, volumetric mode of acquisition is, essentially, the main advantage of this type of computed tomography. Studying the acquired information anatomically and radiopathologically on a workstation (thanks to multiplanar and 3D reconstruction) significantly increases our analytical capability for each patient. Only clinical and radiological experience will tell us what place this symbiosis should occupy among our diagnostic tools. (Author) 11 refs

  3. 76 FR 10403 - Hewlett Packard (HP), Global Product Development, Engineering Workstation Refresh Team, Working...

    Science.gov (United States)

    2011-02-24

    ...), Global Product Development, Engineering Workstation Refresh Team, Working On-Site at General Motors..., Non-Information Technology Business Development Team and Engineering Application Support Team, working... Hewlett Packard, Global Product Development, Engineering Workstation Refresh Team, working on-site at...

  4. Parallel performance of the angular versus spatial domain decomposition for discrete ordinates transport methods

    International Nuclear Information System (INIS)

    Fischer, J.W.; Azmy, Y.Y.

    2003-01-01

    A previously reported parallel performance model for Angular Domain Decomposition (ADD) of the Discrete Ordinates method for solving multidimensional neutron transport problems is revisited for further validation. Three communication schemes: native MPI, the bucket algorithm, and the distributed bucket algorithm, are included in the validation exercise that is successfully conducted on a Beowulf cluster. The parallel performance model is comprised of three components: serial, parallel, and communication. The serial component is largely independent of the number of participating processors, P, while the parallel component decreases like 1/P. These two components are independent of the communication scheme, in contrast with the communication component that typically increases with P in a manner highly dependent on the global reduced algorithm. Correct trends for each component and each communication scheme were measured for the Arbitrarily High Order Transport (AHOT) code, thus validating the performance models. Furthermore, extensive experiments illustrate the superiority of the bucket algorithm. The primary question addressed in this research is: for a given problem size, which domain decomposition method, angular or spatial, is best suited to parallelize Discrete Ordinates methods on a specific computational platform? We address this question for three-dimensional applications via parallel performance models that include parameters specifying the problem size and system performance: the above-mentioned ADD, and a previously constructed and validated Spatial Domain Decomposition (SDD) model. We conclude that for large problems the parallel component dwarfs the communication component even on moderately large numbers of processors. The main advantages of SDD are: (a) scalability to higher numbers of processors of the order of the number of computational cells; (b) smaller memory requirement; (c) better performance than ADD on high-end platforms and large number of
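
    The three-component structure of the performance model described above (a serial part that is independent of P, a parallel part that decreases like 1/P, and a communication part that grows with P) can be illustrated with made-up constants; the numbers below are assumptions for illustration, not values from the paper.

      def run_time(P, t_serial=1.0, t_parallel=100.0, t_comm_per_proc=0.05):
          """Three-component model: serial + parallel/P + communication(P)."""
          return t_serial + t_parallel / P + t_comm_per_proc * P

      for P in (1, 4, 16, 64, 256):
          T = run_time(P)
          print(f"P={P:4d}  T={T:8.2f}  speedup={run_time(1) / T:6.2f}")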

  5. Parallelized implicit propagators for the finite-difference Schrödinger equation

    Science.gov (United States)

    Parker, Jonathan; Taylor, K. T.

    1995-08-01

    We describe the application of block Gauss-Seidel and block Jacobi iterative methods to the design of implicit propagators for finite-difference models of the time-dependent Schrödinger equation. The block-wise iterative methods discussed here are mixed direct-iterative methods for solving simultaneous equations, in the sense that direct methods (e.g. LU decomposition) are used to invert certain block sub-matrices, and iterative methods are used to complete the solution. We describe parallel variants of the basic algorithm that are well suited to the medium- to coarse-grained parallelism of work-station clusters, and MIMD supercomputers, and we show that under a wide range of conditions, fine-grained parallelism of the computation can be achieved. Numerical tests are conducted on a typical one-electron atom Hamiltonian. The methods converge robustly to machine precision (15 significant figures), in some cases in as few as 6 or 7 iterations. The rate of convergence is nearly independent of the finite-difference grid-point separations.
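
    A minimal block Jacobi sketch in the spirit of the mixed direct-iterative approach described above: each diagonal block is factorized once and solved directly, while the coupling between blocks is handled iteratively. The matrix here is a generic test system, not the finite-difference Schrödinger propagator of the paper, and convergence is assumed (e.g. block-diagonal dominance).

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      def block_jacobi(A, b, block, iters=50):
          """Solve A x = b with block Jacobi; diagonal blocks are solved by LU."""
          n = len(b)
          x = np.zeros_like(b)
          starts = range(0, n, block)
          lu = {s: lu_factor(A[s:s + block, s:s + block]) for s in starts}  # direct part
          for _ in range(iters):
              x_new = np.empty_like(x)
              for s in starts:
                  sl = slice(s, s + block)
                  # right-hand side minus the couplings to all other blocks
                  r = b[sl] - A[sl, :] @ x + A[sl, sl] @ x[sl]
                  x_new[sl] = lu_solve(lu[s], r)
              x = x_new
          return x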

  6. A nuclear power plant system engineering workstation

    International Nuclear Information System (INIS)

    Mason, J.H.; Crosby, J.W.

    1989-01-01

    System engineers offer an approach for effective technical support for the operation and maintenance of nuclear power plants. System engineer groups are being set up by most utilities in the United States, and the Institute of Nuclear Power Operations (INPO) and the U.S. Nuclear Regulatory Commission (NRC) have endorsed the concept. The INPO Good Practice and a survey of system engineer programs in the southeastern United States provide descriptions of system engineering programs. The purpose of this paper is to describe a process for developing a design for a department-level information network of workstations for system engineering groups. The process includes the following: (1) application of a formal information engineering methodology; (2) analysis of system engineer functions and activities; (3) use of Electric Power Research Institute (EPRI) Plant Information Network (PIN) data; and (4) application of the Information Engineering Workbench. The resulting design for this system engineer workstation can provide a reference for the design of plant-specific systems

  7. High-speed parallel implementation of a modified PBR algorithm on DSP-based EH topology

    Science.gov (United States)

    Rajan, K.; Patnaik, L. M.; Ramakrishna, J.

    1997-08-01

    Algebraic Reconstruction Technique (ART) is an age-old method used for solving the problem of three-dimensional (3-D) reconstruction from projections in electron microscopy and radiology. In medical applications, direct 3-D reconstruction is at the forefront of investigation. The simultaneous iterative reconstruction technique (SIRT) is an ART-type algorithm with the potential of generating, in a few iterations, tomographic images of a quality comparable to that of convolution backprojection (CBP) methods. Pixel-based reconstruction (PBR) is similar to SIRT reconstruction, and it has been shown that PBR algorithms give better quality pictures compared to those produced by SIRT algorithms. In this work, we propose a few modifications to the PBR algorithms. The modified algorithms are shown to give better quality pictures compared to PBR algorithms. The PBR algorithm and the modified PBR algorithms are highly compute-intensive; not many attempts have been made to reconstruct objects in the true 3-D sense because of the high computational overhead. In this study, we have developed parallel two-dimensional (2-D) and 3-D reconstruction algorithms based on modified PBR. We attempt to solve the two problems encountered by the PBR and modified PBR algorithms, i.e., the long computational time and the large memory requirements, by parallelizing the algorithm on a multiprocessor system. We investigate the possible task and data partitioning schemes by exploiting the potential parallelism in the PBR algorithm subject to minimizing the memory requirement. We have implemented an extended hypercube (EH) architecture for the high-speed execution of the 3-D reconstruction algorithm using the commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs) and dual-port random access memories (DPR) as channels between the PEs. We discuss and compare the performances of the PBR algorithm on an IBM 6000 RISC workstation, on a Silicon

  8. User interface on networked workstations for MFTF plasma diagnostic instruments

    International Nuclear Information System (INIS)

    Renbarger, V.L.; Balch, T.R.

    1985-01-01

    A network of Sun-2/170 workstations is used to provide an interface to the MFTF-B Plasma Diagnostics System at Lawrence Livermore National Laboratory. The Plasma Diagnostics System (PDS) is responsible for control of MFTF-B plasma diagnostic instrumentation. An EtherNet Local Area Network links the workstations to a central multiprocessing system which furnishes data processing, data storage and control services for PDS. These workstations permit a physicist to command data acquisition, data processing, instrument control, and display of results. The interface is implemented as a metaphorical desktop, which helps the operator form a mental model of how the system works. As on a real desktop, functions are provided by sheets of paper (windows on a CRT screen) called worksheets. The worksheets may be invoked by pop-up menus and may be manipulated with a mouse. These worksheets are actually tasks that communicate with other tasks running in the central computer system. By making entries in the appropriate worksheet, a physicist may specify data acquisition or processing, control a diagnostic, or view a result

  9. Diagnostic image workstations for PACS

    International Nuclear Information System (INIS)

    Meyer-Ebrecht, D.; Fasel, B.; Dahm, M.; Kaupp, A.; Schilling, R.

    1990-01-01

    Image workstations will be the 'window' to the complex infrastructure of PACS with its intertwined image modalities (image sources, image data bases and image processing devices) and data processing modalities (patient data bases, departmental and hospital information systems). They will serve for user-to-system dialogues, image display and local processing of data as well as images. Their hardware and software structures have to be optimized towards an efficient throughput and processing of image data. (author). 10 refs

  10. Next Generation BioPhotonics Workstation

    DEFF Research Database (Denmark)

    Bañas, Andrew Rafael; Palima, Darwin; Tauro, Sandeep

    We will outline the specs of our BioPhotonics Workstation, which can generate up to 100 reconfigurable laser traps, making 3D real-time optical manipulation of advanced structures, cells or tiny particles possible with the use of joysticks or gaming devices. Optically actuated nanoneedles may be functionalized or directly used to perforate targeted cells at specific locations or force the complete separation of dividing cells, among other functions that can be very useful for microbiologists or biomedical researchers.

  11. Assessment of a cooperative workstation.

    OpenAIRE

    Beuscart, R. J.; Molenda, S.; Souf, N.; Foucher, C.; Beuscart-Zephir, M. C.

    1996-01-01

    Groupware and new Information Technologies have now made it possible for people in different places to work together in synchronous cooperation. Very often, designers of this new type of software are not provided with a model of the common workspace, which is prejudicial to software development and its acceptance by potential users. The authors take the example of a task of medical co-diagnosis, using a multi-media communication workstation. Synchronous cooperative work is made possible by us...

  12. Comparison of computer workstation with film for detecting setup errors

    International Nuclear Information System (INIS)

    Fritsch, D.S.; Boxwala, A.A.; Raghavan, S.; Coffee, C.; Major, S.A.; Muller, K.E.; Chaney, E.L.

    1997-01-01

    Purpose/Objective: Workstations designed for portal image interpretation by radiation oncologists provide image displays and image processing and analysis tools that differ significantly compared with the standard clinical practice of inspecting portal films on a light box. An implied but unproved assumption associated with the clinical implementation of workstation technology is that patient care is improved, or at least not adversely affected. The purpose of this investigation was to conduct observer studies to test the hypothesis that radiation oncologists can detect setup errors using a workstation at least as accurately as when following standard clinical practice. Materials and Methods: A workstation, PortFolio, was designed for radiation oncologists to display and inspect digital portal images for setup errors. PortFolio includes tools to enhance images; align cross-hairs, field edges, and anatomic structures on reference and acquired images; measure distances and angles; and view registered images superimposed on one another. In a well designed and carefully controlled observer study, nine radiation oncologists, including attendings and residents, used PortFolio to detect setup errors in realistic digitally reconstructed portal (DRPR) images computed from the NLM visible human data using a previously described approach † . Compared with actual portal images where absolute truth is ill defined or unknown, the DRPRs contained known translation or rotation errors in the placement of the fields over target regions in the pelvis and head. Twenty DRPRs with randomly induced errors were computed for each site. The induced errors were constrained to a plane at the isocenter of the target volume and perpendicular to the central axis of the treatment beam. Images used in the study were also printed on film. Observers interpreted the film-based images using standard clinical practice. The images were reviewed in eight sessions. During each session five images were

  13. Viewport: An object-oriented approach to integrate workstation software for tile and stack mode display

    OpenAIRE

    Ghosh, Srinka; Andriole, Katherine P.; Avrin, David E.

    1997-01-01

    Diagnostic workstation design has migrated towards display presentation in one of two modes: tiled images or stacked images. It is our impression that the workstation setup or configuration in each of these two modes is rather distinct. We sought to establish a commonality to simplify software design, and to enable a single descriptor method to facilitate folder manager development of “hanging” protocols. All current workstation designs use a combination of “off-screen” and “on-screen” memory...

  14. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    Science.gov (United States)

    Ratib, Osman M.; Huang, H. K.

    1990-07-01

    The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include: geometric and densitometric volumes and ejection fraction calculation from radionuclide and cine-angiograms; Fourier analysis of cardiac wall motion; vascular stenosis measurement; color-coded parametric display of regional flow distribution from dynamic coronary angiograms; and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. 1. Rationale and objectives of the project: Developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images. During the past decade

  15. Physics analysis workstation

    International Nuclear Information System (INIS)

    Johnstad, H.

    1989-06-01

    The Physics Analysis Workstation (PAW) is a high-level program providing data presentation and statistical or mathematical analysis. PAW has been developed at CERN as an instrument to assist physicists in the analysis and presentation of their data. The program is interfaced to a high level graphics package, based on basic underlying graphics. 3-D graphics capabilities are being implemented. The major objects in PAW are 1 or 2 dimensional binned event data with fixed number of entries per event, vectors, functions, graphics pictures, and macros. Command input is handled by an integrated user interface package, which allows for a variety of choices for input, either with typed commands, or in a tree structure menu driven mode. 6 refs., 1 fig

  16. An open architecture for medical image workstation

    Science.gov (United States)

    Liang, Liang; Hu, Zhiqiang; Wang, Xiangyun

    2005-04-01

    Dealing with the difficulties of integrating various medical image viewing and processing technologies with a variety of clinical and departmental information systems and, at the same time, overcoming the performance constraints in transferring and processing large-scale and ever-increasing image data in the healthcare enterprise, we design and implement a flexible, usable and high-performance architecture for medical image workstations. This architecture is not developed for radiology only, but for any workstation in any application environment that may need medical image retrieval, viewing, and post-processing. This architecture contains an infrastructure named Memory PACS and different kinds of image applications built on it. The Memory PACS is in charge of image data caching, pre-fetching and management. It provides image applications with high-speed image data access and very reliable DICOM network I/O. In dealing with the image applications, we use dynamic component technology to separate the performance-constrained modules from the flexibility-constrained modules so that different image viewing or processing technologies can be developed and maintained independently. We also develop a weakly coupled collaboration service, through which these image applications can communicate with each other or with third-party applications. We applied this architecture in developing our product line and it works well. In our clinical sites, this architecture is applied not only in the Radiology Department, but also in Ultrasonic, Surgery, Clinics, and the Consultation Center. Given that each concerned department has its particular requirements and business routines, along with the fact that they all have different image processing technologies and image display devices, our workstations are still able to maintain high performance and high usability.

  17. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    Energy Technology Data Exchange (ETDEWEB)

    PLIMPTON,STEVEN J.; SEIDEL,DAVID B.; PASIK,MICHAEL F.; COATS,REBECCA S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.

  18. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    International Nuclear Information System (INIS)

    Plimpton, Steven J.; Seidel, David B.; Pasik, Michael F.; Coats, Rebecca S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER
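
    One common load-balancing idea for particle-in-cell codes, in the spirit of the strategies discussed in the two records above (but not QUICKSILVER's actual algorithm), is to cut the grid into contiguous pieces whose particle counts are as equal as possible:

      import numpy as np

      def partition_by_weight(particles_per_cell, n_procs):
          """Cut a 1-D run of cells into n_procs contiguous chunks of roughly
          equal particle count; returns the cell indices at which to cut."""
          cumulative = np.cumsum(particles_per_cell)
          targets = cumulative[-1] * np.arange(1, n_procs) / n_procs
          return np.searchsorted(cumulative, targets)

      cells = np.random.poisson(lam=20, size=1024)      # synthetic particle counts
      cuts = partition_by_weight(cells, 8)
      print([int(chunk.sum()) for chunk in np.split(cells, cuts)])   # per-processor load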

  19. A Design Study Investigating Augmented Reality and Photograph Annotation in a Digitalized Grossing Workstation.

    Science.gov (United States)

    Chow, Joyce A; Törnros, Martin E; Waltersson, Marie; Richard, Helen; Kusoffsky, Madeleine; Lundström, Claes F; Kurti, Arianit

    2017-01-01

    Within digital pathology, digitalization of the grossing procedure has been relatively underexplored in comparison to digitalization of pathology slides. Our investigation focuses on the interaction design of an augmented reality gross pathology workstation and refining the interface so that information and visualizations are easily recorded and displayed in a thoughtful view. The work in this project occurred in two phases: the first phase focused on implementation of an augmented reality grossing workstation prototype while the second phase focused on the implementation of an incremental prototype in parallel with a deeper design study. Our research institute focused on an experimental and "designerly" approach to create a digital gross pathology prototype as opposed to focusing on developing a system for immediate clinical deployment. Evaluation has not been limited to user tests and interviews, but rather key insights were uncovered through design methods such as " rapid ethnography " and " conversation with materials ". We developed an augmented reality enhanced digital grossing station prototype to assist pathology technicians in capturing data during examination. The prototype uses a magnetically tracked scalpel to annotate planned cuts and dimensions onto photographs taken of the work surface. This article focuses on the use of qualitative design methods to evaluate and refine the prototype. Our aims were to build on the strengths of the prototype's technology, improve the ergonomics of the digital/physical workstation by considering numerous alternative design directions, and to consider the effects of digitalization on personnel and the pathology diagnostics information flow from a wider perspective. A proposed interface design allows the pathology technician to place images in relation to its orientation, annotate directly on the image, and create linked information. The augmented reality magnetically tracked scalpel reduces tool switching though

  20. Binary black holes on a budget: simulations using workstations

    International Nuclear Information System (INIS)

    Marronetti, Pedro; Tichy, Wolfgang; Bruegmann, Bernd; Gonzalez, Jose; Hannam, Mark; Husa, Sascha; Sperhake, Ulrich

    2007-01-01

    Binary black hole simulations have traditionally been computationally very expensive: current simulations are performed on supercomputers involving dozens if not hundreds of processors, thus systematic studies of the parameter space of binary black hole encounters still seem prohibitive with current technology. Here we show how the multi-layered refinement level code BAM can be used on dual-processor workstations to simulate certain binary black hole systems. BAM, based on the moving punctures method, provides grid structures composed of boxes of increasing resolution near the centre of the grid. In the case of binaries, the highest resolution boxes are placed around each black hole and they track them in their orbits until the final merger, when a single set of levels surrounds the black hole remnant. This is particularly useful when simulating spinning black holes, since the gravitational field gradients are larger. We present simulations of binaries with equal-mass black holes with spins parallel to the binary axis and intrinsic magnitude of S/m² = 0.75. Our results compare favourably to those of previous simulations of this particular system. We show that the moving punctures method produces stable simulations at maximum spatial resolutions up to M/160 and for durations of up to the equivalent of 20 orbital periods

  1. Decomposition and parallelization strategies for solving large-scale MDO problems

    Energy Technology Data Exchange (ETDEWEB)

    Grauer, M.; Eschenauer, H.A. [Research Center for Multidisciplinary Analyses and Applied Structural Optimization, FOMAAS, Univ. of Siegen (Germany)

    2007-07-01

    In recent years, structural optimization has been recognized as a useful tool within the disciplines of engineering and economics. However, the optimization of large-scale systems or structures is impeded by an immense solution effort. This was the reason to start a joint research and development (R and D) project between the Institute of Mechanics and Control Engineering and the Information and Decision Sciences Institute within the Research Center for Multidisciplinary Analyses and Applied Structural Optimization (FOMAAS) on cluster computing for the parallel and distributed solution of multidisciplinary optimization (MDO) problems based on the OpTiX-Workbench. Here the focus of attention is put on coarse-grained parallelization and its implementation on clusters of workstations. A further point of emphasis was the development of a parallel decomposition strategy called PARDEC, for the solution of very complex optimization problems which cannot be solved efficiently by sequential integrated optimization. The use of the OpTiX-Workbench together with the FEM groundwater simulation system FEFLOW is shown for a special water management problem. (orig.)

  2. Treadmill workstations: the effects of walking while working on physical activity and work performance.

    Directory of Open Access Journals (Sweden)

    Avner Ben-Ner

    Full Text Available We conducted a 12-month-long experiment in a financial services company to study how the availability of treadmill workstations affects employees' physical activity and work performance. We enlisted sedentary volunteers, half of whom received treadmill workstations during the first two months of the study and the rest in the seventh month of the study. Participants could operate the treadmills at speeds of 0-2 mph and could use a standard chair-desk arrangement at will. (a) Weekly online performance surveys were administered to participants and their supervisors, as well as to all other sedentary employees and their supervisors. Using within-person statistical analyses, we find that overall work performance, quality and quantity of performance, and interactions with coworkers improved as a result of adoption of treadmill workstations. (b) Participants were outfitted with accelerometers at the start of the study. We find that daily total physical activity increased as a result of the adoption of treadmill workstations.

  3. Treadmill workstations: the effects of walking while working on physical activity and work performance.

    Science.gov (United States)

    Ben-Ner, Avner; Hamann, Darla J; Koepp, Gabriel; Manohar, Chimnay U; Levine, James

    2014-01-01

    We conducted a 12-month-long experiment in a financial services company to study how the availability of treadmill workstations affects employees' physical activity and work performance. We enlisted sedentary volunteers, half of whom received treadmill workstations during the first two months of the study and the rest in the seventh month of the study. Participants could operate the treadmills at speeds of 0-2 mph and could use a standard chair-desk arrangement at will. (a) Weekly online performance surveys were administered to participants and their supervisors, as well as to all other sedentary employees and their supervisors. Using within-person statistical analyses, we find that overall work performance, quality and quantity of performance, and interactions with coworkers improved as a result of adoption of treadmill workstations. (b) Participants were outfitted with accelerometers at the start of the study. We find that daily total physical activity increased as a result of the adoption of treadmill workstations.

  4. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations.

    Science.gov (United States)

    Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-05-01

    Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems.
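
    The load-balancing idea is to give each GPU a share of the lattice proportional to its capability. The short sketch below computes such a proportional split into z-slabs; the device weights are made-up values and the function is only an illustration of the principle, not the MPD-RDME implementation.

        // Split nz lattice slabs among devices in proportion to per-device weights
        // (e.g. measured throughput); any rounding remainder goes to the last device.
        #include <cstdio>
        #include <vector>

        std::vector<int> slab_sizes(int nz, const std::vector<double>& weight) {
            double total = 0.0;
            for (double w : weight) total += w;
            std::vector<int> size(weight.size());
            int assigned = 0;
            for (std::size_t d = 0; d < weight.size(); ++d) {
                size[d] = static_cast<int>(nz * weight[d] / total);
                assigned += size[d];
            }
            size.back() += nz - assigned;
            return size;
        }

        int main() {
            std::vector<int> s = slab_sizes(256, {1.0, 1.0, 0.5});  // two fast GPUs, one slow
            for (std::size_t d = 0; d < s.size(); ++d)
                std::printf("GPU %zu: %d slabs\n", d, s[d]);
            return 0;
        }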

  5. Physics and detector simulation facility Type O workstation specifications

    International Nuclear Information System (INIS)

    Chartrand, G.; Cormell, L.R.; Hahn, R.; Jacobson, D.; Johnstad, H.; Leibold, P.; Marquez, M.; Ramsey, B.; Roberts, L.; Scipioni, B.; Yost, G.P.

    1990-11-01

    This document specifies the requirements for the front-end network of workstations of a distributed computing facility. This facility will be needed to perform the physics and detector simulations for the design of Superconducting Super Collider (SSC) detectors, and other computations in support of physics and detector needs. A detailed description of the computer simulation facility is given in the overall system specification document. This document provides revised subsystem specifications for the network of monitor-less Type 0 workstations. The requirements specified in this document supersede the requirements given previously. In Section 2 a brief functional description of the facility and its use is provided. The list of detailed specifications (vendor requirements) is given in Section 3 and the qualifying requirements (benchmarks) are described in Section 4.

  6. BioPhotonics Workstation: a university tech transfer challenge

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Bañas, Andrew Rafael; Tauro, Sandeep

    2011-01-01

    Conventional optical trapping or tweezing is often limited in the achievable trapping range because of high numerical aperture and imaging requirements. To circumvent this, we are developing a next generation BioPhotonics Workstation platform that supports extension modules through a long working...

  7. Ergonomics in the computer workstation | Karoney | East African ...

    African Journals Online (AJOL)

    Background: Awareness of the effects of long-term computer use and the application of ergonomics in the computer workstation is important for preventing musculoskeletal disorders, eyestrain and psychosocial effects. Objectives: To determine the awareness of physical and psychological effects of prolonged computer usage ...

  8. Internet2-based 3D PET image reconstruction using a PC cluster

    International Nuclear Information System (INIS)

    Shattuck, D.W.; Rapela, J.; Asma, E.; Leahy, R.M.; Chatzioannou, A.; Qi, J.

    2002-01-01

    We describe an approach to fast iterative reconstruction from fully three-dimensional (3D) PET data using a network of Pentium III PCs configured as a Beowulf cluster. To facilitate the use of this system, we have developed a browser-based interface using Java. The system compresses PET data on the user's machine, sends these data over a network, and instructs the PC cluster to reconstruct the image. The cluster implements a parallelized version of our preconditioned conjugate gradient method for fully 3D MAP image reconstruction. We report on the speed-up factors using the Beowulf approach and the impact of communication latencies in the local cluster network and the network connection between the user's machine and our PC cluster. (author)
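
    In a parallelized preconditioned conjugate gradient reconstruction of this kind, each node typically holds a slice of the image, and the global inner products that determine the step sizes must come out identical on all nodes. The sketch below shows that one communication step with MPI; the names and sizes are assumptions for illustration, not the authors' reconstruction code.

        #include <mpi.h>
        #include <cstdio>
        #include <vector>

        // Global inner product over a vector distributed across nodes: each node sums
        // its own slice and MPI_Allreduce combines the partial sums on every node.
        double distributed_dot(const std::vector<double>& a, const std::vector<double>& b) {
            double local = 0.0;
            for (std::size_t i = 0; i < a.size(); ++i) local += a[i] * b[i];
            double global = 0.0;
            MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
            return global;   // identical on every node, so all nodes agree on CG step sizes
        }

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            std::vector<double> slice(1000, 1.0);          // this node's share of the image
            double norm2 = distributed_dot(slice, slice);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            if (rank == 0) std::printf("global ||x||^2 = %f\n", norm2);
            MPI_Finalize();
            return 0;
        }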

  9. Implementation of a high-resolution workstation for primary diagnosis of projection radiography images

    Science.gov (United States)

    Good, Walter F.; Herron, John M.; Maitz, Glenn S.; Gur, David; Miller, Stephen L.; Straub, William H.; Fuhrman, Carl R.

    1990-08-01

    We designed and implemented a high-resolution video workstation as the central hardware component in a comprehensive multi-project program comparing the use of digital and film modalities. The workstation utilizes a 1.8 GByte real-time disk (RCI) capable of storing 400 full-resolution images and two Tektronix (GMA251) display controllers with 19" monitors (GMA2O2). The display is configured in a portrait format with a resolution of 1536 x 2048 x 8 bit, and operates at 75 Hz in noninterlaced mode. Transmission of data through a 12-to-8-bit lookup table into the display controllers occurs at 20 MBytes/second (0.35 seconds per image). The workstation allows brightness (level) and contrast (window) to be manipulated easily with a trackball, and various processing options can be selected using push buttons. Display of any of the 400 images is also performed at 20 MBytes/sec (0.35 sec/image). A separate text display provides for the automatic display of patient history data and for a scoring form through which readers can interact with the system by means of a computer mouse. In addition, the workstation provides for the randomization of cases and for the immediate entry of diagnostic responses into a master database. Over the past year this workstation has been used for over 10,000 readings in diagnostic studies related to 1) image resolution; 2) film vs. soft display; 3) incorporation of patient history data into the reading process; and 4) usefulness of image processing.

  10. Parallel computation of automatic differentiation applied to magnetic field calculations

    International Nuclear Information System (INIS)

    Hinkins, R.L.; Lawrence Berkeley Lab., CA

    1994-09-01

    The author presents a parallelization of an accelerator physics application to simulate magnetic fields in three dimensions. The problem involves the evaluation of high-order derivatives with respect to two variables of a multivariate function. Automatic differentiation software had been used with some success, but the computation time was prohibitive. The implementation runs on several platforms, including a network of workstations using PVM, a MasPar using MPFortran, and a CM-5 using CMFortran. A careful examination of the code led to several optimizations that improved its serial performance by a factor of 8.7. The parallelization produced further improvements, especially on the MasPar with a speedup factor of 620. As a result, a problem that took six days on a SPARC 10/41 now runs in minutes on the MasPar, making it feasible for physicists at Lawrence Berkeley Laboratory to simulate larger magnets.

  11. The effect of dynamic workstations on the performance of various computer and office-based tasks

    NARCIS (Netherlands)

    Burford, E.M.; Botter, J.; Commissaris, D.; Könemann, R.; Hiemstra-Van Mastrigt, S.; Ellegast, R.P.

    2013-01-01

    The effect of different workstations, conventional and dynamic, on different types of performance measures for several different office and computer-based tasks was investigated in this research paper. The two dynamic workstations assessed were the Lifespan Treadmill Desk and the RightAngle

  12. Users Guide to VSMOKE-GIS for Workstations

    Science.gov (United States)

    Mary F. Harms; Leonidas G. Lavdas

    1997-01-01

    VSMOKE-GIS was developed to help prescribed burners in the national forests of the Southeastern United States visualize smoke dispersion and to plan prescribed burns. Developed for use on workstations, this decision-support system consists of a graphical user interface, written in Arc/Info Arc Macro Language, and is linked to a FORTRAN computer program. VSMOKE-GIS...

  13. Post-deployment usability evaluation of a radiology workstation

    NARCIS (Netherlands)

    Jorritsma, Wiard; Cnossen, Fokie; Dierckx, Rudi; Oudkerk, Matthijs; van Ooijen, Peter

    2015-01-01

    Objective To evaluate the usability of a radiology workstation after deployment in a hospital. Significance In radiology, it is difficult to perform valid pre-deployment usability evaluations due to the heterogeneity of the user group, the complexity of the radiological workflow, and the complexity

  14. Interpretation of digital breast tomosynthesis: preliminary study on comparison with picture archiving and communication system (PACS) and dedicated workstation.

    Science.gov (United States)

    Kim, Young Seon; Chang, Jung Min; Yi, Ann; Shin, Sung Ui; Lee, Myung Eun; Kim, Won Hwa; Cho, Nariya; Moon, Woo Kyung

    2017-08-01

    To compare the diagnostic accuracy and efficiency in the interpretation of digital breast tomosynthesis (DBT) images using a picture archiving and communication system (PACS) and a dedicated workstation. 97 DBT images obtained for screening or diagnostic purposes were stored in both a workstation and a PACS and evaluated in combination with digital mammography by three independent radiologists retrospectively. Breast Imaging-Reporting and Data System final assessments and likelihood of malignancy (%) were assigned and the interpretation time when using the workstation and PACS was recorded. Receiver operating characteristic curve analysis, sensitivities and specificities were compared with histopathological examination and follow-up data as a reference standard. Area under the receiver operating characteristic curve values for cancer detection (0.839 vs 0.815, p = 0.6375) and sensitivity (81.8% vs 75.8%, p = 0.2188) showed no statistically significant differences between the workstation and PACS. However, specificity was significantly higher when analysing on the workstation than when using PACS (83.7% vs 76.9%, p = 0.009). When evaluating DBT images using PACS, only one case was deemed necessary to be reanalysed using the workstation. The mean time to interpret DBT images on PACS (1.68 min/case) was significantly longer than that to interpret on the workstation (1.35 min/case) (p < 0.0001). Interpretation of DBT images using PACS showed comparable diagnostic performance to a dedicated workstation, even though it required a longer reading time. Advances in knowledge: Interpretation of DBT images using PACS is an alternative to evaluate the images when a dedicated workstation is not available.

  15. Graphics metafile interface to ARAC emergency response models for remote workstation study

    International Nuclear Information System (INIS)

    Lawver, B.S.

    1985-01-01

    The Department of Energy's Atmospheric Response Advisory Capability models are executed on computers at a central computer center with the output distributed to accident advisors in the field. The output of these atmospheric diffusion models is generated as contoured isopleths of concentrations. When these isopleths are overlaid with local geography, they become a useful tool for the accident site advisor. ARAC has developed a workstation that is located at potential accident sites. The workstation allows the accident advisor to view color plots of the model results, scale those plots and print black-and-white hardcopy of the model results. The graphics metafile, also known as the Virtual Device Metafile (VDM), allows the models to generate a single device-independent output file that is partitioned into geography, isopleths and labeling information. The metafile is a very compact data storage technique that is output-device independent. The metafile frees the model from generating output for all known graphic devices and from being rerun for additional graphic devices. With the partitioned metafile, ARAC can transmit to the remote workstation the isopleths and labeling for each model. The geography database rarely changes and can be transmitted only when needed. This paper describes the important features of the remote workstation and how these features are supported by the device-independent graphics metafile.

  16. Ergonomics standards and guidelines for computer workstation design and the impact on users' health - a review.

    Science.gov (United States)

    Woo, E H C; White, P; Lai, C W K

    2016-03-01

    This paper presents an overview of global ergonomics standards and guidelines for design of computer workstations, with particular focus on their inconsistency and associated health risk impact. Overall, considerable disagreements were found in the design specifications of computer workstations globally, particularly in relation to the results from previous ergonomics research and the outcomes from current ergonomics standards and guidelines. To cope with the rapid advancement in computer technology, this article provides justifications and suggestions for modifications in the current ergonomics standards and guidelines for the design of computer workstations. Practitioner Summary: A research gap exists in ergonomics standards and guidelines for computer workstations. We explore the validity and generalisability of ergonomics recommendations by comparing previous ergonomics research through to recommendations and outcomes from current ergonomics standards and guidelines.

  17. Workplace sitting and height-adjustable workstations: a randomized controlled trial.

    Science.gov (United States)

    Neuhaus, Maike; Healy, Genevieve N; Dunstan, David W; Owen, Neville; Eakin, Elizabeth G

    2014-01-01

    Desk-based office employees sit for most of their working day. To address excessive sitting as a newly identified health risk, best practice frameworks suggest a multi-component approach. However, these approaches are resource-intensive and knowledge about their impact is limited. To compare the efficacy of a multi-component intervention to reduce workplace sitting time with that of a height-adjustable workstations-only intervention and a comparison group (usual practice). Three-arm quasi-randomized controlled trial in three separate administrative units of the University of Queensland, Brisbane, Australia. Data were collected between January and June 2012 and analyzed the same year. Desk-based office workers aged 20-65 (multi-component intervention, n=16; workstations-only, n=14; comparison, n=14). The multi-component intervention comprised installation of height-adjustable workstations and organizational-level (management consultation, staff education, manager e-mails to staff) and individual-level (face-to-face coaching, telephone support) elements. Workplace sitting time (minutes/8-hour workday) was assessed objectively via activPAL3 devices worn for 7 days at baseline and at 3 months (end of intervention). At baseline, the mean proportion of workplace sitting time was approximately 77% across all groups (multi-component group 366 minutes/8 hours [SD=49]; workstations-only group 373 minutes/8 hours [SD=36]; comparison 365 minutes/8 hours [SD=54]). Following the intervention and relative to the comparison group, workplace sitting time in the multi-component group was reduced by 89 minutes/8-hour workday (95% CI=-130, -47 minutes; p …) … workplace sitting. These findings may have important practical and financial implications for workplaces targeting sitting time reductions. Australian New Zealand Clinical Trials Registry 00363297. © 2013 American Journal of Preventive Medicine. Published by American Journal of Preventive Medicine. All rights reserved.

  18. Parallel Evolutionary Optimization of Multibody Systems with Application to Railway Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Eberhard, Peter [University of Erlangen-Nuremberg, Institute of Applied Mechanics (Germany)], E-mail: eberhard@ltm.uni-erlangen.de; Dignath, Florian [University of Stuttgart, Institute B of Mechanics (Germany)], E-mail: fd@mechb.uni-stuttgart.de; Kuebler, Lars [University of Erlangen-Nuremberg, Institute of Applied Mechanics (Germany)], E-mail: kuebler@ltm.uni-erlangen.de

    2003-03-15

    The optimization of multibody systems usually requires many costly criteria computations since the equations of motion must be evaluated by numerical time integration for each considered design. For actively controlled or flexible multibody systems additional difficulties arise as the criteria may contain non-differentiable points or many local minima. Therefore, in this paper a stochastic evolution strategy is used in combination with parallel computing in order to reduce the computation times whilst keeping the inherent robustness. For the parallelization a master-slave approach is used in a heterogeneous workstation/PC cluster. The pool-of-tasks concept is applied in order to deal with the frequently changing workloads of different machines in the cluster. In order to analyze the performance of the parallel optimization method, the suspension of an ICE passenger coach, modeled as an elastic multibody system, is optimized simultaneously with regard to several criteria including vibration damping and a criterion related to safety against derailment. The iterative and interactive nature of a typical optimization process for technical systems is emphasized.

  19. Parallel Evolutionary Optimization of Multibody Systems with Application to Railway Dynamics

    International Nuclear Information System (INIS)

    Eberhard, Peter; Dignath, Florian; Kuebler, Lars

    2003-01-01

    The optimization of multibody systems usually requires many costly criteria computations since the equations of motion must be evaluated by numerical time integration for each considered design. For actively controlled or flexible multibody systems additional difficulties arise as the criteria may contain non-differentiable points or many local minima. Therefore, in this paper a stochastic evolution strategy is used in combination with parallel computing in order to reduce the computation times whilst keeping the inherent robustness. For the parallelization a master-slave approach is used in a heterogeneous workstation/PC cluster. The pool-of-tasks concept is applied in order to deal with the frequently changing workloads of different machines in the cluster. In order to analyze the performance of the parallel optimization method, the suspension of an ICE passenger coach, modeled as an elastic multibody system, is optimized simultaneously with regard to several criteria including vibration damping and a criterion related to safety against derailment. The iterative and interactive nature of a typical optimization process for technical systems is emphasized

  20. Stampi: a message passing library for distributed parallel computing. User's guide, second edition

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Koide, Hiroshi; Takemiya, Hiroshi

    2000-02-01

    A new message passing library, Stampi, has been developed to enable computations that span different kinds of parallel computers, using MPI (Message Passing Interface) as the single interface for communication. Stampi is based on the MPI-2 specification, and it realizes dynamic process creation on different machines and communication with the spawned processes within the scope of MPI semantics. The main features of Stampi are summarized as follows: (i) automatic switching between external and internal communications, (ii) message routing/relaying with a routing module, (iii) dynamic process creation, (iv) support for two types of connection, Master/Slave and Client/Server, (v) support for communication with Java applets. Vendors have implemented MPI libraries as closed systems within a single parallel machine and have not supported both functions, namely process creation on and communication with external machines. Stampi supports both functions and thus enables distributed parallel computing. Stampi has currently been implemented on COMPACS (COMplex PArallel Computer System) introduced at CCSE, which comprises five parallel computers and one graphics workstation, as well as on eight other kinds of parallel machines, for a total of fourteen systems. Stampi provides MPI communication functionality on all of them. This report mainly describes the usage of Stampi. (author)
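
    The dynamic process creation that Stampi exposes follows the MPI-2 model. Below is a generic MPI-2 fragment, not Stampi code, showing how a parent spawns workers and obtains an intercommunicator; the executable name worker_binary is a hypothetical placeholder.

        // Spawn four worker processes from a running MPI job (MPI-2 dynamic
        // process creation) and obtain an intercommunicator to talk to them.
        #include <mpi.h>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            MPI_Comm children;
            int spawn_errors[4];
            MPI_Comm_spawn("worker_binary", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                           0, MPI_COMM_WORLD, &children, spawn_errors);
            // parent and children can now exchange messages through the
            // intercommunicator "children", e.g. with MPI_Send / MPI_Recv
            MPI_Comm_free(&children);
            MPI_Finalize();
            return 0;
        }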

  1. Comparison of the temperature and humidity in the anesthetic breathing circuit among different anesthetic workstations: Updated guidelines for reporting parallel group randomized trials.

    Science.gov (United States)

    Choi, Yoon Ji; Min, Sam Hong; Park, Jeong Jun; Cho, Jang Eun; Yoon, Seung Zhoo; Yoon, Suk Min

    2017-06-01

    For patients undergoing general anesthesia, adequate warming and humidification of the inspired gases is very important. The aim of this study was to evaluate the differences in the heat and moisture content of the inspired gases with low-flow anesthesia using 4 different anesthesia machines. The patients were divided into 11 groups according to the anesthesia machine used (Ohmeda Excel and Avance; Dräger Cato and Primus) and the fresh gas flow (FGF) rate (0.5, 1, and 4 L/min). The temperature and absolute humidity of the inspired gas in the inspiratory limbs were measured at 5, 10, 15, 30, 45, 60, 75, 90, 105, and 120 minutes in 9 patients scheduled for total thyroidectomy or cervical spine operation in each group. The Excel, Avance, Cato, and Primus machines did not show statistically significant changes in the inspired gas temperature over time within each group at the various FGFs. They did, however, show statistically significant changes in the absolute humidity of the inspired gas over time within each group with low-FGF anesthesia (P …) … the absolute humidity of the inspired gas over time within each group with an FGF of 4 L/min (P …) … The humidities of the inspired gas for all anesthesia machines were lower than the recommended values. There were statistical differences in the provision of humidity among the different anesthesia workstations. The Cato and Primus workstations were superior to the Excel and Avance. However, even these were unsatisfactory in humans. Therefore, additional devices that provide inspired gases with adequate heat and humidity are needed for patients undergoing general anesthetic procedures.

  2. A real-time monitoring/emergency response modeling workstation for a tritium facility

    International Nuclear Information System (INIS)

    Lawver, B.S.; Sims, J.M.; Baskett, R.L.

    1993-07-01

    At Lawrence Livermore National Laboratory (LLNL) we developed a real-time system to monitor two stacks on our tritium handling facility. The monitors transmit the stack data to a workstation which computes a 3D numerical model of atmospheric dispersion. The workstation also collects surface and upper air data from meteorological towers and a sodar. The complex meteorological and terrain setting in the Livermore Valley demands more sophisticated resolution of the three-dimensional structure of the atmosphere to reliably calculate plume dispersion than afforded by Gaussian models. We experience both mountain valley and sea breeze flows. To address these complexities, we have implemented the three-dimensional diagnostic MATHEW mass-adjusted wind field and ADPIC particle-in-cell dispersion models on the workstation for use in real-time emergency response modeling. Both MATHEW and ADPIC have shown their utility in a variety of complex settings over the last 15 years within the Department of Energy's Atmospheric Release Advisory Capability (ARAC[1,2]) project

  3. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    Science.gov (United States)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, the task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.

  4. Achieving high performance in numerical computations on RISC workstations and parallel systems

    Energy Technology Data Exchange (ETDEWEB)

    Goedecker, S. [Max-Planck Inst. for Solid State Research, Stuttgart (Germany)]; Hoisie, A. [Los Alamos National Lab., NM (United States)]

    1997-08-20

    The nominal peak speeds of both serial and parallel computers are rising rapidly. At the same time, however, it is becoming increasingly difficult to extract a significant fraction of this high peak speed from modern computer architectures. In this tutorial the authors give scientists and engineers involved in numerically demanding calculations and simulations the basic knowledge necessary to write reasonably efficient programs. The basic principles are rather simple and the possible rewards large. Taking optimization techniques related to the computer architecture into account when writing a program can speed it up significantly, often by factors of 10-100. Optimizing a program can therefore be a much better solution than buying a faster computer. If a few basic optimization principles are applied during program development, the additional time needed to obtain an efficient program is practically negligible. In-depth optimization is usually only needed for a few subroutines or kernels, so the effort involved is acceptable.
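
    One example of the architecture-aware optimizations the tutorial refers to is cache blocking. The sketch below tiles a matrix-matrix product so the working set stays in cache; the tile size of 32 and the example itself are our assumptions, not code taken from the tutorial.

        // Blocked (tiled) matrix multiply: operating on bs x bs tiles keeps the
        // working set in cache and avoids repeatedly streaming whole rows of B
        // from main memory, which is where untiled loops lose their time.
        #include <algorithm>
        #include <cstdio>
        #include <vector>

        void matmul_blocked(const std::vector<double>& A, const std::vector<double>& B,
                            std::vector<double>& C, int n) {
            const int bs = 32;                               // assumed tile size, tune per machine
            for (int ii = 0; ii < n; ii += bs)
                for (int kk = 0; kk < n; kk += bs)
                    for (int jj = 0; jj < n; jj += bs)
                        for (int i = ii; i < std::min(ii + bs, n); ++i)
                            for (int k = kk; k < std::min(kk + bs, n); ++k)
                                for (int j = jj; j < std::min(jj + bs, n); ++j)
                                    C[i * n + j] += A[i * n + k] * B[k * n + j];
        }

        int main() {
            int n = 64;
            std::vector<double> A(n * n, 1.0), B(n * n, 1.0), C(n * n, 0.0);
            matmul_blocked(A, B, C, n);
            std::printf("C[0][0] = %f\n", C[0]);             // 64.0 for all-ones inputs
            return 0;
        }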

  5. Criticality codes migration to workstations at the Hanford site

    International Nuclear Information System (INIS)

    Miller, E.M.

    1993-01-01

    Westinghouse Hanford Company, the Hanford Site Operations contractor in Richland, Washington, currently runs criticality codes on the Cray X-MP EA/232 computer but has recommended that the US Department of Energy (DOE-Richland) replace the Cray with more economical workstations.

  6. Workstation Table Engineering Model Design, Development, Fabrication, and Testing

    Science.gov (United States)

    2012-05-01

    This research effort is focused on providing a workstation table design that will reduce the risk of occupant injuries due to secondary impacts and to compartmentalize the occupants to prevent impacts with other objects and/or passengers seated acros...

  7. Parallel fuzzy connected image segmentation on GPU.

    Science.gov (United States)

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K; Miller, Robert W

    2011-07-01

    Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implementation on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU. A dramatic improvement in speed for both tasks is achieved as a result. Our experiments based on three data sets of small, medium, and large data size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 24.4x, 18.1x, and 10.3x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the implementation of the algorithm on the CPU, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. The authors developed a parallel algorithm of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than both clusters of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set.
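
    The affinity computation is data-parallel: each voxel's affinity to its neighbours depends only on local intensities, which is what makes a one-thread-per-voxel GPU kernel possible. The sketch below shows that structure as an ordinary parallel CPU loop with a simplified intensity-based affinity; it is our illustration, not the authors' CUDA kernels.

        // Affinity of each voxel to its +x neighbour from intensity similarity alone.
        // Every iteration is independent, which is what lets a GPU version assign
        // one thread per voxel; here the loop is simply run in parallel on the CPU.
        #include <cmath>
        #include <cstdio>
        #include <vector>

        void affinity_x(const std::vector<float>& img, std::vector<float>& aff,
                        int nx, int ny, int nz, float sigma) {
            #pragma omp parallel for
            for (int z = 0; z < nz; ++z)
                for (int y = 0; y < ny; ++y)
                    for (int x = 0; x + 1 < nx; ++x) {
                        int i = (z * ny + y) * nx + x;
                        float d = img[i] - img[i + 1];
                        aff[i] = std::exp(-(d * d) / (2.0f * sigma * sigma));
                    }
        }

        int main() {
            int nx = 64, ny = 64, nz = 64;
            std::vector<float> img(nx * ny * nz, 100.0f), aff(img.size(), 0.0f);
            affinity_x(img, aff, nx, ny, nz, 10.0f);
            std::printf("affinity of first voxel pair: %f\n", aff[0]);  // 1.0 for a flat image
            return 0;
        }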

  8. Studies on radio-diagnosis workstations

    International Nuclear Information System (INIS)

    Niguet, A.

    2008-01-01

    Radio-diagnosis ranges from mammography to interventional radiology, accounts for the great majority of medical examinations, and is therefore the main source of exposure for the population. The author gives an overview of methods for workstation assessment, mainly based on the dose-area product. She indicates the factors affecting the radiation quantity and discusses the influence of the type of examination. Measurements enable workers to be classified, an adapted dosimetry follow-up to be implemented, working areas to be delimited, collective and individual protections to be put in place, and recommendations to be drafted. Results obtained for a cardiologist are presented.

  9. Flexible structure control experiments using a real-time workstation for computer-aided control engineering

    Science.gov (United States)

    Stieber, Michael E.

    1989-01-01

    A Real-Time Workstation for Computer-Aided Control Engineering has been developed jointly by the Communications Research Centre (CRC) and Ruhr-Universitaet Bochum (RUB), West Germany. The system is presently used for the development and experimental verification of control techniques for large space systems with significant structural flexibility. The Real-Time Workstation is essentially an implementation of RUB's extensive Computer-Aided Control Engineering package KEDDC on an INTEL micro-computer running under the RMS real-time operating system. The portable system supports system identification, analysis, control design and simulation, as well as the immediate implementation and test of control systems. The Real-Time Workstation is currently being used by CRC to study control/structure interaction on a ground-based structure called DAISY, whose design was inspired by a reflector antenna. DAISY emulates the dynamics of a large flexible spacecraft with the following characteristics: rigid body modes, many clustered vibration modes with low frequencies and extremely low damping. The Real-Time Workstation was found to be a very powerful tool for experimental studies, supporting control design and simulation, and conducting and evaluating tests within one integrated environment.

  10. A design study investigating augmented reality and photograph annotation in a digitalized grossing workstation

    Directory of Open Access Journals (Sweden)

    Joyce A Chow

    2017-01-01

    Full Text Available Context: Within digital pathology, digitalization of the grossing procedure has been relatively underexplored in comparison to digitalization of pathology slides. Aims: Our investigation focuses on the interaction design of an augmented reality gross pathology workstation and refining the interface so that information and visualizations are easily recorded and displayed in a thoughtful view. Settings and Design: The work in this project occurred in two phases: the first phase focused on implementation of an augmented reality grossing workstation prototype while the second phase focused on the implementation of an incremental prototype in parallel with a deeper design study. Subjects and Methods: Our research institute focused on an experimental and “designerly” approach to create a digital gross pathology prototype as opposed to focusing on developing a system for immediate clinical deployment. Statistical Analysis Used: Evaluation has not been limited to user tests and interviews, but rather key insights were uncovered through design methods such as “rapid ethnography” and “conversation with materials”. Results: We developed an augmented reality enhanced digital grossing station prototype to assist pathology technicians in capturing data during examination. The prototype uses a magnetically tracked scalpel to annotate planned cuts and dimensions onto photographs taken of the work surface. This article focuses on the use of qualitative design methods to evaluate and refine the prototype. Our aims were to build on the strengths of the prototype's technology, improve the ergonomics of the digital/physical workstation by considering numerous alternative design directions, and to consider the effects of digitalization on personnel and the pathology diagnostics information flow from a wider perspective. A proposed interface design allows the pathology technician to place images in relation to its orientation, annotate directly on the

  11. The integrated workstation: A common, consistent link between nuclear plant personnel and plant information and computerized resources

    International Nuclear Information System (INIS)

    Wood, R.T.; Knee, H.E.; Mullens, J.A.; Munro, J.K. Jr.; Swail, B.K.; Tapp, P.A.

    1993-01-01

    The increasing use of computer technology in the US nuclear power industry has greatly expanded the capability to obtain, analyze, and present data about the plant to station personnel. Data concerning a power plant's design, configuration, operational and maintenance histories, and current status, and the information that can be derived from them, provide the link between the plant and plant staff. It is through this information bridge that operations, maintenance and engineering personnel understand and manage plant performance. However, it is necessary to transform the vast quantity of data available from various computer systems and across communications networks into clear, concise, and coherent information. In addition, it is important to organize this information into a consolidated, structured form within an integrated environment so that various users throughout the plant have ready access at their local station to knowledge necessary for their tasks. Thus, integrated workstations are needed to provide the required information and proper software tools, in a manner that can be easily understood and used, to the proper users throughout the plant. An effort is underway at the Oak Ridge National Laboratory to address this need by developing Integrated Workstation functional requirements and implementing a limited-scale prototype demonstration. The Integrated Workstation requirements will define a flexible, expandable computer environment that permits a tailored implementation of workstation capabilities and facilitates future upgrades to add enhanced applications. The functionality to be supported by the integrated workstation and the inherent capabilities to be provided by the workstation environment will be described. In addition, general technology areas which are to be addressed in the Integrated Workstation functional requirements will be discussed.

  12. The driver workstation in commercial vehicles; Ergonomie und Design von Fahrerarbeitsplaetzen in Nutzfahrzeugen

    Energy Technology Data Exchange (ETDEWEB)

    Kraus, W. [HAW-Hamburg (Germany)]

    2003-07-01

    Nowadays, ergonomics and design are quality factors and indispensable elements of commercial vehicle design and development. Whereas a vehicle's appearance, i.e. its outside design, produces fascination and image, the design of its passenger cell focuses entirely on drivers and their tasks. Today, passenger-cell design and the ergonomics of driver workstations in commercial vehicles are clearly becoming more and more important. This article concentrates above all on defining commercial vehicle drivers, which, within the scope of research projects on coach-driver workstations, has provided new insight into the design of driver workstations. In light of the deficits determined, the research project mainly focused on designing driver workstations which were in line with the latest findings in ergonomics and human engineering. References to the methodology of driver-workstation optimization seem important in this context. The aforementioned innovations in the passenger cells of commercial vehicles will be explained and described by means of topical and practical examples. (orig.) [German] Ergonomics and design are today quality factors and an indispensable part of commercial vehicle development. While the outward appearance, the exterior design of the vehicle, creates fascination and image, the interior design is geared largely toward the operators and their work tasks. The interior design and the ergonomics of driver workstations in commercial vehicles are currently gaining clearly in importance. The article deals in particular with the definition of the operators of commercial vehicles, which, within the scope of the research project on the driver workstation in coaches, led to new insights into the design of such workstations. In line with the deficits identified, the study concentrates on a design concept for the driver workstation based on ergonomic and human-engineering findings.

  13. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    Science.gov (United States)

    Long, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute-intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office Ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a type of trial-and-error search technique guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high-quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time-consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency
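
    A minimal MPI sketch of the master-slave (processor farm) scheme just described is given below; the one-gene-per-rank population and the toy fitness function are illustrative assumptions, not the circuit-design GA of this record.

        // One generation of a master-slave GA: the master scatters candidate
        // genomes, every rank evaluates its own fitness (the expensive part),
        // and the scores are gathered back for the genetic operators.
        #include <mpi.h>
        #include <algorithm>
        #include <cstdio>
        #include <vector>

        static double fitness(double gene) { return -(gene - 1.5) * (gene - 1.5); }

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            std::vector<double> population(size), scores(size);
            if (rank == 0)
                for (int i = 0; i < size; ++i) population[i] = 0.1 * i;  // random init in practice

            double gene = 0.0, score = 0.0;
            MPI_Scatter(population.data(), 1, MPI_DOUBLE, &gene, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
            score = fitness(gene);                                       // parallel evaluation
            MPI_Gather(&score, 1, MPI_DOUBLE, scores.data(), 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

            if (rank == 0) {
                // selection, recombination, and mutation would run here on the master
                std::printf("best fitness this generation: %f\n",
                            *std::max_element(scores.begin(), scores.end()));
            }
            MPI_Finalize();
            return 0;
        }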

  14. Making the PACS workstation a browser of image processing software: a feasibility study using inter-process communication techniques.

    Science.gov (United States)

    Wang, Chunliang; Ritter, Felix; Smedby, Orjan

    2010-07-01

    To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file while the image processing software is running in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image processing software developer does not need detailed information about the other system, but will still be able to achieve seamless integration between the two systems, and the IPC procedure is totally transparent to the final user. A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image-processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image processing networks. Image data transfer using shared memory, combined with communication based on IPC techniques, is an appealing method that allows PACS workstation developers and image processing software developers to cooperate while focusing on their different interests.
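
    A generic illustration of the shared-memory side of such an IPC channel on a POSIX system is sketched below; the segment name, its size, and the single-slice payload are invented for the example and are not the OsiriX/MeVisLab protocol.

        // One process creates a named shared-memory segment and writes pixel data;
        // a cooperating process can map the same name to read it without copying
        // the data through a socket or pipe.
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>
        #include <cstring>

        int main() {
            const char* name = "/pacs_image_buffer";          // hypothetical segment name
            const size_t bytes = 512 * 512 * sizeof(short);   // one 512x512 16-bit slice

            int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
            if (fd < 0) return 1;
            if (ftruncate(fd, bytes) != 0) return 1;

            void* buf = mmap(nullptr, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (buf == MAP_FAILED) return 1;

            std::memset(buf, 0, bytes);     // the workstation would copy pixel data here
            munmap(buf, bytes);
            close(fd);
            shm_unlink(name);               // remove the segment once both sides are done
            return 0;
        }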

  15. A user interface on networked workstations for MFTF-B plasma diagnostic instruments

    International Nuclear Information System (INIS)

    Balch, T.R.; Renbarger, V.L.

    1986-01-01

    A network of Sun-2/170 workstations is used to provide an interface to the MFTF-B Plasma Diagnostics System at Lawrence Livermore National Laboratory. The Plasma Diagnostics System (PDS) is responsible for control of MFTF-B plasma diagnostic instrumentation. An EtherNet Local Area Network links the workstations to a central multiprocessing system which furnishes data processing, data storage and control services for PDS. These workstations permit a physicist to command data acquisition, data processing, instrument control, and display of results. The interface is implemented as a metaphorical desktop, which helps the operator form a mental model of how the system works. As on a real desktop, functions are provided by sheets of paper (windows on a CRT screen) called worksheets. The worksheets may be invoked by pop-up menus and may be manipulated with a mouse. These worksheets are actually tasks that communicate with other tasks running in the central computer system. By making entries in the appropriate worksheet, a physicist may specify data acquisition or processing, control a diagnostic, or view a result

  16. Optimizing the pathology workstation "cockpit": Challenges and solutions

    Directory of Open Access Journals (Sweden)

    Elizabeth A Krupinski

    2010-01-01

    Full Text Available The 21st century has brought numerous changes to the clinical reading (i.e., image or virtual pathology slide interpretation) environment of pathologists, and it will continue to change even more dramatically as information and communication technologies (ICTs) become more widespread in the integrated healthcare enterprise. The extent to which these changes impact the practicing pathologist differs as a function of the technology under consideration, but digital "virtual slides" and the viewing of images on computer monitors instead of glass slides through a microscope clearly represent a significant change in the way that pathologists extract information from these images and render diagnostic decisions. One of the major challenges facing pathologists in this new era is how to best optimize the pathology workstation, the reading environment and the new and varied types of information available in order to ensure efficient and accurate processing of this information. Although workstations can be stand-alone units with images imported via external storage devices, this scenario is becoming less common as pathology departments connect to information highways within their hospitals and to external sites. Picture Archiving and Communications systems are no longer confined to radiology departments but are serving the entire integrated healthcare enterprise, including pathology. In radiology, the workstation is often referred to as the "cockpit" with a "digital dashboard" and the reading room as the "control room." Although pathology has yet to "go digital" to the extent that radiology has, lessons derived from radiology reading "cockpits" can be quite valuable in setting up the digital pathology reading room. In this article, we describe the concept of the digital dashboard and provide some recent examples of informatics-based applications that have been shown to improve the workflow and quality in digital reading environments.

  17. The image-interpretation-workstation of the future: lessons learned

    Science.gov (United States)

    Maier, S.; van de Camp, F.; Hafermann, J.; Wagner, B.; Peinsipp-Byma, E.; Beyerer, J.

    2017-05-01

    In recent years, professionally used workstations have become increasingly complex, and multi-monitor systems are more and more common. Novel interaction techniques like gesture recognition have been developed but are used mostly for entertainment and gaming purposes. These human-computer interfaces are not yet widely used in professional environments, where they could greatly improve the user experience. To approach this problem, we combined existing tools in our image-interpretation-workstation of the future, a multi-monitor workplace composed of four screens. Each screen is dedicated to a specific task in the image interpreting process: a geo-information system to geo-reference the images and provide a spatial reference for the user, an interactive recognition support tool, an annotation tool and a reporting tool. To further support the complex task of image interpreting, self-developed interaction systems for head-pose estimation and hand tracking were used in addition to more common technologies like touchscreens, face identification and speech recognition. A set of experiments was conducted to evaluate the usability of the different interaction systems. Two typical extensive image-interpreting tasks were devised and approved by military personnel. They were then tested with a current setup of an image interpreting workstation using only keyboard and mouse against our image-interpretation-workstation of the future. To get a more detailed look at the usefulness of the interaction techniques in a multi-monitor setup, the hand tracking, head-pose estimation and face recognition were further evaluated using tests inspired by everyday tasks. The results of the evaluation and the discussion are presented in this paper.

  18. An efficient, interactive, and parallel system for biomedical volume analysis on a standard workstation

    International Nuclear Information System (INIS)

    Rebuffel, V.; Gonon, G.

    1992-01-01

    A software package is presented that can be employed for any 3D imaging modality: X-ray tomography, emission tomography, or magnetic resonance imaging. This system uses a hierarchical data structure, named Octree, that naturally allows a multi-resolution approach. The well-known problems of such an indeterministic representation, especially neighbor finding, have been solved. Several volume-processing algorithms have been developed using these techniques and an optimal data storage scheme for the Octree. A parallel implementation was chosen that is compatible with the constraints of the Octree base and the various algorithms. (authors) 4 refs., 3 figs., 1 tab

  19. Utilization of a multimedia PACS workstation for surgical planning of epilepsy

    Science.gov (United States)

    Soo Hoo, Kent; Wong, Stephen T.; Hawkins, Randall A.; Knowlton, Robert C.; Laxer, Kenneth D.; Rowley, Howard A.

    1997-05-01

    Surgical treatment of temporal lobe epilepsy requires the localization of the epileptogenic zone for surgical resection. Currently, clinicians utilize electroencephalography, various neuroimaging modalities, and psychological tests together to determine the location of this zone. We investigate how a multimedia neuroimaging workstation built on top of the UCSF Picture Archiving and Communication System can be used to aid surgical planning for epilepsy and related brain diseases. This usage demonstrates the ability of the workstation to retrieve image and textual data from PACS and other image sources, register multimodality images, visualize and render 3D data sets, analyze images, generate new image and text data from the analysis, and organize all data in a relational database management system.

  20. An approach to develop a PSA workstation in KAERI

    International Nuclear Information System (INIS)

    Kim, T. W.; Han, S. H.; Park, C. K.

    1995-01-01

    This paper describes three kinds of effort toward the development of a PSA workstation at KAERI: development of a PSA tool (KIRAP), reliability database development, and living PSA tool development. Korea has 9 nuclear power plants (NPPs) in operation and 9 NPPs under design or construction. For the NPPs recently constructed or designed, probabilistic safety assessments (PSAs) have been performed in accordance with Government requirements. For these PSAs, the MS-DOS version of KIRAP has been used. For consistent data management and ease of information handling in PSA, a PSA workstation, KIRAP-Win, is under development for the Windows environment. For the reliability database on component failure rates, human error rates, and common cause failure rates, data used in international PSAs and reliability data handbooks are collected and processed for use in the PSAs of new Korean plants. Finally, an effort to develop a living PSA tool at KAERI based on the dynamic PSA concept is described.

  1. MC++: A parallel, portable, Monte Carlo neutron transport code in C++

    International Nuclear Information System (INIS)

    Lee, S.R.; Cummings, J.C.; Nolen, S.D.

    1997-01-01

    MC++ is an implicit multi-group Monte Carlo neutron transport code written in C++ and based on the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, SMPs, and clusters of UNIX workstations. MC++ is being developed to provide transport capabilities to the Accelerated Strategic Computing Initiative (ASCI). It is also intended to form the basis of the first transport physics framework (TPF), which is a C++ class library containing appropriate abstractions, objects, and methods for the particle transport problem. The transport problem is briefly described, as well as the current status and algorithms in MC++ for solving the transport equation. The alpha version of the POOMA class library is also discussed, along with the implementation of the transport solution algorithms using POOMA. Finally, a simple test problem is defined and performance and physics results from this problem are discussed on a variety of platforms

  2. [Influence of different lighting levels at workstations with video display terminals on operators' work efficiency].

    Science.gov (United States)

    Janosik, Elzbieta; Grzesik, Jan

    2003-01-01

    The aim of this work was to evaluate the influence of different lighting levels at workstations with video display terminals (VDTs) on the course of the operators' visual work, and to determine the optimal levels of lighting at VDT workstations. For two kinds of job (entering figures from a typescript and editing text displayed on the screen), the work capacity, the degree of visual strain and the operators' subjective symptoms were determined for four lighting levels (200, 300, 500 and 750 lx). It was found that work at VDT workstations may overload the visual system and cause eye complaints as well as a reduction in accommodation or convergence strength. It was also noted that editing text displayed on the screen is more burdensome for operators than entering figures from a typescript. Moreover, the results showed that the lighting at VDT workstations should be higher than 200 lx, that 300 lx makes the work conditions most comfortable when entering figures from a typescript, and that 500 lx does so when editing text displayed on the screen.

  3. A Parallel Processing Algorithm for Remote Sensing Classification

    Science.gov (United States)

    Gualtieri, J. Anthony

    2005-01-01

    A current thread in parallel computation is the use of cluster computers created by networking a few to thousands of commodity general-purpose workstation-level computers using the Linux operating system. For example, on the Medusa cluster at NASA/GSFC this provides supercomputing performance, 130 Gflops (Linpack benchmark), at moderate cost, $370K. However, to be useful for scientific computing in the area of Earth science, issues of ease of programming, access to existing scientific libraries, and portability of existing code need to be considered. In this paper, I address these issues in the context of tools for rendering earth science remote sensing data into useful products. In particular, I focus on a problem that can be decomposed into a set of independent tasks, which on a serial computer would be performed sequentially, but with a cluster computer can be performed in parallel, giving an obvious speedup. To make the ideas concrete, I consider the problem of classifying hyperspectral imagery where some ground truth is available to train the classifier. In particular I will use the Support Vector Machine (SVM) approach as applied to hyperspectral imagery. The approach will be to introduce notions about parallel computation and then to restrict the development to the SVM problem. Pseudocode (an outline of the computation) will be described and then details specific to the implementation will be given. Then timing results will be reported to show what speedups are possible using parallel computation. The paper will close with a discussion of the results.

  4. Stereotactic biopsy aided by a computer graphics workstation: experience with 200 consecutive cases.

    Science.gov (United States)

    Ulm, A J; Bova, F J; Friedman, W A

    2001-12-01

    The advent of modern computer technology has made it possible to examine not just the target point, but the entire trajectory in planning for stereotactic biopsies. Two hundred consecutive biopsies were performed by one surgeon, utilizing a computer graphics workstation. The target point, entry point, and complete trajectory were carefully scrutinized and adjusted to minimize potential complications. Pathologically abnormal tissue was obtained in 197 cases (98.5%). There was no mortality in this series. Symptomatic hemorrhages occurred in 4 cases (2%). Computer graphics workstations facilitate safe and effective biopsies in virtually any brain area.

  5. Implementation of Active Workstations in University Libraries—A Comparison of Portable Pedal Exercise Machines and Standing Desks

    Directory of Open Access Journals (Sweden)

    Camille Bastien Tardif

    2018-06-01

    Full Text Available Sedentary behaviors are an important issue worldwide, as prolonged sitting time has been associated with health problems. Recently, active workstations have been developed as a strategy to counteract sedentary behaviors. The present study examined the rationale and perceptions of university students and staff following their first use of an active workstation in library settings. Ninety-nine volunteers completed a self-administered questionnaire after using a portable pedal exercise machine (PPEM) or a standing desk (SD). Computer tasks were performed to a larger extent on the SD (p = 0.001) and paperwork tasks on a PPEM (p = 0.037). Men preferred the SD and women chose the PPEM (p = 0.037). The appreciation of the PPEM was higher than that of the SD, owing to its higher scores on the effective, useful, functional, convenient, and comfortable dimensions. Younger participants (<25 years of age) found the active workstation more pleasant to use than older participants did, and participants who spent between 4 and 8 h per day in a seated position found active workstations more effective and convenient than participants sitting fewer than 4 h per day did. The results of this study are a preliminary step toward better understanding the feasibility and acceptability of active workstations on university campuses.

  6. An Imaging And Graphics Workstation For Image Sequence Analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  7. Viewport: an object-oriented approach to integrate workstation software for tile and stack mode display.

    Science.gov (United States)

    Ghosh, S; Andriole, K P; Avrin, D E

    1997-08-01

    Diagnostic workstation design has migrated towards display presentation in one of two modes: tiled images or stacked images. It is our impression that the workstation setup or configuration in each of these two modes is rather distinct. We sought to establish a commonality to simplify software design, and to enable a single descriptor method to facilitate folder manager development of "hanging" protocols. All current workstation designs use a combination of "off-screen" and "on-screen" memory whether or not they use a dedicated display subsystem, or merely a video board. Most diagnostic workstations also have two or more monitors. Our central concept is that of a "logical" viewport that can be smaller than, the same size as, or larger than a single monitor. Each port "views" an image data sequence loaded into offscreen memory. Each viewport can display one or more images in sequence in a one-on-one or traditionally tiled presentation. Viewports can be assigned to the available monitor "real estate" in any manner that fits. For example, a single sequence computed tomography (CT) study could be displayed across all monitors in a tiled appearance by assigning a single large viewport to the monitors. At the other extreme, a multisequence magnetic resonance (MR) study could be compared with a similar previous study by assigning four viewports to each monitor, single image display per viewport, and assigning four of the sequences of the current study to the left monitor viewports, and four of the earlier study to the right monitor viewports. Ergonomic controls activate scrolling through the off-screen image sequence data. Workstation folder manager hanging protocols could then specify viewports, number of images per viewport, and the automatic assignment of appropriately named sequences of current and previous studies to the viewports on a radiologist-specific basis. Furthermore, software development is simplified by common base objects and methods of the tile and stack
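
    As an illustrative aside (not the authors' software; the class, attribute, and sequence names below are invented), the "logical viewport" idea maps naturally onto a very small object model in which stack mode and tile mode differ only in how many images a viewport shows at once:

        # Hedged sketch of a logical viewport; one instance per image sequence.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Viewport:
            """A logical view onto one image sequence held in off-screen memory."""
            sequence: List[str]           # image identifiers for one series
            images_per_view: int = 1      # 1 = stack mode, >1 = tiled presentation
            position: int = 0             # index of the first on-screen image

            def visible(self) -> List[str]:
                return self.sequence[self.position:self.position + self.images_per_view]

            def scroll(self, step: int = 1) -> None:
                limit = max(len(self.sequence) - self.images_per_view, 0)
                self.position = min(max(self.position + step, 0), limit)

        @dataclass
        class Monitor:
            viewports: List[Viewport] = field(default_factory=list)

        # A "hanging protocol" then reduces to how many viewports each monitor
        # gets and which named sequences are routed to them.
        left = Monitor([Viewport([f"ct_{i}" for i in range(120)], images_per_view=12)])
        print(left.viewports[0].visible())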

  8. Installation of MCNP on 64-bit parallel computers

    International Nuclear Information System (INIS)

    Meginnis, A.B.; Hendricks, J.S.; McKinney, G.W.

    1995-01-01

    The Monte Carlo radiation transport code MCNP has been successfully ported to two 64-bit workstations, the SGI and DEC Alpha. We found the biggest problem for installation on these machines to be Fortran and C mismatches in argument passing. Correction of these mismatches enabled, for the first time, dynamic memory allocation on 64-bit workstations. Although the 64-bit hardware is faster because 8 bytes are processed at a time rather than 4 bytes, we found no speed advantage in true 64-bit coding versus implicit double precision when porting an existing code to the 64-bit workstation architecture. We did find that PVM multitasking is very successful and represents a significant performance enhancement for scientific workstations.

  9. Effect of Active Workstation on Energy Expenditure and Job Performance: A Systematic Review and Meta-analysis.

    Science.gov (United States)

    Cao, Chunmei; Liu, Yu; Zhu, Weimo; Ma, Jiangjun

    2016-05-01

    Recently developed active workstations could become a potential means for worksite physical activity and wellness promotion. The aim of this review was to quantitatively examine the effectiveness of active workstations on energy expenditure (EE) and job performance. The literature search was conducted in 6 databases (PubMed, SPORTDiscus, Web of Science, ProQuest, ScienceDirect, and Scopus) for articles published up to February 2014, from which a systematic review and meta-analysis was conducted. The cumulative analysis for EE showed a significant increase in EE when using an active workstation [mean effect size (MES): 1.47; 95% confidence interval (CI): 1.22 to 1.72, P < ...]. The analysis for job performance indicated 2 findings: (1) the active workstation did not affect selective attention, processing speed, speech quality, reading comprehension, interpretation, or accuracy of transcription; and (2) it could decrease typing speed (MES: -0.55; CI: -0.88 to -0.21, P < ...). While some measures of job performance were significantly lower, others were not. As a result, there would be little effect on real-life work productivity if job tasks were arranged appropriately.

  10. Fast implementations of 3D PET reconstruction using vector and parallel programming techniques

    International Nuclear Information System (INIS)

    Guerrero, T.M.; Cherry, S.R.; Dahlbom, M.; Ricci, A.R.; Hoffman, E.J.

    1993-01-01

    Computationally intensive techniques that offer potential clinical use have arisen in nuclear medicine. Examples include iterative reconstruction, 3D PET data acquisition and reconstruction, and 3D image volume manipulation including image registration. One obstacle in achieving clinical acceptance of these techniques is the computational time required. This study focuses on methods to reduce the computation time for 3D PET reconstruction through the use of fast computer hardware, vector and parallel programming techniques, and algorithm optimization. The strengths and weaknesses of i860 microprocessor based workstation accelerator boards are investigated in implementations of 3D PET reconstruction

  11. Analysis of Parallel Algorithms on SMP Node and Cluster of Workstations Using Parallel Programming Models with New Tile-based Method for Large Biological Datasets.

    Science.gov (United States)

    Shrimankar, D D; Sathe, S R

    2016-01-01

    Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes, and programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. OpenMP programs, however, cannot scale beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes at the cost of internode communication overhead. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that communication overhead is incurred even in OpenMP loop execution and increases with the number of participating cores, and we present a communication model to approximate the overhead of OpenMP loops. Our results hold across a large variety of input data files. We have also developed our own load balancing and cache optimization techniques for the message-passing model. Our experimental results show that these techniques give optimum performance of our parallel algorithm for various input parameters, such as sequence size and tile size, on a wide variety of multicore architectures.
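
    The tile-level parallelism exploited in tiled alignment methods comes from the anti-diagonal dependency structure of the score matrix. The sketch below is not the authors' code (scoring parameters and tile size are arbitrary); it is a serial Needleman-Wunsch scorer organized so that all tiles on one anti-diagonal are mutually independent and could be handed to OpenMP threads or MPI ranks:

        # Hedged sketch: tile-ordered Needleman-Wunsch scoring.
        import numpy as np

        def nw_score(a, b, match=1, mismatch=-1, gap=-1, tile=32):
            H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
            H[:, 0] = gap * np.arange(len(a) + 1)
            H[0, :] = gap * np.arange(len(b) + 1)
            n_ti, n_tj = -(-len(a) // tile), -(-len(b) // tile)   # tiles per axis
            for d in range(n_ti + n_tj - 1):                      # anti-diagonals of tiles
                # Each tile on this anti-diagonal depends only on tiles from
                # earlier anti-diagonals, so the inner ti loop is parallelizable.
                for ti in range(max(0, d - n_tj + 1), min(d + 1, n_ti)):
                    tj = d - ti
                    for i in range(ti * tile + 1, min((ti + 1) * tile, len(a)) + 1):
                        for j in range(tj * tile + 1, min((tj + 1) * tile, len(b)) + 1):
                            s = match if a[i - 1] == b[j - 1] else mismatch
                            H[i, j] = max(H[i - 1, j - 1] + s,
                                          H[i - 1, j] + gap,
                                          H[i, j - 1] + gap)
            return H[-1, -1]

        print(nw_score("GATTACA", "GCATGCU"))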

  12. Analysis of Parallel Algorithms on SMP Node and Cluster of Workstations Using Parallel Programming Models with New Tile-based Method for Large Biological Datasets

    Science.gov (United States)

    Shrimankar, D. D.; Sathe, S. R.

    2016-01-01

    Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today’s supercomputers often consist of clusters of SMP nodes, and programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. OpenMP programs, however, cannot scale beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes at the cost of internode communication overhead. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that communication overhead is incurred even in OpenMP loop execution and increases with the number of participating cores, and we present a communication model to approximate the overhead of OpenMP loops. Our results hold across a large variety of input data files. We have also developed our own load balancing and cache optimization techniques for the message-passing model. Our experimental results show that these techniques give optimum performance of our parallel algorithm for various input parameters, such as sequence size and tile size, on a wide variety of multicore architectures. PMID:27932868

  13. A comparison between digital images viewed on a picture archiving and communication system diagnostic workstation and on a PC-based remote viewing system by emergency physicians.

    Science.gov (United States)

    Parasyn, A; Hanson, R M; Peat, J K; De Silva, M

    1998-02-01

    Picture Archiving and Communication Systems (PACS) make possible the viewing of radiographic images on computer workstations located where clinical care is delivered. By the nature of their work this feature is particularly useful for emergency physicians who view radiographic studies for information and use them to explain results to patients and their families. However, the high cost of PACS diagnostic workstations with fuller functionality places limits on the number of and therefore the accessibility to workstations in the emergency department. This study was undertaken to establish how well less expensive personal computer-based workstations would work to support these needs of emergency physicians. The study compared the outcome of observations by 5 emergency physicians on a series of radiographic studies containing subtle abnormalities displayed on both a PACS diagnostic workstation and on a PC-based workstation. The 73 digitized radiographic studies were randomly arranged on both types of workstation over four separate viewing sessions for each emergency physician. There was no statistical difference between a PACS diagnostic workstation and a PC-based workstation in this trial. The mean correct ratings were 59% on the PACS diagnostic workstations and 61% on the PC-based workstations. These findings also emphasize the need for prompt reporting by a radiologist.

  14. General specifications for the development of a USL/DBMS NASA/PC R and D distributed workstation

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Chum, Frank Y.

    1984-01-01

    The general specifications for the development of a PC-based distributed workstation (PCDWS) for an information storage and retrieval systems environment are defined. This research proposes the development of a PCDWS prototype as part of the University of Southwestern Louisiana Data Base Management System (USL/DBMS) NASA/PC R and D project in the PC-based workstation environment.

  15. Analysis of multigrid methods on massively parallel computers: Architectural implications

    Science.gov (United States)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
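
    The conclusion about long messages follows from a simple latency/bandwidth cost model: a high fixed start-up cost is only amortized when messages are long. A back-of-envelope sketch (the alpha and beta values below are invented for illustration, not parameters from the paper):

        # Back-of-envelope message cost model; alpha and beta are illustrative.
        alpha = 50e-6     # fixed start-up (latency) cost per message, seconds
        beta = 10e-9      # per-word transfer time, seconds

        for words in (1, 10, 100, 1000):
            t = alpha + beta * words
            rate = words / t / 1e6
            print(f"{words:5d}-word message: {t * 1e6:7.2f} us, {rate:7.2f} Mwords/s effective")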

  16. Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface

    Science.gov (United States)

    Ratib, Osman M.; Huang, H. K.

    1989-05-01

    A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user interface, its popularity among the academic and medical community, and its low cost. In comparison to other microcomputer-based systems, the Macintosh II offers the following advantages: the extreme standardization of its user interface, file system, and networking, and the availability of a very large variety of commercial software packages. In the current configuration the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using a standard TCP-IP protocol and stored locally on magnetic disk. The use of high resolution screens (1024 x 768 pixels x 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images, and scintigraphic images.

  17. Active Workstations Do Not Impair Executive Function in Young and Middle-Age Adults.

    Science.gov (United States)

    Ehmann, Peter J; Brush, Christopher J; Olson, Ryan L; Bhatt, Shivang N; Banu, Andrea H; Alderman, Brandon L

    2017-05-01

    This study aimed to examine the effects of self-selected low-intensity walking on an active workstation on executive functions (EF) in young and middle-age adults. Using a within-subjects design, 32 young (20.6 ± 2.0 yr) and 26 middle-age (45.6 ± 11.8 yr) adults performed low-intensity treadmill walking and seated control conditions in randomized order on separate days, while completing an EF test battery. EF was assessed using modified versions of the Stroop (inhibition), Sternberg (working memory), Wisconsin Card Sorting (cognitive flexibility), and Tower of London (global EF) cognitive tasks. Behavioral performance outcomes were assessed using composite task z-scores and traditional measures of reaction time and accuracy. Average HR and step count were also measured throughout. The expected task difficulty effects were found for reaction time and accuracy. No significant main effects or interactions as a function of treadmill walking were found for tasks assessing global EF and the three individual EF domains. Accuracy on the Tower of London task was slightly impaired during slow treadmill walking for both age-groups. Middle-age adults displayed longer planning times for more difficult conditions of the Tower of London during walking compared with sitting. A 50-min session of low-intensity treadmill walking on an active workstation resulted in accruing approximately 4500 steps. These findings suggest that executive function performance remains relatively unaffected while walking on an active workstation, further supporting the use of treadmill workstations as an effective approach to increase physical activity and reduce sedentary time in the workplace.

  18. Experience with workstations for accelerator control at the CERN SPS

    International Nuclear Information System (INIS)

    Ogle, A.; Ulander, J.; Wilkie, I.

    1990-01-01

    The CERN super proton synchrotron (SPS) control system is currently undergoing a major long-term upgrade. This paper reviews progress on the high-level application software with particular reference to the operator interface. An important feature of the control-system upgrade is the move from consoles with a number of fixed screens and limited multitasking ability to workstations with the potential to display a large number of windows and perform a number of independent tasks simultaneously. This workstation environment thus permits the operator to run tasks in one machine for which he previously had to monopolize two or even three old consoles. However, the environment also allows the operator to cover the screen with a multitude of windows, leading to complete confusion. Initial requests to present some form of 'global status' of the console proved to be naive, and several iterations were necessary before the operators were satisfied. (orig.)

  19. Clinical impact and value of workstation single sign-on.

    Science.gov (United States)

    Gellert, George A; Crouch, John F; Gibson, Lynn A; Conklin, George S; Webster, S Luke; Gillean, John A

    2017-05-01

    CHRISTUS Health began implementation of computer workstation single sign-on (SSO) in 2015. SSO technology utilizes a badge reader placed at each workstation where clinicians swipe or "tap" their identification badges. To assess the impact of SSO implementation in reducing clinician time logging in to various clinical software programs, and in financial savings from migrating to a thin client that enabled replacement of traditional hard drive computer workstations. Following implementation of SSO, a total of 65,202 logins were sampled systematically during a 7day period among 2256 active clinical end users for time saved in 6 facilities when compared to pre-implementation. Dollar values were assigned to the time saved by 3 groups of clinical end users: physicians, nurses and ancillary service providers. The reduction of total clinician login time over the 7day period showed a net gain of 168.3h per week of clinician time - 28.1h (2.3 shifts) per facility per week. Annualized, 1461.2h of mixed physician and nursing time is liberated per facility per annum (121.8 shifts of 12h per year). The annual dollar cost savings of this reduction of time expended logging in is $92,146 per hospital per annum and $1,658,745 per annum in the first phase implementation of 18 hospitals. Computer hardware equipment savings due to desktop virtualization increases annual savings to $2,333,745. Qualitative value contributions to clinician satisfaction, reduction in staff turnover, facilitation of adoption of EHR applications, and other benefits of SSO are discussed. SSO had a positive impact on clinician efficiency and productivity in the 6 hospitals evaluated, and is an effective and cost-effective method to liberate clinician time from repetitive and time consuming logins to clinical software applications. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Parallel MCNP Monte Carlo transport calculations with MPI

    International Nuclear Information System (INIS)

    Wagner, J.C.; Haghighat, A.

    1996-01-01

    The steady increase in computational performance has made Monte Carlo calculations for large/complex systems possible. However, in order to make these calculations practical, order-of-magnitude increases in performance are necessary. The Monte Carlo method is inherently parallel (particles are simulated independently) and thus has the potential for near-linear speedup with respect to the number of processors. Further, the ever-increasing accessibility of parallel computers, such as workstation clusters, facilitates the practical use of parallel Monte Carlo. Recognizing the nature of the Monte Carlo method and the trends in available computing, the code developers at Los Alamos National Laboratory implemented a message-passing version of the general-purpose Monte Carlo radiation transport code MCNP (version 4A). The PVM package was chosen by the MCNP code developers because it supports a variety of communication networks, several UNIX platforms, and heterogeneous computer systems. This PVM version of MCNP has been shown to produce speedups that approach the number of processors and thus is a very useful tool for transport analysis. Due to software incompatibilities on the local IBM SP2, PVM has not been available, and thus it is not possible to take advantage of this useful tool. Hence, it became necessary to implement an alternative message-passing library package into MCNP. Because the message-passing interface (MPI) is supported on the local system, takes advantage of the high-speed communication switches in the SP2, and is considered to be the emerging standard, it was selected.
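
    The near-linear speedup rests on particle histories being simulated independently, with only a final tally reduction. A minimal mpi4py sketch of that pattern (a toy estimator, not MCNP itself; history counts are placeholders):

        # Toy mpi4py sketch: independent histories per rank, one reduction at the end.
        # Run with, e.g.:  mpirun -np 4 python mc_toy.py
        import random

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        HISTORIES = 1_000_000                 # total histories (placeholder)
        local_n = HISTORIES // size           # independent work per rank

        rng = random.Random(1234 + rank)      # decorrelated stream per rank
        hits = sum(1 for _ in range(local_n)
                   if rng.random() ** 2 + rng.random() ** 2 < 1.0)

        total = comm.reduce(hits, op=MPI.SUM, root=0)   # only communication step
        if rank == 0:
            print("pi estimate:", 4.0 * total / (local_n * size))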

  1. Implementation of PHENIX trigger algorithms on massively parallel computers

    International Nuclear Information System (INIS)

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

    The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms which identify potentially interesting events and reduce the data acquisition rate to levels that are manageable by the electronics. Such algorithms being parallel in nature can be simulated off-line using massively parallel computers. The PHENIX experiment intends to investigate the possible existence of a new phase of matter called the quark gluon plasma which has been theorized to have existed in very early stages of the evolution of the universe by studying collisions of heavy nuclei at ultra-relativistic energies. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze and also simulate them at rates similar to the data collection ones imposes enormous computation demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for the simulation of the PHENIX detector response. Fine and coarse grain approaches have been studied and evaluated. Depending on the application the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single instruction and multiple instruction computers is also made and possible applications of the single instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  2. Assessment of a cooperative workstation.

    Science.gov (United States)

    Beuscart, R J; Molenda, S; Souf, N; Foucher, C; Beuscart-Zephir, M C

    1996-01-01

    Groupware and new Information Technologies have now made it possible for people in different places to work together in synchronous cooperation. Very often, designers of this new type of software are not provided with a model of the common workspace, which is prejudicial to software development and its acceptance by potential users. The authors take the example of a task of medical co-diagnosis, using a multi-media communication workstation. Synchronous cooperative work is made possible by using local ETHERNET or public ISDN Networks. A detailed ergonomic task analysis studies the cognitive functioning of the physicians involved, compares their behaviour in the normal and the mediatized situations, and leads to an interpretation of the likely causes for success or failure of CSCW tools.

  3. The BioPhotonics Workstation: from university research to commercial prototype

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    I will outline the specifications of the compact BioPhotonics Workstation we recently have developed that utilizes high-speed spatial light modulation to generate an array of reconfigurable laser-traps making 3D real-time optical manipulation of advanced structures possible with the use of joysti...

  4. Fine-grain Parallel Processing On A Commodity Platform: A Solution For The Atlas Second-level Trigger

    CERN Document Server

    Boosten, M

    2003-01-01

    From 2005 on, CERN expects to have a new accelerator available for experiments: the Large Hadron Collider (LHC), with a circumference of 27 kilometres. The ATLAS detector produces 40 TeraBytes/s of data. Only a fraction of all data is interesting. A computer system, called the trigger, selects the interesting data through real-time data analysis. The trigger consists of three subsequent filtering levels: LVL1, LVL2, and LVL3. LVL1 will be implemented using special-purpose hardware. LVL2 and LVL3 will be implemented using a Network Of Workstations (NOW). A major problem is to make efficient use of the computing power available in each workstation. The major contribution of this designer's project is an infrastructure named MESH. MESH enables CERN to cost-effectively implement the LVL2 trigger. Furthermore, due to the use of commodity technology, MESH enables the LVL2 trigger to be cost-effectively upgraded and supported during its 20 year lifecycle. MESH facilitates efficient parallel processing on PCs interc...

  5. Emulating conventional operator interfaces on window-based workstations

    International Nuclear Information System (INIS)

    Carr, G.P.

    1990-01-01

    This paper explores an approach to support the LAMPF and PSR control systems on VAX/VMS workstations using DECwindows and VI Corporation Data Views as the operator interface. The PSR control system was recently turned over to MP division and the two control-system staffs were merged into one group. One of the goals of this new group is to develop a common workstation-based operator console and interface which can be used in a single control room controlling both the linac and proton storage ring. The new console operator interface will need a high-level graphics toolkit for its implementation. During the conversion to the new consoles it will also probably be necessary to write a package to emulate the current operator interfaces at the software level. This paper describes a project to evaluate the appropriateness of VI Corporation's Data Views graphics package for use in the LAMPF control-system environment by using it to write an emulation of the LAMPF touch-panel interface to a large LAMPF control-system application program. A secondary objective of this project was to explore any productivity increases that might be realized by using an object-oriented graphics package and graphics editor. (orig.)

  6. A workstation based spectrometry application for ECR ion source [Paper No.: G5

    International Nuclear Information System (INIS)

    Suresh Babu, R.M. (PS Div.)

    1993-01-01

    A program for an Electron Cyclotron Resonance (ECR) Ion Source beam diagnostics application in a X-Windows/Motif based workstation environment is discussed. The application program controls the hardware and acquires data via a front end computer across a local area network. The data is subsequently processed for displaying on the workstation console. The timing for data acquisition and control is determined by the particle source timing. The user interface has been implemented using the Motif widget set and the actions have been implemented through call back routines. The equipment interface is through a set of database driven calls across the network. (author). 7 refs., 1 fig

  7. Event analysis using a massively parallel processor

    International Nuclear Information System (INIS)

    Bale, A.; Gerelle, E.; Messersmith, J.; Warren, R.; Hoek, J.

    1990-01-01

    This paper describes a system for performing histogramming of n-tuple data at interactive rates using a commercial SIMD processor array connected to a work-station running the well-known Physics Analysis Workstation software (PAW). Results indicate that an order of magnitude performance improvement over current RISC technology is easily achievable

  8. Experience in using workstations as hosts in an accelerator control environment

    International Nuclear Information System (INIS)

    Abola, A.; Casella, R.; Clifford, T.; Hoff, L.; Katz, R.; Kennell, S.; Mandell, S.; McBreen, E.; Weygand, D.P.

    1987-03-01

    A new control system has been used for light ion acceleration at the Alternating Gradient Synchrotron (AGS). The control system uses Apollo workstations in the dual role of console hardware computer and controls system host. It has been found that having a powerful dedicated CPU with a demand paging virtual memory OS featuring strong interprocess communication, mapped memory shared files, shared code, and multi-window capabilities, allows us to provide an efficient operation environment in which users may view and manage several control processes simultaneously. The same features which make workstations good console computers also provide an outstanding platform for code development. The software for the system, consisting of about 30K lines of ''C'' code, was developed on schedule, ready for light ion commissioning. System development is continuing with work being done on applications programs

  9. Experience in using workstations as hosts in an accelerator control environment

    International Nuclear Information System (INIS)

    Abola, A.; Casella, R.; Clifford, T.; Hoff, L.; Katz, R.; Kennell, S.; Mandell, S.; McBreen, E.; Weygand, D.P.

    1987-01-01

    A new control system has been used for light ion acceleration at the Alternating Gradient Synchrotron (AGS). The control system uses Apollo workstations in the dual role of console hardware computer and controls system host. It has been found that having a powerful dedicated CPU with a demand paging virtual memory OS featuring strong interprocess communication, mapped memory shared files, shared code, and multi-window capabilities, allows us to provide an efficient operation environment in which users may view and manage several control processes simultaneously. The same features which make workstations good console computers also provide an outstanding platform for code development. The software for the system, consisting of about 30K lines of ''C'' code, was developed on schedule, ready for light ion commissioning. System development is continuing with work being done on applications programs

  10. A worldwide flock of Condors : load sharing among workstation clusters

    NARCIS (Netherlands)

    Epema, D.H.J.; Livny, M.; Dantzig, van R.; Evers, X.; Pruyne, J.

    1996-01-01

    Condor is a distributed batch system for sharing the workload of compute-intensive jobs in a pool of unix workstations connected by a network. In such a Condor pool, idle machines are spotted by Condor and allocated to queued jobs, thus putting otherwise unutilized capacity to efficient use. When

  11. A workstation-integrated peer review quality assurance program: pilot study

    International Nuclear Information System (INIS)

    O’Keeffe, Margaret M; Davis, Todd M; Siminoski, Kerry

    2013-01-01

    The surrogate indicator of radiological excellence that has become accepted is consistency of assessments between radiologists, and the technique that has become the standard for evaluating concordance is peer review. This study describes the results of a workstation-integrated peer review program in a busy outpatient radiology practice. Workstation-based peer review was performed using the software program Intelerad Peer Review. Cases for review were randomly chosen from those being actively reported. If an appropriate prior study was available, and if the reviewing radiologist and the original interpreting radiologist had not exceeded review targets, the case was scored using the modified RADPEER system. There were 2,241 cases randomly assigned for peer review. Of selected cases, 1,705 (76%) were interpreted. Reviewing radiologists agreed with prior reports in 99.1% of assessments. Positive feedback (score 0) was given in three cases (0.2%) and concordance (scores of 0 to 2) was assigned in 99.4%, similar to reported rates of 97.0% to 99.8%. Clinically significant discrepancies (scores of 3 or 4) were identified in 10 cases (0.6%). Eighty-eight percent of reviewed radiologists found the reviews worthwhile, 79% found scores appropriate, and 65% felt feedback was appropriate. Two-thirds of radiologists found case rounds discussing significant discrepancies to be valuable. The workstation-based computerized peer review process used in this pilot project was seamlessly incorporated into the normal workday and met most criteria for an ideal peer review system. Clinically significant discrepancies were identified in 0.6% of cases, similar to published outcomes using the RADPEER system. Reviewed radiologists felt the process was worthwhile

  12. A workstation-integrated peer review quality assurance program: pilot study

    Science.gov (United States)

    2013-01-01

    Background The surrogate indicator of radiological excellence that has become accepted is consistency of assessments between radiologists, and the technique that has become the standard for evaluating concordance is peer review. This study describes the results of a workstation-integrated peer review program in a busy outpatient radiology practice. Methods Workstation-based peer review was performed using the software program Intelerad Peer Review. Cases for review were randomly chosen from those being actively reported. If an appropriate prior study was available, and if the reviewing radiologist and the original interpreting radiologist had not exceeded review targets, the case was scored using the modified RADPEER system. Results There were 2,241 cases randomly assigned for peer review. Of selected cases, 1,705 (76%) were interpreted. Reviewing radiologists agreed with prior reports in 99.1% of assessments. Positive feedback (score 0) was given in three cases (0.2%) and concordance (scores of 0 to 2) was assigned in 99.4%, similar to reported rates of 97.0% to 99.8%. Clinically significant discrepancies (scores of 3 or 4) were identified in 10 cases (0.6%). Eighty-eight percent of reviewed radiologists found the reviews worthwhile, 79% found scores appropriate, and 65% felt feedback was appropriate. Two-thirds of radiologists found case rounds discussing significant discrepancies to be valuable. Conclusions The workstation-based computerized peer review process used in this pilot project was seamlessly incorporated into the normal workday and met most criteria for an ideal peer review system. Clinically significant discrepancies were identified in 0.6% of cases, similar to published outcomes using the RADPEER system. Reviewed radiologists felt the process was worthwhile. PMID:23822583

  13. Functionalized 2PP structures for the BioPhotonics Workstation

    DEFF Research Database (Denmark)

    Matsuoka, Tomoyo; Nishi, Masayuki; Sakakura, Masaaki

    2011-01-01

    In its standard version, our BioPhotonics Workstation (BWS) can generate multiple controllable counter-propagating beams to create real-time user-programmable optical traps for stable three-dimensional control and manipulation of a plurality of particles. The combination of the platform with micr...... on the BWS platform by functionalizing them with silica-based sol-gel materials inside which dyes can be entrapped....

  14. Habitat Demonstration Unit Medical Operations Workstation Upgrades

    Science.gov (United States)

    Trageser, Katherine H.

    2011-01-01

    This paper provides an overview of the design and fabrication associated with upgrades for the Medical Operations Workstation in the Habitat Demonstration Unit. The work spanned a ten week period. The upgrades will be used during the 2011 Desert Research and Technology Studies (Desert RATS) field campaign. Upgrades include a deployable privacy curtain system, a deployable tray table, an easily accessible biological waste container, reorganization and labeling of the medical supplies, and installation of a retractable camera. All of the items were completed within the ten week period.

  15. Some Ideas on the Microcomputer and the Information/Knowledge Workstation.

    Science.gov (United States)

    Boon, J. A.; Pienaar, H.

    1989-01-01

    Identifies the optimal goal of knowledge workstations as the harmony of technology and human decision-making behaviors. Two types of decision-making processes are described and the application of each type to experimental and/or operational situations is discussed. Suggestions for technical solutions to machine-user interfaces are then offered.…

  16. Comparison of computer workstation with light box for detecting setup errors from portal images

    International Nuclear Information System (INIS)

    Boxwala, Aziz A.; Chaney, Edward L.; Fritsch, Daniel S.; Raghavan, Suraj; Coffey, Christopher S.; Major, Stacey A.; Muller, Keith E.

    1999-01-01

    Purpose: Observer studies were conducted to test the hypothesis that radiation oncologists using a computer workstation for portal image analysis can detect setup errors at least as accurately as when following standard clinical practice of inspecting portal films on a light box. Methods and Materials: In a controlled observer study, nine radiation oncologists used a computer workstation, called PortFolio, to detect setup errors in 40 realistic digitally reconstructed portal radiograph (DRPR) images. PortFolio is a prototype workstation for radiation oncologists to display and inspect digital portal images for setup errors. PortFolio includes tools for image enhancement; alignment of crosshairs, field edges, and anatomic structures on reference and acquired images; measurement of distances and angles; and viewing registered images superimposed on one another. The test DRPRs contained known in-plane translation or rotation errors in the placement of the fields over target regions in the pelvis and head. Test images used in the study were also printed on film for observers to view on a light box and interpret using standard clinical practice. The mean accuracy for error detection for each approach was measured and the results were compared using repeated measures analysis of variance (ANOVA) with the Geisser-Greenhouse test statistic. Results: The results indicate that radiation oncologists participating in this study could detect and quantify in-plane rotation and translation errors more accurately with PortFolio compared to standard clinical practice. Conclusions: Based on the results of this limited study, it is reasonable to conclude that workstations similar to PortFolio can be used efficaciously in clinical practice

  17. STATUS OF THE LINUX PC CLUSTER FOR BETWEEN-PULSE DATA ANALYSES AT DIII-D

    International Nuclear Information System (INIS)

    PENG, Q; GROEBNER, R.J; LAO, L.L; SCHACHTER, J.; SCHISSEL, D.P; WADE, M.R.

    2001-08-01

    OAK-B135 Some analyses that survey experimental data are carried out at a sparse sample rate between pulses during tokamak operation and/or completed as a batch job overnight because the complete analysis on a single fast workstation cannot fit in the narrow time window between two pulses. Scientists therefore miss the opportunity to use these results to guide experiments quickly. With a dedicated Beowulf-type cluster at a cost less than that of a workstation, these analyses can be accomplished between pulses and the analyzed data made available for the research team during tokamak operation. A Linux PC cluster comprising 12 processors was installed at the DIII-D National Fusion Facility in CY00 and expanded to 24 processors in CY01 to automatically perform between-pulse magnetic equilibrium reconstructions using the EFIT code written in Fortran, CER analyses using the CERQUICK code written in IDL, and full profile fitting analyses (n_e, T_e, T_i, V_r, Z_eff) using the IDL code ZIPFIT. This paper reports the current status of the system and discusses some problems and concerns raised during the implementation and expansion of the system.

  18. Space-charge-dominated beam dynamics simulations using the massively parallel processors (MPPs) of the Cray T3D

    International Nuclear Information System (INIS)

    Liu, H.

    1996-01-01

    Computer simulations using the multi-particle code PARMELA with a three-dimensional point-by-point space charge algorithm have turned out to be very helpful in supporting injector commissioning and operations at Thomas Jefferson National Accelerator Facility (Jefferson Lab, formerly called CEBAF). However, this algorithm, which defines a typical N^2 problem in CPU time scaling, is very time-consuming when N, the number of macro-particles, is large. Therefore, it is attractive to use massively parallel processors (MPPs) to speed up the simulations. Motivated by this, the authors modified the space charge subroutine for using the MPPs of the Cray T3D. The techniques used to parallelize and optimize the code on the T3D are discussed in this paper. The performance of the code on the T3D is examined in comparison with a Parallel Vector Processing supercomputer of the Cray C90 and an HP 735/15 high-end workstation.
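
    A toy version of a point-by-point space-charge kick, with the O(N^2) outer loop split across MPI ranks, is sketched below. This is illustrative only: the particle count and units are placeholders, and PARMELA's actual physics and data layout are not reproduced.

        # Toy pairwise inverse-square kick; each rank handles a slice of particles.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        N = 2000
        pos = None
        if rank == 0:
            pos = np.random.default_rng(0).normal(size=(N, 3))
        pos = comm.bcast(pos, root=0)                 # all ranks need all positions

        mine = np.array_split(np.arange(N), size)[rank]
        kick = np.zeros((len(mine), 3))
        for i, p in enumerate(mine):
            r = pos[p] - pos                           # vectors from every particle
            d2 = np.einsum("ij,ij->i", r, r)
            d2[p] = np.inf                             # exclude self-interaction
            kick[i] = (r / d2[:, None] ** 1.5).sum(axis=0)

        pieces = comm.gather(kick, root=0)             # reassemble on rank 0
        if rank == 0:
            print(np.vstack(pieces).shape)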

  19. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    International Nuclear Information System (INIS)

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-01-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic, transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion based preconditioner for scattering dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated
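
    The core solver named above is a preconditioned conjugate gradient iteration, which can be sketched generically as follows (a Jacobi-preconditioned CG in NumPy on a tiny symmetric positive definite test matrix; this is not the DANTE code):

        # Generic Jacobi-preconditioned conjugate gradient; the tiny SPD matrix
        # stands in for an assembled transport operator.
        import numpy as np

        def pcg(A, b, tol=1e-10, max_iter=500):
            x = np.zeros_like(b)
            r = b - A @ x
            M_inv = 1.0 / np.diag(A)            # Jacobi (diagonal) preconditioner
            z = M_inv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(pcg(A, b))                         # approaches [1/11, 7/11]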

  20. A bench-top automated workstation for nucleic acid isolation from clinical sample types.

    Science.gov (United States)

    Thakore, Nitu; Garber, Steve; Bueno, Arial; Qu, Peter; Norville, Ryan; Villanueva, Michael; Chandler, Darrell P; Holmberg, Rebecca; Cooney, Christopher G

    2018-04-18

    Systems that automate extraction of nucleic acid from cells or viruses in complex clinical matrices have tremendous value even in the absence of an integrated downstream detector. We describe our bench-top automated workstation that integrates our previously-reported extraction method - TruTip - with our newly-developed mechanical lysis method. This is the first report of this method for homogenizing viscous and heterogeneous samples and lysing difficult-to-disrupt cells using "MagVor": a rotating magnet that rotates a miniature stir disk amidst glass beads confined inside of a disposable tube. Using this system, we demonstrate automated nucleic acid extraction from methicillin-resistant Staphylococcus aureus (MRSA) in nasopharyngeal aspirate (NPA), influenza A in nasopharyngeal swabs (NPS), human genomic DNA from whole blood, and Mycobacterium tuberculosis in NPA. The automated workstation yields nucleic acid with comparable extraction efficiency to manual protocols, which include commercially-available Qiagen spin column kits, across each of these sample types. This work expands the scope of applications beyond previous reports of TruTip to include difficult-to-disrupt cell types and automates the process, including a method for removal of organics, inside a compact bench-top workstation. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. GPAW - massively parallel electronic structure calculations with Python-based software

    DEFF Research Database (Denmark)

    Enkovaara, Jussi; Romero, Nichols A.; Shende, Sameer

    2011-01-01

    ... popular choice. While dynamic, interpreted languages, such as Python, can increase the efficiency of the programmer, they cannot compete directly with the raw performance of compiled languages. However, by using an interpreted language together with a compiled language, it is possible to have most of the productivity enhancing features together with a good numerical performance. We have used this approach in implementing an electronic structure simulation software GPAW using the combination of Python and C programming languages. While the chosen approach works well in standard workstations and Unix environments, massively parallel supercomputing systems can present some challenges in porting, debugging and profiling the software. In this paper we describe some details of the implementation and discuss the advantages and challenges of the combined Python/C approach. We show that despite the challenges ...
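
    A minimal illustration of the Python-plus-compiled-code idea (not GPAW code): the control flow stays in Python while a numerical kernel comes from a compiled C library, here the C math library's erf loaded through ctypes on a typical Unix system:

        # Hedged sketch: call a compiled C routine from Python via ctypes.
        import ctypes
        import ctypes.util

        libm_path = ctypes.util.find_library("m")       # e.g. "libm.so.6" on Linux
        libm = ctypes.CDLL(libm_path) if libm_path else ctypes.CDLL(None)
        libm.erf.restype = ctypes.c_double
        libm.erf.argtypes = [ctypes.c_double]

        print(libm.erf(1.0))    # ~0.8427, computed by compiled C code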

  2. A parallel 3D particle-in-cell code with dynamic load balancing

    International Nuclear Information System (INIS)

    Wolfheimer, Felix; Gjonaj, Erion; Weiland, Thomas

    2006-01-01

    A parallel 3D electrostatic Particle-In-Cell (PIC) code including an algorithm for modelling Space Charge Limited (SCL) emission [E. Gjonaj, T. Weiland, 3D-modeling of space-charge-limited electron emission. A charge conserving algorithm, Proceedings of the 11th Biennial IEEE Conference on Electromagnetic Field Computation, 2004] is presented. A domain decomposition technique based on orthogonal recursive bisection is used to parallelize the computation on a distributed memory environment of clustered workstations. For problems with a highly nonuniform and time dependent distribution of particles, e.g., bunch dynamics, a dynamic load balancing between the processes is needed to preserve the parallel performance. The algorithm for the detection of a load imbalance and the redistribution of the tasks among the processes is based on a weight function criterion, where the weight of a cell measures the computational load associated with it. The algorithm is studied with two examples. In the first example, multiple electron bunches as occurring in the S-DALINAC [A. Richter, Operational experience at the S-DALINAC, Proceedings of the Fifth European Particle Accelerator Conference, 1996] accelerator are simulated in the absence of space charge fields. In the second example, the SCL emission and electron trajectories in an electron gun are simulated
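
    A one-dimensional toy analogue of the weight-based decomposition described above (cell counts and weights are invented): contiguous cell ranges are recursively bisected so that the summed per-cell weights, a proxy for computational load, come out roughly equal across processes.

        # Toy 1-D recursive bisection by cell weight.
        import numpy as np

        def bisect(cells, weights, n_parts):
            if n_parts == 1:
                return [cells]
            left_parts = n_parts // 2
            target = weights[cells].sum() * left_parts / n_parts
            cut = int(np.searchsorted(np.cumsum(weights[cells]), target)) + 1
            return (bisect(cells[:cut], weights, left_parts) +
                    bisect(cells[cut:], weights, n_parts - left_parts))

        # A "bunch" of heavily loaded cells sits at one end of the domain.
        weights = np.concatenate([np.full(80, 1.0), np.full(20, 20.0)])
        parts = bisect(np.arange(weights.size), weights, 4)
        print([float(weights[p].sum()) for p in parts])   # loads come out balanced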

  3. A parallel 3D particle-in-cell code with dynamic load balancing

    Energy Technology Data Exchange (ETDEWEB)

    Wolfheimer, Felix [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany)]. E-mail: wolfheimer@temf.de; Gjonaj, Erion [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany); Weiland, Thomas [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany)

    2006-03-01

    A parallel 3D electrostatic Particle-In-Cell (PIC) code including an algorithm for modelling Space Charge Limited (SCL) emission [E. Gjonaj, T. Weiland, 3D-modeling of space-charge-limited electron emission. A charge conserving algorithm, Proceedings of the 11th Biennial IEEE Conference on Electromagnetic Field Computation, 2004] is presented. A domain decomposition technique based on orthogonal recursive bisection is used to parallelize the computation on a distributed memory environment of clustered workstations. For problems with a highly nonuniform and time dependent distribution of particles, e.g., bunch dynamics, a dynamic load balancing between the processes is needed to preserve the parallel performance. The algorithm for the detection of a load imbalance and the redistribution of the tasks among the processes is based on a weight function criterion, where the weight of a cell measures the computational load associated with it. The algorithm is studied with two examples. In the first example, multiple electron bunches as occurring in the S-DALINAC [A. Richter, Operational experience at the S-DALINAC, Proceedings of the Fifth European Particle Accelerator Conference, 1996] accelerator are simulated in the absence of space charge fields. In the second example, the SCL emission and electron trajectories in an electron gun are simulated.

  4. Stand by Me: Qualitative Insights into the Ease of Use of Adjustable Workstations.

    Science.gov (United States)

    Leavy, Justine; Jancey, Jonine

    2016-01-01

    Office workers sit for more than 80% of the work day, making them an important target for work site health promotion interventions to break up prolonged sitting time. Adjustable workstations are one strategy used to reduce prolonged sitting time. This study provides both an employees' and employers' perspective on the advantages, disadvantages, practicality and convenience of adjustable workstations and how movement in the office can be further supported by organisations. This qualitative study was part of the Uprising pilot study. Employees were from the intervention arm of a two group (intervention n = 18 and control n = 18) study. Employers were the immediate line-manager of the employee. Data were collected via employee focus groups (n = 17) and employer individual interviews (n = 12). The majority of participants were female (n = 18), had a healthy weight, and had a post-graduate qualification. All focus group discussions and interviews were recorded, transcribed verbatim and the data coded according to the content. Qualitative content analysis was conducted. Employee data identified four concepts: enhanced general wellbeing; workability and practicality; disadvantages of the retro-fit; and triggers to stand. Most employees (n = 12) reported enhanced general well-being; workability and practicality included less email exchange and positive interaction (n = 5), while the instability of the keyboard was a commonly cited disadvantage. Triggers to stand included time and task based prompts. Employer data concepts included: general health and wellbeing; work engagement; flexibility; employee morale; and injury prevention. Over half of the employers (n = 7) emphasised back care and occupational health considerations as important, as well as increased levels of staff engagement and strategies to break up prolonged periods of sitting. The focus groups highlight the perceived general health benefits from this short intervention, including the opportunity to sit less and interact

  5. Stand by Me: Qualitative Insights into the Ease of Use of Adjustable Workstations

    Directory of Open Access Journals (Sweden)

    Jonine Jancey

    2016-08-01

    Full Text Available Background: Office workers sit for more than 80% of the work day, making them an important target for work site health promotion interventions to break up prolonged sitting time. Adjustable workstations are one strategy used to reduce prolonged sitting time. This study provides both an employees’ and employers’ perspective on the advantages, disadvantages, practicality and convenience of adjustable workstations and how movement in the office can be further supported by organisations. This qualitative study was part of the Uprising pilot study. Employees were from the intervention arm of a two group (intervention n = 18 and control n = 18) study. Employers were the immediate line-manager of the employee. Data were collected via employee focus groups (n = 17) and employer individual interviews (n = 12). The majority of participants were female (n = 18), had a healthy weight, and had a post-graduate qualification. All focus group discussions and interviews were recorded, transcribed verbatim and the data coded according to the content. Qualitative content analysis was conducted. Results: Employee data identified four concepts: enhanced general wellbeing; workability and practicality; disadvantages of the retro-fit; and triggers to stand. Most employees (n = 12) reported enhanced general well-being; workability and practicality included less email exchange and positive interaction (n = 5), while the instability of the keyboard was a commonly cited disadvantage. Triggers to stand included time and task based prompts. Employer data concepts included: general health and wellbeing; work engagement; flexibility; employee morale; and injury prevention. Over half of the employers (n = 7) emphasised back care and occupational health considerations as important, as well as increased levels of staff engagement and strategies to break up prolonged periods of sitting. Discussion: The focus groups highlight the perceived general health benefits from this short

  6. A standardized non-instrumental tool for characterizing workstations concerned with exposure to engineered nanomaterials

    Science.gov (United States)

    Canu I, Guseva; C, Ducros; S, Ducamp; L, Delabre; S, Audignon-Durand; C, Durand; Y, Iwatsubo; D, Jezewski-Serra; Bihan O, Le; S, Malard; A, Radauceanu; M, Reynier; M, Ricaud; O, Witschger

    2015-05-01

    The French national epidemiological surveillance program EpiNano aims at surveying mid- and long-term health effects possibly related with occupational exposure to either carbon nanotubes or titanium dioxide nanoparticles (TiO2). EpiNano is limited to workers potentially exposed to these nanomaterials including their aggregates and agglomerates. In order to identify those workers during the in-field industrial hygiene visits, a standardized non-instrumental method is necessary especially for epidemiologists and occupational physicians unfamiliar with nanoparticle and nanomaterial exposure metrology. A working group, Quintet ExpoNano, including national experts in nanomaterial metrology and occupational hygiene reviewed available methods, resources and their practice in order to develop a standardized tool for conducting company industrial hygiene visits and collecting necessary information. This tool, entitled “Onsite technical logbook”, includes 3 parts: company, workplace, and workstation allowing a detailed description of each task, process and exposure surrounding conditions. This logbook is intended to be completed during the company industrial hygiene visit. Each visit is conducted jointly by an industrial hygienist and an epidemiologist of the program and lasts one or two days depending on the company size. When all collected information is computerized using friendly-using software, it is possible to classify workstations with respect to their potential direct and/or indirect exposure. Workers appointed to workstations classified as concerned with exposure are considered as eligible for EpiNano program and invited to participate. Since January 2014, the Onsite technical logbook has been used in ten company visits. The companies visited were mostly involved in research and development. A total of 53 workstations with potential exposure to nanomaterials were pre-selected and observed: 5 with TiO2, 16 with single-walled carbon nanotubes, 27 multiwalled

  7. A standardized non-instrumental tool for characterizing workstations concerned with exposure to engineered nanomaterials

    International Nuclear Information System (INIS)

    Guseva Canu, I.; Ducamp, S.; Delabre, L.; Iwatsubo, Y.; Jezewski-Serra, D.; Ducros, C.; Audignon-Durand, S.; Durand, C.; Le Bihan, O.; Malard, S.; Radauceanu, A.; Reynier, M.; Ricaud, M.; Witschger, O.

    2015-01-01

    The French national epidemiological surveillance program EpiNano aims at surveying mid- and long-term health effects possibly related with occupational exposure to either carbon nanotubes or titanium dioxide nanoparticles (TiO 2 ). EpiNano is limited to workers potentially exposed to these nanomaterials including their aggregates and agglomerates. In order to identify those workers during the in-field industrial hygiene visits, a standardized non-instrumental method is necessary especially for epidemiologists and occupational physicians unfamiliar with nanoparticle and nanomaterial exposure metrology. A working group, Quintet ExpoNano, including national experts in nanomaterial metrology and occupational hygiene reviewed available methods, resources and their practice in order to develop a standardized tool for conducting company industrial hygiene visits and collecting necessary information. This tool, entitled “Onsite technical logbook”, includes 3 parts: company, workplace, and workstation allowing a detailed description of each task, process and exposure surrounding conditions. This logbook is intended to be completed during the company industrial hygiene visit. Each visit is conducted jointly by an industrial hygienist and an epidemiologist of the program and lasts one or two days depending on the company size. When all collected information is computerized using friendly-using software, it is possible to classify workstations with respect to their potential direct and/or indirect exposure. Workers appointed to workstations classified as concerned with exposure are considered as eligible for EpiNano program and invited to participate. Since January 2014, the Onsite technical logbook has been used in ten company visits. The companies visited were mostly involved in research and development. A total of 53 workstations with potential exposure to nanomaterials were pre-selected and observed: 5 with TiO 2 , 16 with single-walled carbon nanotubes, 27 multiwalled

  8. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    Energy Technology Data Exchange (ETDEWEB)

    Kostin, Mikhail [Michigan State Univ., East Lansing, MI (United States); Mokhov, Nikolai [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Niita, Koji [Research Organization for Information Science and Technology, Ibaraki-ken (Japan)

    2013-09-25

    A parallel computing framework has been developed to use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran77, Fortran 90 or C. The module is significantly independent of radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes such as PHITS, FLUKA and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations with a saved checkpoint file. The checkpoint facility can be used in single process calculations as well as in the parallel regime. The framework corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where the interference from the other users is possible.
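
    The scheduling and load-balancing idea described in this record (a master rank handing work units to worker ranks on demand, with a checkpoint file enabling restarts) can be sketched in a few lines. The sketch below uses mpi4py purely for illustration rather than the C++/MPI module of the framework itself; the batch function, batch count and checkpoint file name are assumptions, not details from the abstract.

      # Minimal sketch of a dynamic master/worker scheduler with checkpointing.
      # Assumes mpi4py; run e.g. with: mpirun -n 4 python scheduler_sketch.py
      # (script name, run_batch and checkpoint.pkl are illustrative only).
      import os
      import pickle
      from mpi4py import MPI

      def run_batch(batch_id, n_histories=10000):
          # Stand-in for one radiation-transport batch; returns a fake tally.
          return batch_id * 0.001 * n_histories

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      N_BATCHES, CKPT = 64, "checkpoint.pkl"

      if rank == 0:
          # Master: restore finished work from a checkpoint if one exists.
          done = pickle.load(open(CKPT, "rb")) if os.path.exists(CKPT) else {}
          todo = [b for b in range(N_BATCHES) if b not in done]
          status = MPI.Status()
          active = 0
          # Seed each worker with one batch, then hand out the rest on demand.
          for w in range(1, size):
              if todo:
                  comm.send(todo.pop(), dest=w); active += 1
              else:
                  comm.send(None, dest=w)
          while active:
              batch_id, tally = comm.recv(source=MPI.ANY_SOURCE, status=status)
              done[batch_id] = tally
              pickle.dump(done, open(CKPT, "wb"))   # checkpoint after each result
              if todo:
                  comm.send(todo.pop(), dest=status.Get_source())
              else:
                  comm.send(None, dest=status.Get_source()); active -= 1
          print("total tally:", sum(done.values()))
      else:
          # Worker: keep requesting batches until the master sends None.
          batch_id = comm.recv(source=0)
          while batch_id is not None:
              comm.send((batch_id, run_batch(batch_id)), dest=0)
              batch_id = comm.recv(source=0)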

  9. Videoconferencing using workstations in the ATLAS collaboration

    International Nuclear Information System (INIS)

    Onions, C.; Blokzijl, K. Bos

    1994-01-01

    The ATLAS collaboration consists of about 1000 physicists from close to 100 institutes around the world. This number is expected to grow over the coming years. The authors realized that they needed to do something to allow people to participate in meetings held at CERN without having to travel and hence they started a pilot project in July, 1993 to look into this. Colleagues from Nikhef already had experience of international network meetings (e.g. RIPE) using standard UNIX workstations and public domain software tools using the MBONE, hence they investigated this as a first priority

  10. Visual observation of digitalised signals by workstations

    International Nuclear Information System (INIS)

    Navratil, J.; Akiyama, A.; Mimashi, T.

    1994-01-01

    The idea to have on-line information about the behavior of betatron tune, as a first step to the future automatic control of TRISTAN accelerator tune, appeared near the end of 1991. At the same time, other suggestions concerning a rejuvenation of the existing Control System arose and therefore the newly created project ''System for monitoring betatron tune'' (SMBT) started with several goals: - to obtain new on-line information about the beam behavior during the acceleration time, - to test the way of possible extension and replacement of the existing control system of TRISTAN, - to get experience with the workstation and XWindow software

  11. Computer modeling and design of diagnostic workstations and radiology reading rooms

    Science.gov (United States)

    Ratib, Osman M.; Amato, Carlos L.; Balbona, Joseph A.; Boots, Kevin; Valentino, Daniel J.

    2000-05-01

    We used 3D modeling techniques to design and evaluate the ergonomics of diagnostic workstations and the radiology reading room in the planning phase of building a new hospital at UCLA. Given serious space limitations, the challenge was to provide a more optimal working environment for radiologists in a crowded and busy setting. Particular attention was given to flexibility, lighting conditions and noise reduction in rooms shared by multiple users performing diagnostic tasks as well as regular clinical conferences. Re-engineering workspace ergonomics relied on the integration of new technologies, custom-designed cabinets, indirect lighting, sound-absorbent partitioning and geometric arrangement of workstations to allow better privacy while optimizing space occupation. Innovations included adjustable flat monitors, integration of videoconferencing and voice recognition, a control monitor and a retractable keyboard for optimal space utilization. An overhead compartment protecting the monitors from ambient light is also used as an accessory lightbox and rear-view projection screen for conferences.

  12. Intraoperative non-record-keeping usage of anesthesia information management system workstations and associated hemodynamic variability and aberrancies.

    Science.gov (United States)

    Wax, David B; Lin, Hung-Mo; Reich, David L

    2012-12-01

    Anesthesia information management system workstations in the anesthesia workspace that allow usage of non-record-keeping applications could lead to distraction from patient care. We evaluated whether non-record-keeping usage of the computer workstation was associated with hemodynamic variability and aberrancies. Auditing data were collected on eight anesthesia information management system workstations and linked to their corresponding electronic anesthesia records to identify which application was active at any given time during the case. For each case, the periods spent using the anesthesia information management system record-keeping module were separated from those spent using non-record-keeping applications. The variability of heart rate and blood pressure were also calculated, as were the incidence of hypotension, hypertension, and tachycardia. Analysis was performed to identify whether non-record-keeping activity was a significant predictor of these hemodynamic outcomes. Data were analyzed for 1,061 cases performed by 171 clinicians. Median (interquartile range) non-record-keeping activity time was 14 (1, 38) min, representing 16 (3, 33)% of a median 80 (39, 143) min of procedure time. Variables associated with greater non-record-keeping activity included attending anesthesiologists working unassisted, longer case duration, lower American Society of Anesthesiologists status, and general anesthesia. Overall, there was no independent association between non-record-keeping workstation use and hemodynamic variability or aberrancies during anesthesia either between cases or within cases. Anesthesia providers spent sizable portions of case time performing non-record-keeping applications on anesthesia information management system workstations. This use, however, was not independently associated with greater hemodynamic variability or aberrancies in patients during maintenance of general anesthesia for predominantly general surgical and gynecologic procedures.

  13. Field analysis: approach to the design of teleoperator workstation

    International Nuclear Information System (INIS)

    Saint-Jean, T.; Lescoat, D.A.

    1986-04-01

    Following a brief review of the theoretical scope, this paper characterizes a methodology for the design of teleoperation workstations. This methodology is illustrated by an example - field analysis of a telemanipulation task in a hot cell. Practical information is given: an operating strategy that differs from the written procedure, team work organization, and different skills. Recommendations are suggested as regards the writing of procedures, the training of personnel and the work organisation

  14. Generalization of Posture Training to Computer Workstations in an Applied Setting

    Science.gov (United States)

    Sigurdsson, Sigurdur O.; Ring, Brandon M.; Needham, Mick; Boscoe, James H.; Silverman, Kenneth

    2011-01-01

    Improving employees' posture may decrease the risk of musculoskeletal disorders. The current paper is a systematic replication and extension of Sigurdsson and Austin (2008), who found that an intervention consisting of information, real-time feedback, and self-monitoring improved participant posture at mock workstations. In the current study,…

  15. Design and Development of an Integrated Workstation Automation Hub

    Energy Technology Data Exchange (ETDEWEB)

    Weber, Andrew; Ghatikar, Girish; Sartor, Dale; Lanzisera, Steven

    2015-03-30

    Miscellaneous Electronic Loads (MELs) account for one third of all electricity consumption in U.S. commercial buildings, and are drivers for a significant energy use in India. Many of the MEL-specific plug-load devices are concentrated at workstations in offices. The use of intelligence, and integrated controls and communications at the workstation for an Office Automation Hub – offers the opportunity to improve both energy efficiency and occupant comfort, along with services for Smart Grid operations. Software and hardware solutions are available from a wide array of vendors for the different components, but an integrated system with interoperable communications is yet to be developed and deployed. In this study, we propose system- and component-level specifications for the Office Automation Hub, their functions, and a prioritized list for the design of a proof-of-concept system. Leveraging the strength of both the U.S. and India technology sectors, this specification serves as a guide for researchers and industry in both countries to support the development, testing, and evaluation of a prototype product. Further evaluation of such integrated technologies for performance and cost is necessary to identify the potential to reduce energy consumptions in MELs and to improve occupant comfort.

  16. Predicting cycle time distributions for integrated processing workstations : an aggregate modeling approach

    NARCIS (Netherlands)

    Veeger, C.P.L.; Etman, L.F.P.; Lefeber, A.A.J.; Adan, I.J.B.F.; Herk, van J.; Rooda, J.E.

    2011-01-01

    To predict cycle time distributions of integrated processing workstations, detailed simulation models are almost exclusively used; these models require considerable development and maintenance effort. As an alternative, we propose an aggregate model that is a lumped-parameter representation of the

  17. Efficient Incremental Garbage Collection for Workstation/Server Database Systems

    OpenAIRE

    Amsaleg , Laurent; Gruber , Olivier; Franklin , Michael

    1994-01-01

    Projet RODIN; We describe an efficient server-based algorithm for garbage collecting object-oriented databases in a workstation/server environment. The algorithm is incremental and runs concurrently with client transactions, however, it does not hold any locks on data and does not require callbacks to clients. It is fault tolerant, but performs very little logging. The algorithm has been designed to be integrated into existing OODB systems, and therefore it works with standard implementation ...

  18. Integrated model for line balancing with workstation inventory management

    OpenAIRE

    Dilip Roy; Debdip Khan

    2010-01-01

    In this paper, we address the optimization of an integrated line balancing process with workstation inventory management. While doing so, we study the interconnection between line balancing and its conversion process. Almost every moderate to large manufacturing industry depends on a long and integrated supply chain, consisting of inbound logistics, a conversion process and outbound logistics. In this sense, our approach addresses a very general problem of integrated line balancing....

  19. A Combined MPI-CUDA Parallel Solution of Linear and Nonlinear Poisson-Boltzmann Equation

    Directory of Open Access Journals (Sweden)

    José Colmenares

    2014-01-01

    Full Text Available The Poisson-Boltzmann equation models the electrostatic potential generated by fixed charges on a polarizable solute immersed in an ionic solution. This approach is often used in computational structural biology to estimate the electrostatic energetic component of the assembly of molecular biological systems. In the last decades, the amount of data concerning proteins and other biological macromolecules has remarkably increased. To fruitfully exploit these data, a huge computational power is needed as well as software tools capable of exploiting it. It is therefore necessary to move towards high performance computing and to develop proper parallel implementations of already existing and of novel algorithms. Nowadays, workstations can provide an amazing computational power: up to 10 TFLOPS on a single machine equipped with multiple CPUs and accelerators such as Intel Xeon Phi or GPU devices. The actual obstacle to the full exploitation of modern heterogeneous resources is efficient parallel coding and porting of software on such architectures. In this paper, we propose the implementation of a full Poisson-Boltzmann solver based on a finite-difference scheme using different and combined parallel schemes and in particular a mixed MPI-CUDA implementation. Results show great speedups when using the two schemes, achieving an 18.9x speedup using three GPUs.
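
    To make the finite-difference scheme mentioned in this record concrete, the toy below runs a sequential Jacobi relaxation of a linearized Poisson-Boltzmann-type equation on a small uniform 3-D grid. It is a minimal NumPy sketch under assumed constants and boundary conditions, not the authors' mixed MPI-CUDA solver.

      # Toy Jacobi relaxation for a linearized Poisson-Boltzmann-type equation
      #   laplacian(phi) - kappa^2 * phi = -rho / eps
      # on a uniform 3-D grid with Dirichlet phi = 0 boundaries.
      # Grid size, spacing and constants are arbitrary assumptions.
      import numpy as np

      n, h, kappa, eps = 48, 1.0, 0.1, 80.0
      phi = np.zeros((n, n, n))
      rho = np.zeros((n, n, n))
      rho[n // 2, n // 2, n // 2] = 1.0          # single point charge in the box

      for it in range(500):
          nb = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
                np.roll(phi, 1, 2) + np.roll(phi, -1, 2))
          new = (nb + h * h * rho / eps) / (6.0 + kappa * kappa * h * h)
          new[0, :, :] = new[-1, :, :] = 0.0      # crude Dirichlet boundaries
          new[:, 0, :] = new[:, -1, :] = 0.0
          new[:, :, 0] = new[:, :, -1] = 0.0
          if np.max(np.abs(new - phi)) < 1e-8:
              phi = new
              break
          phi = new

      print("potential at the charge:", phi[n // 2, n // 2, n // 2])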

  20. Image sequence analysis workstation for multipoint motion analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze motion of objects from video sequences. The system combines the software and hardware environment of a modern graphic-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increase in throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters is used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement data base management; and 7) offline analysis software for trajectory plotting and statistical analysis.
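
    A greatly simplified version of the grey-level centroid tracking described above can be expressed as: threshold each frame inside a search window around the last known position, then take the intensity-weighted mean of the pixel coordinates. The NumPy sketch below illustrates only that step; the window size and threshold are arbitrary assumptions.

      # Simplified grey-level centroid tracker over an image sequence.
      # frames: iterable of 2-D uint8 arrays; start: (row, col) initial guess.
      import numpy as np

      def track_centroid(frames, start, win=15, thresh=128):
          r, c = start
          track = []
          for frame in frames:
              # Clip a search window around the last known position.
              r0, r1 = max(r - win, 0), min(r + win, frame.shape[0])
              c0, c1 = max(c - win, 0), min(c + win, frame.shape[1])
              roi = frame[r0:r1, c0:c1].astype(float)
              roi[roi < thresh] = 0.0              # keep only bright pixels
              total = roi.sum()
              if total > 0:
                  rows, cols = np.indices(roi.shape)
                  r = int(round((rows * roi).sum() / total)) + r0
                  c = int(round((cols * roi).sum() / total)) + c0
              track.append((r, c))                 # hold position if nothing found
          return track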

  1. [From data entry to data presentation at a clinical workstation--experiences with Anesthesia Information Management Systems (AIMS)].

    Science.gov (United States)

    Benson, M; Junger, A; Quinzio, L; Michel, A; Sciuk, G; Fuchs, C; Marquardt, K; Hempelmann, G

    2000-09-01

    Anesthesia Information Management Systems (AIMS) are required to supply large amounts of data for various purposes such as performance recording, quality assurance, training, operating room management and research. It was our objective to establish an AIMS that enables every member of the department to independently access queries at his/her workstation and at the same time allows the presentation of data in a suitable manner in order to increase the transfer of different information to the clinical workstation. Apple Macintosh Clients (Apple Computer, Inc. Cupertino, California) and the file- and database servers were installed into the already partially existing hospital network. The most important components installed on each computer are the anesthesia documenting software NarkoData (ProLogic GmbH, Erkrath), HIS client software and an HTML browser. More than 250 queries for easy evaluation were formulated with the software Voyant (Brossco Systems, Espoo, Finland). Together with the documentation they form the evaluation module of the AIMS. Today, more than 20,000 anesthesia procedures are recorded each year at 112 decentralised workstations with the AIMS. In 1998, 90.8% of the 20,383 performed anesthetic procedures were recorded online and 9.2% entered postoperatively into the system. With corresponding user access it is possible to receive all available patient data at each single anesthesiological workstation via HIS (diagnoses, laboratory results) anytime. The available information includes previous anesthesia records, statistics and all data available from the hospital's intranet. This additional information is of great advantage in comparison to previous working conditions. The implementation of an AIMS made it possible to enhance not only the rate but also the quality of documentation and increased the flow of information at the anesthesia workstation. The circuit between data entry and the presentation and evaluation of data, statistics and results directly

  2. Jet formation and equatorial superrotation in Jupiter's atmosphere: Numerical modelling using a new efficient parallel code

    Science.gov (United States)

    Rivier, Leonard Gilles

    Using an efficient parallel code solving the primitive equations of atmospheric dynamics, the jet structure of a Jupiter-like atmosphere is modeled. In the first part of this thesis, a parallel spectral code solving both the shallow water equations and the multi-level primitive equations of atmospheric dynamics is built. This code, called BOB, is implemented so that it runs efficiently on an inexpensive cluster of workstations. A one-dimensional decomposition and transposition method ensuring load balancing among processes is used. The Legendre transform is cache-blocked. Computing the Legendre polynomials used in the spectral method on the fly produces a lower memory footprint and enables high-resolution runs on machines with relatively small memory. Performance studies are done using a cluster of workstations located at the National Center for Atmospheric Research (NCAR). BOB's performance is compared to the parallel benchmark code PSTSWM and the dynamical core of NCAR's CCM3.6.6. In both cases, the comparison favors BOB. In the second part of this thesis, the primitive equation version of the code described in part I is used to study the formation of organized zonal jets and equatorial superrotation in a planetary atmosphere where the parameters are chosen to best model the upper atmosphere of Jupiter. Two levels are used in the vertical and only large-scale forcing is present. The model is forced towards a baroclinically unstable flow, so that eddies are generated by baroclinic instability. We consider several types of forcing, acting on either the temperature or the momentum field. We show that only under very specific parametric conditions do zonally elongated structures form and persist, resembling the jet structure observed near the cloud-top level (1 bar) on Jupiter. We also study the effect of an equatorial heat source, meant to be a crude representation of the effect of the deep convective planetary interior onto the outer atmospheric layer. We

  3. Analysis on the influence of supply method on a workstation with the help of dynamic simulation

    Directory of Open Access Journals (Sweden)

    Gavriluță Alin

    2017-01-01

    Full Text Available Considering the need for flexibility in any manufacturing process, the choice of the supply method for an assembly workstation can be a decision with a direct influence on its performance. Using dynamic simulation, this article compares the effect of three different supply methods on a workstation's cycle time: supply from stock, supply in the “Strike Zone” and synchronous supply. This study is part of an extended work that aims to compare, by 3D layout design and dynamic simulation, the effect of different supply methods on an assembly line's performance.

  4. Issues about home computer workstations and primary school children in Hong Kong: a pilot study.

    Science.gov (United States)

    Py Szeto, Grace; Tsui, Macy Mei Sze; Sze, Winky Wing Yu; Chan, Irene Sin Ting; Chung, Cyrus Chak Fai; Lee, Felix Wai Kit

    2014-01-01

    All around the world, there is a rising trend of computer use among young children especially at home; yet the computer furniture is usually not designed specifically for children's use. In Hong Kong, this creates an even greater problem as most people live in very small apartments in high-rise buildings. Most of the past research literature is focused on computer use in children in the school environment and not about the home setting. The present pilot study aimed to examine ergonomic issues in children's use of computers at home in Hong Kong, which has some unique home environmental issues. Fifteen children (six male, nine female) aged 8-11 years and their parents were recruited by convenience sampling. Participants were asked to provide information on their computer use habits and related musculoskeletal symptoms. Participants were photographed when sitting at the computer workstation in their usual postures and joint angles were measured. The participants used computers frequently for less than two hours daily and the majority shared their workstations with other family members. Computer furniture was designed more for adult use and a mismatch of furniture and body size was found. Ergonomic issues included inappropriate positioning of the display screen, keyboard, and mouse, as well as lack of forearm support and suitable backrest. These led to awkward or constrained postures while some postural problems may be habitual. Three participants reported neck and shoulder discomfort in the past 12 months and 4 reported computer-related discomfort. Inappropriate computer workstation settings may have adverse effects on children's postures. More research on workstation setup at home, where children may use their computers the most, is needed.

  5. Thermal load at workstations in the underground coal mining: Results of research carried out in 6 coal mines

    Directory of Open Access Journals (Sweden)

    Krzysztof Słota

    2016-08-01

    Full Text Available Background: Statistics show that almost half of the output of Polish underground mines is extracted at workstations where the temperature exceeds 28°C. The number of employees working in such conditions is gradually increasing; therefore, the problem of safety and health protection continues to grow. Material and Methods: In the present study we assessed the heat load of employees at different workstations in the mining industry, taking into account current thermal conditions and the energy cost of work. The evaluation of the energy cost of work was carried out in 6 coal mines. A total of 221 miners employed at different workstations were assessed. Individual groups of miners were characterized and the thermal safety of the miners was assessed using the thermal discomfort index. Results: The results of this study indicate considerable differences in the durations of the analyzed work processes at individual workstations. The highest average energy cost was noted during work performed at the working face. The lowest value was found for the auxiliary staff. The calculated discomfort index clearly indicated numerous situations in which the thermal load exceeded the range safe for human health. It should be noted that the values of average labor cost fall within the upper, albeit admissible, limits of thermal load. Conclusions: The results of the study indicate that in some cases work in mining is performed in conditions of thermal discomfort. Due to the high variability and complexity of work conditions it becomes necessary to verify the workers’ load at different workstations, which largely depends on the environmental conditions and work organization, as well as on the performance of the workers themselves. Med Pr 2016;67(4):477–498

  6. Applying human factors to the design of control centre and workstation of a nuclear reactor

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Isaac J.A. Luquetti dos; Carvalho, Paulo V.R.; Goncalves, Gabriel de L., E-mail: luquetti@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Souza, Tamara D.M.F.; Falcao, Mariana A. [Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ (Brazil). Dept. de Desenho Industrial

    2013-07-01

    Human factors is a body of scientific factors about human characteristics, covering biomedical, psychological and psychosocial considerations, including principles and applications in the personnel selection areas, training, job performance aid tools and human performance evaluation. Control Centre is a combination of control rooms, control suites and local control stations which are functionally related and all on the same site. Digital control room includes an arrangement of systems, equipment such as computers and communication terminals and workstations at which control and monitoring functions are conducted by operators. Inadequate integration between control room and operators reduces safety, increases the operation complexity, complicates operator training and increases the likelihood of human errors occurrence. The objective of this paper is to present a specific approach for the conceptual and basic design of the control centre and workstation of a nuclear reactor used to produce radioisotope. The approach is based on human factors standards, guidelines and the participation of a multidisciplinary team in the conceptual and basic phases of the design. Using the information gathered from standards and from the multidisciplinary team, an initial sketch 3D of the control centre and workstation are being developed. (author)

  7. Applying human factors to the design of control centre and workstation of a nuclear reactor

    International Nuclear Information System (INIS)

    Santos, Isaac J.A. Luquetti dos; Carvalho, Paulo V.R.; Goncalves, Gabriel de L.; Souza, Tamara D.M.F.; Falcao, Mariana A.

    2013-01-01

    Human factors is a body of scientific factors about human characteristics, covering biomedical, psychological and psychosocial considerations, including principles and applications in the personnel selection areas, training, job performance aid tools and human performance evaluation. Control Centre is a combination of control rooms, control suites and local control stations which are functionally related and all on the same site. Digital control room includes an arrangement of systems, equipment such as computers and communication terminals and workstations at which control and monitoring functions are conducted by operators. Inadequate integration between control room and operators reduces safety, increases the operation complexity, complicates operator training and increases the likelihood of human errors occurrence. The objective of this paper is to present a specific approach for the conceptual and basic design of the control centre and workstation of a nuclear reactor used to produce radioisotope. The approach is based on human factors standards, guidelines and the participation of a multidisciplinary team in the conceptual and basic phases of the design. Using the information gathered from standards and from the multidisciplinary team, an initial sketch 3D of the control centre and workstation are being developed. (author)

  8. Montecarlo Simulations for a Lep Experiment with Unix Workstation Clusters

    Science.gov (United States)

    Bonesini, M.; Calegari, A.; Rossi, P.; Rossi, V.

    Modular systems of RISC-CPU-based computers have been implemented for large productions of Monte Carlo simulated events for the DELPHI experiment at CERN. Starting from a pilot system based on DEC 5000 CPUs, a full-size system based on a CONVEX C3820 UNIX supercomputer and a cluster of HP 735 workstations has been put into operation as a joint effort between INFN Milano and CILEA.

  9. Fast Calibration of Industrial Mobile Robots to Workstations using QR Codes

    DEFF Research Database (Denmark)

    Andersen, Rasmus Skovgaard; Damgaard, Jens Skov; Madsen, Ole

    2013-01-01

    is proposed. With this QR calibration, it is possible to calibrate an AIMM to a workstation in 3D in less than 1 second, which is significantly faster than existing methods. The accuracy of the calibration is ±4 mm. The method is modular in the sense that it directly supports integration and calibration...

  10. Computed radiography and the workstation in a study of the cervical spine. Technical and cost implications

    International Nuclear Information System (INIS)

    Garcia, J. M.; Lopez-Galiacho, N.; Martinez, M.

    1999-01-01

    To demonstrate the advantages of computed radiography and the workstation in assessing the images acquired in a study of the cervical spine. Lateral projections of the cervical spine obtained using a computed radiography system in 63 ambulatory patients were studied in a workstation. Images of the tip of the odontoid process, C1-C2, basion-opisthion and C7 were visualized prior to and after their transmission and processing, and the overall improvement in their diagnostic quality was assessed. The rate of detection of the tip of the odontoid process, C1-C2, the foramen magnum and C7 increased by 17, 6, 11 and 14 percentage points, respectively. Image processing improved the diagnostic quality in over 75% of cases. Image processing in a workstation improved the visualization of the anatomical points being studied and the diagnostic quality of the images. These advantages as well as the possibility of transferring the images to a picture archiving and communication system (PACS) are convincing reasons for using digital radiography. (Author) 7 refs

  11. Algorithms for parallel flow solvers on message passing architectures

    Science.gov (United States)

    Vanderwijngaart, Rob F.

    1995-01-01

    The purpose of this project has been to identify and test suitable technologies for implementation of fluid flow solvers -- possibly coupled with structures and heat equation solvers -- on MIMD parallel computers. In the course of this investigation much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer. A coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning every processor receives responsibility for exactly one block of grid points instead of several. This necessitates fine-grain pipelined program execution in order to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique that has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for the concentration on improving the performance of pipeline methods is their applicability in other types of flow solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines. From these it can be determined what is the optimal first-processor retardation that leads to the shortest total completion time for the pipeline process. Theoretical predictions of pipeline performance with and without optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of uncareful grid partitioning in flow solvers that employ pipeline algorithms. If grid blocks at boundaries are not at least as large in the wall-normal direction as those

  12. Graphical user interface for a robotic workstation in a surgical environment.

    Science.gov (United States)

    Bielski, A; Lohmann, C P; Maier, M; Zapp, D; Nasseri, M A

    2016-08-01

    Surgery using a robotic system has proven to have significant potential but is still a highly challenging task for the surgeon. An eye surgery assistant has been developed to eliminate the problem of tremor caused by human motions endangering the outcome of ophthalmic surgery. In order to exploit the full potential of the robot and improve the workflow of the surgeon, providing the ability to change control parameters live in the system as well as the ability to connect additional ancillary systems is necessary. Additionally the surgeon should always be able to get an overview over the status of all systems with a quick glance. Therefore a workstation has been built. The contribution of this paper is the design and the implementation of an intuitive graphical user interface for this workstation. The interface has been designed with feedback from surgeons and technical staff in order to ensure its usability in a surgical environment. Furthermore, the system was designed with the intent of supporting additional systems with minimal additional effort.

  13. Functionalizing 2PP-fabricated microtools for optical manipulation on the BioPhotonics Workstation

    DEFF Research Database (Denmark)

    Matsuoka, Tomoyo; Nishi, Masayuki; Sakakura, Masaaki

    Functionalization of the structures fabricated by two-photon polymerization was achieved by coating them with sol-gel materials containing calcium indicators. The structures are expected to function as nano-sensors on the BioPhotonics Workstation....

  14. Development of an EVA systems cost model. Volume 2: Shuttle orbiter crew and equipment translation concepts and EVA workstation concept development and integration

    Science.gov (United States)

    1975-01-01

    EVA crewman/equipment translational concepts are developed for a shuttle orbiter payload application. Also considered are EVA workstation systems to meet orbiter and payload requirements for integration of workstations into candidate orbiter payload worksites.

  15. Embedding knowledge in a workstation

    Energy Technology Data Exchange (ETDEWEB)

    Barber, G

    1982-01-01

    This paper describes an approach to supporting work in the office. Using and extending ideas from the field of artificial intelligence (AI) it describes office work as a problem solving activity. A knowledge embedding language called OMEGA is used to embed knowledge of the organization into an office worker's workstation in order to support the office worker in his or her problem solving. A particular approach to reasoning about change and contradiction is discussed. This approach uses OMEGA's viewpoint mechanism. OMEGA's viewpoint mechanism is a general contradiction handling facility. Unlike other knowledge representation systems, when a contradiction is reached the reasons for the contradiction can be analyzed by the reduction mechanism without having to resort to a backtracking mechanism. The viewpoint mechanism is the heart of the problem solving support paradigm. This paradigm is an alternative to the classical view of problem solving in AI. Office workers are supported using the problem solving support paradigm. 16 references.

  16. Evaluation plan for a cardiological multi-media workstation (I4C project)

    NARCIS (Netherlands)

    Hofstede, J.W. van der; Quak, A.B.; Ginneken, A.M. van; Macfarlane, P.W.; Watson, J.; Hendriks, P.R.; Zeelenberg, C.

    1997-01-01

    The goal of the I4C project (Integration and Communication for the Continuity of Cardiac Care) is to build a multi-media workstation for cardiac care and to assess its impact in the clinical setting. This paper describes the technical evaluation plan for the prototype.

  17. Design considerations for a neuroradiologic picture archival and image processing workstation

    International Nuclear Information System (INIS)

    Fishbein, D.S.

    1986-01-01

    The design and implementation of a small scale image archival and processing workstation for use in the study of digitized neuroradiologic images is described. The system is designed to be easily interfaced to existing equipment (presently PET, NMR and CT), function independent of a central file server, and provide for a versatile image processing environment. (Auth.)

  18. Effect of immediate feedback training on observer performance on a digital radiology workstation

    International Nuclear Information System (INIS)

    Mc Neill, K.M.; Maloney, K.; Elam, E.A.; Hillman, B.J.; Witzke, D.B.

    1990-01-01

    This paper reports on testing the hypothesis that training radiologists on a digital workstation would affect their efficiency and subjective acceptance of radiologic interpretation based on images shown on a cathode ray tube (CRT). Using a digital radiology workstation, six faculty radiologists and four senior residents read seven groups of six images each. In each group, three images were ranked as easy and three were ranked as difficult. All images were abnormal posteroanterior chest radiographs. On display of each image, the observer was asked which findings were present. After the observer listed his or her findings, the experimenter listed any findings not mentioned and pointed out any incorrect findings. The time to finding was recorded for each image, along with the number of corrections and missed findings. A postexperiment questionnaire was given to obtain subjective responses from the observers

  19. Robotic, MEMS-based Multi Utility Sample Preparation Instrument for ISS Biological Workstation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project will develop a multi-functional, automated sample preparation instrument for biological wet-lab workstations on the ISS. The instrument is based on a...

  20. Networking issues---Lan and Wan needs---The impact of workstations

    International Nuclear Information System (INIS)

    Harvey, J.

    1990-01-01

    This review focuses on the use of networks in the LEP experiments at CERN. The role of the extended LAN at CERN is discussed in some detail, with particular emphasis on the impact the sudden growth in the use of workstations is having. The problem of network congestion is highlighted and possible evolution to FDDI mentioned. The status and use of the wide area connections are also reported

  1. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to quantum leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through MATLAB multiple licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  2. Monte Carlo dose calculation algorithm on a distributed system

    International Nuclear Information System (INIS)

    Chauvie, Stephane; Dominoni, Matteo; Marini, Piergiorgio; Stasi, Michele; Pia, Maria Grazia; Scielzo, Giuseppe

    2003-01-01

    The main goal of modern radiotherapy, such as 3D conformal radiotherapy and intensity-modulated radiotherapy, is to deliver a high dose to the target volume while sparing the surrounding healthy tissue. The accuracy of dose calculation in a treatment planning system is therefore a critical issue. Among the many algorithms developed over recent years, those based on Monte Carlo have proven to be very promising in terms of accuracy. The most severe obstacle to application in clinical practice is the long time necessary for calculations. We have studied a high-performance network of personal computers as a realistic alternative to high-cost dedicated parallel hardware, to be used routinely as an instrument for the evaluation of treatment plans. We set up a Beowulf cluster, configured with 4 nodes connected by a low-cost network, and installed the MC code Geant4 to describe our irradiation facility. The MC code, once parallelised, was run on the Beowulf cluster. The first run of the full simulation showed that the time required for calculation decreased linearly as the number of distributed processes increased. The good scalability trend allows both statistically significant accuracy and good time performance. The scalability of the Beowulf cluster system offers a new instrument for dose calculation that could be applied in clinical practice. It would be a good support particularly for highly challenging prescriptions that need good calculation accuracy in zones of high dose gradient and great inhomogeneities
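
    The reported near-linear scaling on a 4-node cluster is consistent with what Amdahl's law predicts for a workload whose serial fraction is small, as is typical of Monte Carlo dose calculations. The snippet below is generic speedup arithmetic with an assumed serial fraction, not data from the study.

      # Expected speedup versus process count under Amdahl's law:
      #   S(p) = 1 / (f_serial + (1 - f_serial) / p)
      # Monte Carlo dose calculations have a very small serial fraction,
      # which is why scaling stays close to linear on small clusters.
      def amdahl_speedup(p, f_serial):
          return 1.0 / (f_serial + (1.0 - f_serial) / p)

      for p in (1, 2, 4, 8, 16):
          print(p, round(amdahl_speedup(p, 0.02), 2))   # 2% serial part (assumed)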

  3. A study on optimal task decomposition of networked parallel computing using PVM

    International Nuclear Information System (INIS)

    Seong, Kwan Jae; Kim, Han Gyoo

    1998-01-01

    A numerical study is performed to investigate the effect of task decomposition on networked parallel processes using the Parallel Virtual Machine (PVM). In our study, a PVM program distributed over a network of workstations is used to solve a finite difference version of a one-dimensional heat equation, for which the natural choice of PVM programming structure is the master-slave paradigm, with the aim of finding an optimal configuration that results in the least computing time, including the communication overhead among machines. Given a set of PVM tasks comprising one master and five slave programs, it is found that there exists a pseudo-optimal number of machines, which does not necessarily coincide with the number of tasks, that yields the best performance when the network is under light usage. Increasing the number of machines beyond this optimal one does not improve computing performance, since the increase in communication overhead among the excess machines offsets the decrease in CPU time obtained by distributing the PVM tasks among them. However, when the network traffic is heavy, the results exhibit a more random characteristic that is explained by the random nature of data transfer time
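
    The decomposition being studied (a master splitting a one-dimensional finite-difference heat-equation grid among slave tasks that exchange boundary values) can be emulated serially without PVM. The NumPy sketch below partitions the rod into one chunk per slave and updates each chunk from its ghost cells; grid size, chunk count, diffusivity and time step are assumptions chosen only for illustration.

      # Serial emulation of a master/slave decomposition of the 1-D heat equation
      #   u_t = alpha * u_xx  (explicit finite differences, ends fixed at 0).
      import numpy as np

      nx, n_slaves, alpha, dx, dt = 100, 5, 1.0, 1.0, 0.4   # dt <= 0.5*dx^2/alpha
      u = np.zeros(nx)
      u[nx // 2] = 100.0                        # hot spot in the middle of the rod
      chunks = np.array_split(np.arange(nx), n_slaves)      # master's partition

      for step in range(200):
          new = u.copy()
          for idx in chunks:                    # each loop body = one slave's work
              lo, hi = idx[0], idx[-1]
              # Ghost cells: the values the master would ship to this slave.
              left = u[lo - 1] if lo > 0 else 0.0
              right = u[hi + 1] if hi < nx - 1 else 0.0
              padded = np.concatenate(([left], u[lo:hi + 1], [right]))
              new[lo:hi + 1] = padded[1:-1] + alpha * dt / dx**2 * (
                  padded[2:] - 2.0 * padded[1:-1] + padded[:-2])
          new[0] = new[-1] = 0.0                # keep the rod ends fixed
          u = new

      print("peak temperature after 200 steps:", round(u.max(), 3))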

  4. Temporal digital subtraction radiography with a personal computer digital workstation

    International Nuclear Information System (INIS)

    Kircos, L.; Holt, W.; Khademi, J.

    1990-01-01

    Techniques have been developed and implemented on a personal computer (PC)-based digital workstation to accomplish temporal digital subtraction radiography (TDSR). TDSR is useful in recording radiologic change over time. Thus, this technique is useful not only for monitoring chronic disease processes but also for monitoring the temporal course of interventional therapies. A PC-based digital workstation was developed on a PC386 platform with add-in hardware and software. Image acquisition, storage, and processing were accomplished using a 512 x 512 x 8- or 12-bit frame grabber. Software and hardware were developed to accomplish image orientation, registration, gray scale compensation, subtraction, and enhancement. Temporal radiographs of the jaws were made in a fixed and reproducible orientation between the x-ray source and image receptor, enabling TDSR. Temporal changes secondary to chronic periodontal disease, osseointegration of endosseous implants, and wound healing were demonstrated. Use of TDSR for chest imaging was also demonstrated, with identification of small, subtle focal masses that were not apparent with routine viewing. The large amount of radiologic information in images of the jaws and chest may obfuscate subtle changes that TDSR seems to identify. TDSR appears to be useful as a tool to record temporal and subtle changes in radiologic images
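
    Setting registration aside, the core TDSR steps listed in this record reduce to gray-scale compensation, subtraction and contrast enhancement. The NumPy sketch below shows one plausible pipeline; the mean/standard-deviation normalization and the display scaling are assumptions, not the authors' exact algorithm.

      # Minimal temporal digital subtraction sketch: match the follow-up image's
      # gray scale to the baseline, subtract, and re-center for display.
      import numpy as np

      def subtract_temporal(baseline, followup):
          b = baseline.astype(float)
          f = followup.astype(float)
          # Gray-scale compensation: match mean and standard deviation (assumed scheme).
          f_adj = (f - f.mean()) * (b.std() / (f.std() + 1e-9)) + b.mean()
          diff = f_adj - b                      # positive = gain, negative = loss
          # Map the signed difference into an 8-bit display range centered at 128.
          scale = 127.0 / max(np.abs(diff).max(), 1e-9)
          return np.clip(diff * scale + 128.0, 0, 255).astype(np.uint8)

      # Example: two noisy 512 x 512 frames with a small simulated change over time.
      rng = np.random.default_rng(0)
      base = rng.normal(100, 5, (512, 512))
      follow = base.copy()
      follow[200:220, 300:320] += 25            # subtle focal change
      print(subtract_temporal(base, follow).mean())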

  5. Biomek Cell Workstation: A Variable System for Automated Cell Cultivation.

    Science.gov (United States)

    Lehmann, R; Severitt, J C; Roddelkopf, T; Junginger, S; Thurow, K

    2016-06-01

    Automated cell cultivation is an important tool for simplifying routine laboratory work. Automated methods are independent of the skill level and daily constitution of laboratory staff, and offer constant quality and performance. The Biomek Cell Workstation was configured as a flexible and compatible system. The modified Biomek Cell Workstation enables the cultivation of adherent and suspension cells. Until now, no commercially available system has enabled the automated handling of both types of cells in one system. In particular, the automated cultivation of suspension cells in this form has not been published. The cell counts and viabilities were nonsignificantly decreased for cells cultivated in AutoFlasks with automated handling. Manual and automated bioscreening by the WST-1 assay showed a nonsignificantly lower proliferation of automatically disseminated cells, associated with a mostly lower standard error. The disseminated suspension cell lines showed differently pronounced proliferation in descending order, starting with Jurkat cells followed by SEM, Molt4, and RS4 cells, which had the lowest proliferation. In this respect, we successfully disseminated and screened suspension cells in an automated way. The automated cultivation and dissemination of a variety of suspension cells can replace the manual method. © 2015 Society for Laboratory Automation and Screening.

  6. Beam dynamics calculations and particle tracking using massively parallel processors

    International Nuclear Information System (INIS)

    Ryne, R.D.; Habib, S.

    1995-01-01

    During the past decade massively parallel processors (MPPs) have slowly gained acceptance within the scientific community. At present these machines typically contain a few hundred to one thousand off-the-shelf microprocessors and a total memory of up to 32 GBytes. The potential performance of these machines is illustrated by the fact that a month long job on a high end workstation might require only a few hours on an MPP. The acceptance of MPPs has been slow for a variety of reasons. For example, some algorithms are not easily parallelizable. Also, in the past these machines were difficult to program. But in recent years the development of Fortran-like languages such as CM Fortran and High Performance Fortran have made MPPs much easier to use. In the following we will describe how MPPs can be used for beam dynamics calculations and long term particle tracking

  7. GPU: the biggest key processor for AI and parallel processing

    Science.gov (United States)

    Baji, Toru

    2017-07-01

    Two types of processors exist in the market. One is the conventional CPU and the other is the Graphics Processing Unit (GPU). A typical CPU is composed of 1 to 8 cores, while a GPU has thousands of cores. The CPU is good for sequential processing, while the GPU is good at accelerating software with heavy parallel execution. GPUs were initially dedicated to 3D graphics. However, from 2006, when GPUs started to adopt general-purpose cores, it was noticed that this architecture could be used as a general-purpose massively parallel processor. NVIDIA developed a software framework, the Compute Unified Device Architecture (CUDA), that makes it possible to easily program the GPU for these applications. With CUDA, GPUs started to be used widely in workstations and supercomputers. Recently two key technologies are highlighted in the industry: Artificial Intelligence (AI) and autonomous driving cars. AI requires massively parallel operations to train many layers of neural networks. With a CPU alone, it was impossible to finish the training in a practical time. The latest multi-GPU system with the P100 makes it possible to finish the training in a few hours. For autonomous driving cars, TOPS-class performance is required to implement perception, localization and path-planning processing, and again an SoC with an integrated GPU will play a key role there. In this paper, the evolution of the GPU, which is one of the biggest commercial devices requiring state-of-the-art fabrication technology, will be introduced. An overview of key GPU-demanding applications like the ones described above will also be given.

  8. PARALLEL INTEGRATION ALGORITHM AND ITS USAGE FOR A PRACTICAL SIMULATION OF SPACECRAFT ATTITUDE MOTION

    Directory of Open Access Journals (Sweden)

    Ravil’ Kudermetov

    2018-02-01

    Full Text Available Nowadays multi-core processors are installed in almost every modern workstation, but the effective utilization of these computational resources is still a topical question. In this paper a four-point block one-step integration method is considered, a parallel algorithm for this method is proposed, and a Java implementation of this algorithm is discussed. The effectiveness of the proposed algorithm is demonstrated by simulating spacecraft attitude motion. The results of this work can be used for practical simulation of dynamic systems that are described by ordinary differential equations. The results are also applicable to the development and debugging of computer programs that integrate the dynamic and kinematic equations of the angular motion of a rigid body.
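
    For context, the kind of kinematic equation integrated in such a simulation can be illustrated with an ordinary RK4 step propagating an attitude quaternion under a constant body angular rate. This is a generic sketch, not the four-point block one-step method or the Java implementation discussed in the record; the rates, step size and duration are assumptions.

      # Generic RK4 propagation of an attitude quaternion q (scalar-first) under
      # a constant body angular rate; illustrative only, not the block method.
      import numpy as np

      def qdot(q, w):
          wx, wy, wz = w
          omega = np.array([[0.0, -wx, -wy, -wz],
                            [wx,  0.0,  wz, -wy],
                            [wy, -wz,  0.0,  wx],
                            [wz,  wy, -wx,  0.0]])
          return 0.5 * omega @ q

      def rk4_step(q, w, dt):
          k1 = qdot(q, w)
          k2 = qdot(q + 0.5 * dt * k1, w)
          k3 = qdot(q + 0.5 * dt * k2, w)
          k4 = qdot(q + dt * k3, w)
          q = q + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
          return q / np.linalg.norm(q)          # re-normalize to unit length

      q = np.array([1.0, 0.0, 0.0, 0.0])        # initial attitude: identity
      w = np.array([0.01, 0.02, -0.005])        # rad/s, assumed constant here
      for _ in range(1000):
          q = rk4_step(q, w, dt=0.1)
      print("attitude quaternion after 100 s:", np.round(q, 4))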

  9. Simulation model of a single-server order picking workstation using aggregate process times

    NARCIS (Netherlands)

    Andriansyah, R.; Etman, L.F.P.; Rooda, J.E.; Biles, W.E.; Saltelli, A.; Dini, C.

    2009-01-01

    In this paper we propose a simulation modeling approach based on aggregate process times for the performance analysis of order picking workstations in automated warehouses with first-in-first-out processing of orders. The aggregate process time distribution is calculated from tote arrival and

  10. A high performance image processing platform based on CPU-GPU heterogeneous cluster with parallel image reconstroctions for micro-CT

    International Nuclear Information System (INIS)

    Ding Yu; Qi Yujin; Zhang Xuezhu; Zhao Cuilan

    2011-01-01

    In this paper, we report the development of a high-performance image processing platform based on a CPU-GPU heterogeneous cluster. Currently, it consists of Dell Precision T7500 and HP XW8600 workstations with a parallel programming and runtime environment, using the Message-Passing Interface (MPI) and CUDA (Compute Unified Device Architecture). We succeeded in developing parallel image processing techniques for 3D image reconstruction in X-ray micro-CT imaging. The results show that a GPU provides a computing efficiency about 194 times higher than a single CPU, and that the CPU-GPU cluster provides a computing efficiency about 46 times higher than the CPU cluster. These meet the requirements of rapid 3D image reconstruction and real-time image display. In conclusion, the use of a CPU-GPU heterogeneous cluster is an effective way to build a high-performance image processing platform. (authors)

  11. Parallelized Bayesian inversion for three-dimensional dental X-ray imaging.

    Science.gov (United States)

    Kolehmainen, Ville; Vanne, Antti; Siltanen, Samuli; Järvenpää, Seppo; Kaipio, Jari P; Lassas, Matti; Kalke, Martti

    2006-02-01

    Diagnostic and operational tasks based on dental radiology often require three-dimensional (3-D) information that is not available in a single X-ray projection image. Comprehensive 3-D information about tissues can be obtained by computerized tomography (CT) imaging. However, in dental imaging a conventional CT scan may not be available or practical because of high radiation dose, low resolution, or the cost of the CT scanner equipment. In this paper, we consider a novel type of 3-D imaging modality for dental radiology. We consider situations in which projection images of the teeth are taken from a few sparsely distributed projection directions using the dentist's regular (digital) X-ray equipment and the 3-D X-ray attenuation function is reconstructed. A complication in these experiments is that the reconstruction of the 3-D structure based on a few projection images becomes an ill-posed inverse problem. Bayesian inversion is a well-suited framework for reconstruction from such incomplete data. In Bayesian inversion, the ill-posed reconstruction problem is formulated in a well-posed probabilistic form in which a priori information is used to compensate for the incomplete information of the projection data. In this paper we propose a Bayesian method for 3-D reconstruction in dental radiology. The method is partially based on Kolehmainen et al. 2003. The prior model for dental structures consists of a weighted l1 and total variation (TV) prior together with a positivity prior. The inverse problem is stated as finding the maximum a posteriori (MAP) estimate. To make the 3-D reconstruction computationally feasible, a parallelized version of an optimization algorithm is implemented for a Beowulf cluster computer. The method is tested with projection data from dental specimens and patient data. Tomosynthetic reconstructions are given as a reference for the proposed method.
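
    A heavily simplified, two-dimensional sketch of the MAP idea is given below: it minimizes a least-squares data misfit plus a smoothed total-variation penalty under a positivity constraint by projected gradient descent. The toy forward operator, weights and step size are invented for illustration, the weighted l1 term is omitted, and nothing here reproduces the paper's parallelized solver.

      # Simplified MAP sketch: least-squares data term + smoothed TV prior + positivity,
      # solved by projected gradient descent on a dense toy operator (not the paper's solver).
      import numpy as np

      rng = np.random.default_rng(0)
      ny = nx = 32
      A = rng.standard_normal((200, ny * nx)) / 100.0                 # toy forward (projection) operator
      x_true = np.zeros((ny, nx)); x_true[10:20, 12:22] = 1.0
      m = A @ x_true.ravel() + 0.01 * rng.standard_normal(200)        # noisy "projection data"

      def tv_grad(x, eps=1e-6):
          # Gradient of a smoothed anisotropic total-variation penalty.
          g = np.zeros_like(x)
          dx = np.diff(x, axis=1); wx = dx / np.sqrt(dx**2 + eps)
          dy = np.diff(x, axis=0); wy = dy / np.sqrt(dy**2 + eps)
          g[:, :-1] -= wx; g[:, 1:] += wx
          g[:-1, :] -= wy; g[1:, :] += wy
          return g

      x, alpha, step = np.zeros((ny, nx)), 0.05, 1e-3
      for _ in range(500):
          grad_data = (2.0 * A.T @ (A @ x.ravel() - m)).reshape(ny, nx)
          x = np.maximum(x - step * (grad_data + alpha * tv_grad(x)), 0.0)   # positivity projection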

  12. The development of a Flight Test Engineer's Workstation for the Automated Flight Test Management System

    Science.gov (United States)

    Tartt, David M.; Hewett, Marle D.; Duke, Eugene L.; Cooper, James A.; Brumbaugh, Randal W.

    1989-01-01

    The Automated Flight Test Management System (ATMS) is being developed as part of the NASA Aircraft Automation Program. This program focuses on the application of interdisciplinary state-of-the-art technology in artificial intelligence, control theory, and systems methodology to problems of operating and flight testing high-performance aircraft. The development of a Flight Test Engineer's Workstation (FTEWS) is presented, with a detailed description of the system, technical details, and future planned developments. The goal of the FTEWS is to provide flight test engineers and project officers with an automated computer environment for planning, scheduling, and performing flight test programs. The FTEWS system is an outgrowth of the development of ATMS and is an implementation of a component of ATMS on SUN workstations.

  13. PFLOTRAN User Manual: A Massively Parallel Reactive Flow and Transport Model for Describing Surface and Subsurface Processes

    Energy Technology Data Exchange (ETDEWEB)

    Lichtner, Peter C. [OFM Research, Redmond, WA (United States); Hammond, Glenn E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lu, Chuan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Karra, Satish [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bisht, Gautam [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Andre, Benjamin [National Center for Atmospheric Research, Boulder, CO (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mills, Richard [Intel Corporation, Portland, OR (United States); Univ. of Tennessee, Knoxville, TN (United States); Kumar, Jitendra [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-20

    PFLOTRAN solves a system of generally nonlinear partial differential equations describing multi-phase, multicomponent and multiscale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Parallelization is achieved through domain decomposition using the PETSc (Portable Extensible Toolkit for Scientific Computation) libraries for the parallelization framework (Balay et al., 1997). PFLOTRAN has been developed from the ground up for parallel scalability and has been run on up to 2^18 processor cores with problem sizes up to 2 billion degrees of freedom. Written in object oriented Fortran 90, the code requires the latest compilers compatible with Fortran 2003. At the time of this writing this requires gcc 4.7.x, Intel 12.1.x and PGC compilers. As a requirement of running problems with a large number of degrees of freedom, PFLOTRAN allows reading input data that is too large to fit into memory allotted to a single processor core. The current limitation to the problem size PFLOTRAN can handle is the limitation of the HDF5 file format used for parallel IO to 32 bit integers. Noting that 2^32 = 4,294,967,296, this gives an estimate of the maximum problem size that can be currently run with PFLOTRAN. Hopefully this limitation will be remedied in the near future.

  14. 76 FR 21775 - Notice of Issuance of Final Determination Concerning Certain Office Workstations

    Science.gov (United States)

    2011-04-18

    ... Ethospace office workstations both feature ``frame-and-tile'' construction, which consists of a sturdy steel... respect to the frames, Herman Miller staff roll form rolled steel (coils) from a domestic source into....-sourced tiles, frames, connectors, finished ends, work surfaces, flipper door unit, shelf, task lights...

  15. Design of the HANARO operator workstation having the enhanced usability and data handling capability

    International Nuclear Information System (INIS)

    Kim, M. J.; Kim, Y. K.; Jung, H. S.; Choi, Y. S.; Woo, J. S.; Jeon, B. J.

    2003-01-01

    As a first step in the upgrade plan for the HANARO reactor control computer system, we furnished an IBM workstation-class PC to replace the existing operator workstation, a dedicated HMI console. We also designed a new human-machine interface using commercial HMI development software running on MS-Windows. We expect no further difficulties in obtaining replacement parts and maintaining the hardware. In this paper, we introduce the features of the new interface, which adopts the virtues of the existing design and enables safe and efficient reactor operation by correcting its shortcomings. Also described are the functionality of the historian server, which provides simpler storage, retrieval and search operations, and the design of the trend display screen, which replaces the existing chart recorder by using the dual-monitor feature of the PC graphics card.

  16. A computer graphics pilot project - Spacecraft mission support with an interactive graphics workstation

    Science.gov (United States)

    Hagedorn, John; Ehrner, Marie-Jacqueline; Reese, Jodi; Chang, Kan; Tseng, Irene

    1986-01-01

    The NASA Computer Graphics Pilot Project was undertaken to enhance the quality control, productivity and efficiency of mission support operations at the Goddard Operations Support Computing Facility. The Project evolved into a set of demonstration programs for graphics-intensive simulated control room operations, particularly in connection with the complex space missions that began in the 1980s. Complex missions mean more data. Graphic displays are a means to reduce the probability of operator error. Workstations were selected with 1024 x 768 pixel color displays controlled by a custom VLSI chip coupled to an MC68010 chip running UNIX within a shell that permits operations through the medium of mouse-accessed pulldown window menus. The distributed workstations run off a host NAS 8040 computer. Applications of the system for tracking spacecraft orbits and monitoring Shuttle payload handling illustrate the system capabilities, noting the built-in capabilities of shifting the point of view and rotating and zooming in on three-dimensional views of spacecraft.

  17. Using RGB-D sensors and evolutionary algorithms for the optimization of workstation layouts.

    Science.gov (United States)

    Diego-Mas, Jose Antonio; Poveda-Bautista, Rocio; Garzon-Leal, Diana

    2017-11-01

    RGB-D sensors can collect postural data in an automatized way. However, the application of these devices in real work environments requires overcoming problems such as lack of accuracy or body parts' occlusion. This work presents the use of RGB-D sensors and genetic algorithms for the optimization of workstation layouts. RGB-D sensors are used to capture workers' movements when they reach objects on workbenches. Collected data are then used to optimize workstation layout by means of genetic algorithms considering multiple ergonomic criteria. Results show that typical drawbacks of using RGB-D sensors for body tracking are not a problem for this application, and that the combination with intelligent algorithms can automatize the layout design process. The procedure described can be used to automatically suggest new layouts when workers or processes of production change, to adapt layouts to specific workers based on their ways to do the tasks, or to obtain layouts simultaneously optimized for several production processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
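
    To make the optimization loop concrete, here is a toy evolutionary-algorithm sketch in Python that assigns objects to workbench slots so as to minimize a frequency-weighted reach distance. The permutation encoding, the swap-mutation-plus-elitism scheme and the single-distance cost are invented stand-ins for the multi-criteria ergonomic fitness and the RGB-D reach data used in the paper.

      # Toy evolutionary layout optimizer: place objects in workbench slots to minimize
      # frequency-weighted reach distance (illustrative stand-in for the paper's GA).
      import numpy as np

      rng = np.random.default_rng(1)
      n = 10
      freq = rng.uniform(1, 10, size=n)                     # how often each object is reached for
      slot_dist = np.sort(rng.uniform(0.2, 0.9, size=n))    # distance of each slot from the worker (m)

      def cost(perm):                                       # perm[slot] = object placed in that slot
          return float(np.sum(freq[perm] * slot_dist))

      pop = [rng.permutation(n) for _ in range(40)]
      for generation in range(200):
          pop.sort(key=cost)
          parents = pop[:10]                                # elitist selection
          children = []
          for p in parents:
              for _ in range(3):
                  c = p.copy()
                  i, j = rng.integers(n, size=2)
                  c[i], c[j] = c[j], c[i]                   # swap mutation
                  children.append(c)
          pop = parents + children
      best = min(pop, key=cost)
      print(cost(best), best)

    The real fitness would combine several ergonomic criteria (reach frequency, posture scores, zones of comfort) rather than a single distance term, but the generate-evaluate-select loop has the same shape.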

  18. Computer-aided diagnosis workstation and telemedicine network system for chest diagnosis based on multislice CT images

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kakinuma, Ryutaru; Moriyama, Noriyuki

    2009-02-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. Moreover, there is a shortage of doctors who can diagnose medical images in Japan. To overcome these problems, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using the helical CT scanner for lung cancer mass screening. Functions to observe suspicious shadows in detail are provided in a computer-aided diagnosis workstation together with these screening algorithms. We have also developed a telemedicine network using a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system and a biometric face authentication system. Biometric face authentication used at the telemedicine site makes "Encryption of file" and "Success in login" effective. As a result, patients' private information is protected. We can share the screen of the Web medical image conference system from two or more web conference terminals at the same time. Opinions can be exchanged by using a camera and a microphone connected to the workstation. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, using the computer-aided diagnosis workstation, and our telemedicine network system can increase diagnostic speed, diagnostic accuracy and

  19. The effectiveness of domain balancing strategies on workstation clusters demonstrated by viscous flow problems

    NARCIS (Netherlands)

    Streng, Martin; Streng, M.; ten Cate, Eric; ten Cate, Eric (H.H.); Geurts, Bernardus J.; Kuerten, Johannes G.M.

    1998-01-01

    We consider several aspects of efficient numerical simulation of viscous compressible flow on both homogeneous and heterogeneous workstation-clusters. We consider dedicated systems, as well as clusters operating in a multi-user environment. For dedicated homogeneous clusters, we show that with

  20. Micro machining workstation for a diode pumped Nd:YAG high brightness laser system

    NARCIS (Netherlands)

    Kleijhorst, R.A.; Offerhaus, Herman L.; Bant, P.

    1998-01-01

    A Nd:YAG micro-machining workstation that allows cutting on a scale of a few microns has been developed and operated. The system incorporates a telescope viewing system that allows control during the work and a software interface to translate AutoCad files. Some examples of the performance are

  1. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece

    2012-11-28

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).

  2. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece; Rashid, Mamoon; Pain, Arnab

    2012-01-01

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).

  3. Validation of COG10 and ENDFB6R7 on the Auk Workstation for General Application to Highly Enriched Uranium Systems

    Energy Technology Data Exchange (ETDEWEB)

    Percher, Catherine G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-08-08

    The COG 10 code package [1] on the Auk workstation is now validated with the ENDFB6R7 neutron cross-section library for general application to highly enriched uranium (HEU) systems, by comparison of the calculated k-effective to the expected k-effective of several relevant experimental benchmarks. This validation is supplemental to the installation and verification of COG 10 on the Auk workstation [2].

  4. GROMACS 4.5: A high-throughput and highly parallel open source molecular simulation toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Pronk, Sander [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Pall, Szilard [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Schulz, Roland [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Larsson, Per [Univ. of Virginia, Charlottesville, VA (United States); Bjelkmar, Par [Science for Life Lab., Stockholm (Sweden); Stockholm Univ., Stockholm (Sweden); Apostolov, Rossen [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Shirts, Michael R. [Univ. of Virginia, Charlottesville, VA (United States); Smith, Jeremy C. [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kasson, Peter M. [Univ. of Virginia, Charlottesville, VA (United States); van der Spoel, David [Science for Life Lab., Stockholm (Sweden); Uppsala Univ., Uppsala (Sweden); Hess, Berk [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Lindahl, Erik [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Stockholm Univ., Stockholm (Sweden)

    2013-02-13

    In this study, molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on massive scale in clusters, web servers, distributed computing or cloud resources. As a result, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built-in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations.

  5. Effects of dynamic workstation Oxidesk on acceptance, physical activity, mental fitness and work performance

    NARCIS (Netherlands)

    Groenesteijn, L.; Commissaris, D.A.C.M.; Berg-Zwetsloot, M. van den; Hiemstra-Van Mastrigt, S.

    2016-01-01

    BACKGROUND: Working in an office environment is characterised by physical inactivity and sedentary behaviour. This behaviour contributes to several health risks in the long run. Dynamic workstations which allow people to combine desk activities with physical activity, may contribute to prevention of

  6. Effects of dynamic workstation Oxidesk on acceptance, physical activity, mental fitness and work performance.

    Science.gov (United States)

    Groenesteijn, L; Commissaris, D A C M; Van den Berg-Zwetsloot, M; Hiemstra-Van Mastrigt, S

    2016-07-19

    Working in an office environment is characterised by physical inactivity and sedentary behaviour. This behaviour contributes to several health risks in the long run. Dynamic workstations, which allow people to combine desk activities with physical activity, may contribute to the prevention of these health risks. A dynamic workstation, called Oxidesk, was evaluated to determine its possible contribution to healthy behaviour and the impact on perceived work performance. A field test was conducted with 22 office workers employed at a health insurance company in the Netherlands. The Oxidesk was well accepted, positively perceived for fitness, and the participants maintained their work performance. Physical activity was lower than the activity level required in the Dutch guidelines for sufficient physical activity. Although there was only a slight increase in physical activity, the Oxidesk may be helpful in reducing the health risks involved and seems suitable for introduction to office environments.

  7. The advanced software development workstation project

    Science.gov (United States)

    Fridge, Ernest M., III; Pitman, Charles L.

    1991-01-01

    The Advanced Software Development Workstation (ASDW) task is researching and developing the technologies required to support Computer Aided Software Engineering (CASE) with the emphasis on those advanced methods, tools, and processes that will be of benefit to support all NASA programs. Immediate goals are to provide research and prototype tools that will increase productivity, in the near term, in projects such as the Software Support Environment (SSE), the Space Station Control Center (SSCC), and the Flight Analysis and Design System (FADS) which will be used to support the Space Shuttle and Space Station Freedom. Goals also include providing technology for development, evolution, maintenance, and operations. The technologies under research and development in the ASDW project are targeted to provide productivity enhancements during the software life cycle phase of enterprise and information system modeling, requirements generation and analysis, system design and coding, and system use and maintenance. On-line user's guides will assist users in operating the developed information system with knowledge base expert assistance.

  8. Guided Learning at Workstations about Drug Prevention with Low Achievers in Science Education

    Science.gov (United States)

    Thomas, Heyne; Bogner, Franz X.

    2012-01-01

    Our study focussed on the cognitive achievement potential of low achieving eighth graders, dealing with drug prevention (cannabis). The learning process was guided by a teacher, leading this target group towards a modified learning at workstations which is seen as an appropriate approach for low achievers. We compared this specific open teaching…

  9. Flow time prediction for a single-server order picking workstation using aggregate process times

    NARCIS (Netherlands)

    Andriansyah, R.; Etman, L.F.P.; Rooda, J.E.

    2010-01-01

    In this paper we propose a simulation modeling approach based on aggregate process times for the performance analysis of order picking workstations in automated warehouses. The aggregate process time distribution is calculated from tote arrival and departure times. We refer to the aggregate process

  10. Parallelization of ultrasonic field simulations for non destructive testing

    International Nuclear Information System (INIS)

    Lambert, Jason

    2015-01-01

    The Non-Destructive Testing field increasingly uses simulation. It is used at every step of the inspection process of an industrial part, from speeding up inspection development to helping experts understand results. During this thesis, a fast simulation tool dedicated to the computation of the ultrasonic field radiated by a phased-array probe in an isotropic specimen has been developed; its performance enables interactive usage. To benefit from commonly available parallel architectures, a regular model (aimed at removing divergent branching) derived from the generic CIVA model has been developed. First, a reference implementation was developed to validate this model against CIVA results and to analyse its performance behaviour before optimization. The resulting code was then optimized for three kinds of parallel architectures commonly available in workstations: general-purpose processors (GPP), many-core co-processors (Intel MIC) and graphics processing units (nVidia GPU). On the GPP and the MIC, the algorithm was reorganized and implemented to benefit from both parallelism levels, multithreading and vector instructions. On the GPU, the multiple steps of the field computation were divided into multiple successive CUDA kernels. Moreover, libraries dedicated to each architecture were used to speed up the Fast Fourier Transforms: Intel MKL on the GPP and MIC, and nVidia cuFFT on the GPU. The performance and hardware suitability of the produced codes were thoroughly studied for each architecture. On multiple realistic inspection configurations, interactive performance was reached. Perspectives for addressing more complex configurations are drawn. Finally, the integration and industrialization of this code in the commercial NDT platform CIVA are discussed. (author)

  11. A real-time monitoring/emergency response workstation using a 3-D numerical model initialized with SODAR

    International Nuclear Information System (INIS)

    Lawver, B.S.; Sullivan, T.J.; Baskett, R.L.

    1993-01-01

    Many workstation based emergency response dispersion modeling systems provide simple Gaussian models driven by single meteorological tower inputs to estimate the downwind consequences from accidental spills or stack releases. Complex meteorological or terrain settings demand more sophisticated resolution of the three-dimensional structure of the atmosphere to reliably calculate plume dispersion. Mountain valleys and sea breeze flows are two common examples of such settings. To address these complexities, we have implemented the three-dimensional-diagnostic MATHEW mass-adjusted wind field and ADPIC particle-in-cell dispersion models on a workstation for use in real-time emergency response modeling. Both MATHEW and ADPIC have shown their utility in a variety of complex settings over the last 15 years within the Department of Energy's Atmospheric Release Advisory Capability project

  12. Evaluation of total workstation CT interpretation quality: a single-screen pilot study

    Science.gov (United States)

    Beard, David V.; Perry, John R.; Muller, Keith E.; Misra, Ram B.; Brown, P.; Hemminger, Bradley M.; Johnston, Richard E.; Mauro, J. Matthew; Jaques, P. F.; Schiebler, M.

    1991-07-01

    An interpretation report, generated with an electronic viewbox, is affected by two factors: image quality, which encompasses what can be seen on the display, and computer-human interaction (CHI), which accounts for the cognitive load effect of locating, moving, and manipulating images with the workstation controls. While a number of subject experiments have considered image quality, only recently has the effect of CHI on total interpretation quality been measured. This paper presents the results of a pilot study conducted to evaluate the total interpretation quality of the FilmPlane2.2 radiology workstation for patient folders containing single forty-slice CT studies. First, radiologists interpreted cases and dictated reports using FilmPlane2.2. Requisition forms were provided. Film interpretation was provided by the original clinical report and interpretation forms generated from a previous experiment. Second, an evaluator developed a list of findings for each case based on those listed in all the reports for each case and then evaluated each report for its response on each finding. Third, the reports were compared to determine how well they agreed with one another. Interpretation speed and observation data were also gathered.

  13. Integrating UNIX workstation into existing online data acquisition systems for Fermilab experiments

    International Nuclear Information System (INIS)

    Oleynik, G.

    1991-03-01

    With the availability of cost-effective computing power from multiple vendors of UNIX workstations, experiments at Fermilab are adding such computers to their VMS-based online data acquisition systems. In anticipation of this trend, we have extended the software products available in our widely used VAXONLINE and PANDA data acquisition software systems to provide support for integrating these workstations into existing distributed online systems. The software packages we are providing pave the way for the smooth migration of applications from the current Data Acquisition Host and Monitoring computers running the VMS operating system to UNIX-based computers of various flavors. We report on software for Online Event Distribution from VAXONLINE and PANDA, integration of Message Reporting Facilities, and a framework under UNIX for experiments to monitor and view the raw event data produced at any level in their DA system. We have developed software that allows host UNIX computers to communicate with intelligent front-end embedded read-out controllers and processor boards running the pSOS operating system. Both RS-232 and Ethernet control paths are supported. This enables calibration and hardware-monitoring applications to be migrated to these platforms. 6 refs., 5 figs

  14. Children and computer use in the home: workstations, behaviors and parental attitudes.

    Science.gov (United States)

    Kimmerly, Lisa; Odell, Dan

    2009-01-01

    This study examines the home computer use of 26 children (aged 6-18) in ten upper middle class families using direct observation, typing tests, questionnaires and semi-structured interviews. The goals of the study were to gather information on how children use computers in the home and to understand how both parents and children perceive this computer use. Large variations were seen in computing skills, behaviors, and opinions, as well as equipment and workstation setups. Typing speed averaged over 40 words per minute for children over 13 years old, and less than 10 words per minute for children younger than 10. The results show that for this sample, Repetitive Stress Injury (RSI) concerns ranked very low among parents, whereas security and privacy concerns ranked much higher. Meanwhile, children's behaviors and workstations were observed to place children in awkward working postures. Photos showing common postures are presented. The greatest opportunity to improve children's work postures appears to be in providing properly-sized work surfaces and chairs, as well as education. Possible explanations for the difference between parental perception of computing risks and the physical reality of children's observed ergonomics are discussed and ideas for further research are proposed.

  15. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru

    2008-03-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using the helical CT scanner for lung cancer mass screening. Functions to observe suspicious shadows in detail are provided in a computer-aided diagnosis workstation together with these screening algorithms. We have also developed a telemedicine network using a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system and a biometric face authentication system. Biometric face authentication used at the telemedicine site makes "Encryption of file" and "Success in login" effective. As a result, patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, using the computer-aided diagnosis workstation, and our telemedicine network system can increase diagnostic speed and diagnostic accuracy and improve the security of medical information.

  16. Portable, parallel, reusable Krylov space codes

    Energy Technology Data Exchange (ETDEWEB)

    Smith, B.; Gropp, W. [Argonne National Lab., IL (United States)

    1994-12-31

    Krylov space accelerators are an important component of many algorithms for the iterative solution of linear systems. Each Krylov space method has its own particular advantages and disadvantages, therefore it is desirable to have a variety of them available, all with an identical, easy-to-use interface. A common complaint application programmers have with available software libraries for the iterative solution of linear systems is that they require the programmer to use the data structures provided by the library; the library is not able to work with the data structures of the application code. Hence, application programmers find themselves constantly recoding the Krylov space algorithms. The Krylov space package (KSP) is a data-structure-neutral implementation of a variety of Krylov space methods including preconditioned conjugate gradient, GMRES, BiCG-Stab, transpose-free QMR and CGS. Unlike all other software libraries for linear systems that the authors are aware of, KSP will work with any application code's data structures, in Fortran or C. Due to its data-structure-neutral design, KSP runs unchanged on both sequential and parallel machines. KSP has been tested on workstations, the Intel i860 and Paragon, Thinking Machines CM-5 and the IBM SP1.
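
    The KSP package later evolved into the KSP component of PETSc. Purely to illustrate the kind of uniform solver interface being described, the sketch below assembles a small sparse system with petsc4py (PETSc's Python bindings, assumed to be installed), solves it with preconditioned GMRES, and then switches the Krylov method with a single call; it glosses over parallel matrix distribution and is not code from the cited work.

      # Minimal petsc4py sketch of a uniform Krylov-solver interface (assumes petsc4py).
      from petsc4py import PETSc

      n = 100
      A = PETSc.Mat().createAIJ([n, n], nnz=3)            # simple 1D Laplacian, 3 nonzeros per row
      for i in range(n):
          A.setValue(i, i, 2.0)
          if i > 0:
              A.setValue(i, i - 1, -1.0)
          if i < n - 1:
              A.setValue(i, i + 1, -1.0)
      A.assemble()

      b = A.createVecLeft(); b.set(1.0)                   # right-hand side
      x = A.createVecRight()                              # solution vector

      ksp = PETSc.KSP().create()
      ksp.setOperators(A)
      ksp.setType('gmres')                                # pick a Krylov method...
      ksp.getPC().setType('jacobi')                       # ...and a preconditioner
      ksp.solve(b, x)

      ksp.setType('cg')                                   # same interface, different method
      ksp.solve(b, x)

    The same two calls, setType and solve, would drive BiCGStab or transpose-free QMR as well, which is the interface uniformity the abstract emphasizes.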

  17. Progress of data processing system in JT-60 utilizing the UNIX-based workstations

    International Nuclear Information System (INIS)

    Sakata, Shinya; Kiyono, Kimihiro; Oshima, Takayuki; Sato, Minoru; Ozeki, Takahisa

    2007-07-01

    The JT-60 data processing system (DPS) has a three-level hierarchy. At the top level is the JT-60 inter-shot processor (MSP-ISP), a mainframe computer, which provides communication with the JT-60 supervisory control system and supervises the internal communication inside the DPS. The middle level of the hierarchy has minicomputers, and the bottom level has individual diagnostic subsystems, which consist of CAMAC and VME modules. To meet the demand for advanced diagnostics, the DPS has progressed in stages from a three-level hierarchy system, which depended on the processing power of the MSP-ISP, to a two-level hierarchy system, a decentralized data processing system (New-DPS), by utilizing UNIX-based workstations and network technology. This replacement has been accomplished, and the New-DPS started operation in October 2005. In this report, we describe the development and improvement of the New-DPS, whose functions were decentralized from the MSP-ISP to the UNIX-based workstations. (author)

  18. Internationalization of healthcare applications: a generic approach for PACS workstations.

    Science.gov (United States)

    Hussein, R; Engelmann, U; Schroeter, A; Meinzer, H P

    2004-01-01

    Along with the revolution of information technology and the increasing use of computers world-wide, software providers recognize the emerging need for internationalized, or global, software applications. The importance of internationalization comes from its benefits such as addressing a broader audience, making the software applications more accessible, easier to use, more flexible to support and providing users with more consistent information. In addition, some governmental agencies, e.g., in Spain, accept only fully localized software. Although the healthcare communication standards, namely, Digital Imaging and Communication in Medicine (DICOM) and Health Level Seven (HL7) support wide areas of internationalization, most of the implementers are still protective about supporting the complex languages. This paper describes a generic internationalization approach for Picture Archiving and Communication System (PACS) workstations. The Unicode standard is used to internationalize the application user interface. An encoding converter was developed to encode and decode the data between the rendering module (in Unicode encoding) and the DICOM data (in ISO 8859 encoding). An integration gateway was required to integrate the internationalized PACS components with the different PACS installations. To introduce a pragmatic example, the described approach was applied to the CHILI PACS workstation. The approach has enabled the application to handle the different internationalization aspects transparently, such as supporting complex languages, switching between different languages at runtime, and supporting multilingual clinical reports. In the healthcare enterprises, internationalized applications play an essential role in supporting a seamless flow of information between the heterogeneous multivendor information systems.
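
    At its core, the encoding-converter idea amounts to transcoding strings between the legacy ISO 8859 byte encodings used in the DICOM data and the Unicode representation used by the rendering module. A minimal illustration in Python follows; the character-set name and sample string are arbitrary, and real DICOM Specific Character Set handling is considerably more involved.

      # Minimal transcoding sketch: legacy ISO 8859 bytes <-> Unicode text (illustrative only).
      dicom_bytes = "Müller^Jörg".encode("iso-8859-1")     # bytes as they might sit in a DICOM field

      text = dicom_bytes.decode("iso-8859-1")              # legacy bytes -> Unicode string
      utf8_bytes = text.encode("utf-8")                     # Unicode -> UTF-8 for the rendering module

      assert utf8_bytes.decode("utf-8") == "Müller^Jörg"

    Keeping the user interface and rendering path purely in Unicode, and converting only at the DICOM boundary, is what lets the application switch languages at runtime without touching the stored data.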

  19. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.

    1991-11-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization Plans for Word Processors, Personal Computers, Workstations, and Associated Software to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference that documents the plans of each organization for office automation, identifies appropriate planners and other contact people in those organizations, and encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan.

  20. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.; Rockwell, V.S.

    1992-08-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization plans for Word Processors, Personal Computers, Workstations, and Associated Software (ANL/TM, Revision 4) to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference document that (1) documents the plans of each organization for office automation, (2) identifies appropriate planners and other contact people in those organizations and (3) encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations (ANL/TM 458) and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan (ANL/TM 466).

  1. ParaHaplo 3.0: A program package for imputation and a haplotype-based whole-genome association study using hybrid parallel computing

    Directory of Open Access Journals (Sweden)

    Kamatani Naoyuki

    2011-05-01

    Full Text Available Abstract Background Missing-genotype imputation and haplotype reconstruction are valuable in genome-wide association studies (GWASs). By modeling the patterns of linkage disequilibrium in a reference panel, genotypes not directly measured in the study samples can be imputed and used for GWASs. Since millions of single nucleotide polymorphisms need to be imputed in a GWAS, faster methods for genotype imputation and haplotype reconstruction are required. Results We developed a program package for parallel computation of genotype imputation and haplotype reconstruction. Our program package, ParaHaplo 3.0, is intended for use in workstation clusters using the Intel Message Passing Interface. We compared the performance of ParaHaplo 3.0 on the Japanese in Tokyo, Japan and Han Chinese in Beijing, China samples of the HapMap dataset. The parallel version of ParaHaplo 3.0 can conduct genotype imputation 20 times faster than the non-parallel version of ParaHaplo. Conclusions ParaHaplo 3.0 is an invaluable tool for conducting haplotype-based GWASs. The need for faster genotype imputation and haplotype reconstruction using parallel computing will become increasingly important as the data sizes of such projects continue to increase. ParaHaplo executable binaries and program sources are available at http://en.sourceforge.jp/projects/parallelgwas/releases/.

  2. Sled Tests Using the Hybrid III Rail Safety ATD and Workstation Tables for Passenger Trains

    Science.gov (United States)

    2017-08-01

    The Hybrid III Rail Safety (H3-RS) anthropomorphic test device (ATD) is a crash test dummy developed in the United Kingdom to evaluate abdomen and lower thorax injuries that occur when passengers impact workstation tables during train accidents. The ...

  3. Portfolio: a prototype workstation for development and evaluation of tools for analysis and management of digital portal images

    International Nuclear Information System (INIS)

    Boxwala, Aziz A.; Chaney, Edward L.; Fritsch, Daniel S.; Friedman, Charles P.; Rosenman, Julian G.

    1998-01-01

    Purpose: The purpose of this investigation was to design and implement a prototype physician workstation, called PortFolio, as a platform for developing and evaluating, by means of controlled observer studies, user interfaces and interactive tools for analyzing and managing digital portal images. The first observer study was designed to measure physician acceptance of workstation technology, as an alternative to a view box, for inspection and analysis of portal images for detection of treatment setup errors. Methods and Materials: The observer study was conducted in a controlled experimental setting to evaluate physician acceptance of the prototype workstation technology exemplified by PortFolio. PortFolio incorporates a windows user interface, a compact kit of carefully selected image analysis tools, and an object-oriented data base infrastructure. The kit evaluated in the observer study included tools for contrast enhancement, registration, and multimodal image visualization. Acceptance was measured in the context of performing portal image analysis in a structured protocol designed to simulate clinical practice. The acceptability and usage patterns were measured from semistructured questionnaires and logs of user interactions. Results: Radiation oncologists, the subjects for this study, perceived the tools in PortFolio to be acceptable clinical aids. Concerns were expressed regarding user efficiency, particularly with respect to the image registration tools. Conclusions: The results of our observer study indicate that workstation technology is acceptable to radiation oncologists as an alternative to a view box for clinical detection of setup errors from digital portal images. Improvements in implementation, including more tools and a greater degree of automation in the image analysis tasks, are needed to make PortFolio more clinically practical

  4. Parallel FFT using Eden Skeletons

    DEFF Research Database (Denmark)

    Berthold, Jost; Dieterle, Mischa; Lobachev, Oleg

    2009-01-01

    The paper investigates and compares skeleton-based Eden implementations of different FFT-algorithms on workstation clusters with distributed memory. Our experiments show that the basic divide-and-conquer versions suffer from an inherent input distribution and result collection problem. Advanced...

  5. Reactive wavepacket dynamics for four atom systems on scalable parallel computers

    International Nuclear Information System (INIS)

    Goldfield, E.M.

    1994-01-01

    While time-dependent quantum mechanics has been successfully applied to many three-atom systems, it was nevertheless a computational challenge to use wavepacket methods to study four-atom systems, systems with several heavy atoms, and systems with deep potential wells. S.K. Gray and the author are studying the reaction of OH + CO ↔ (HOCO) ↔ H + CO2, a difficult reaction by all the above criteria. Memory considerations alone made it impossible to use a single IBM RS/6000 workstation to study a four degree-of-freedom model of this system. They have developed a scalable parallel wavepacket code for the IBM SP1 and have run it on the SP1 at Argonne and at the Cornell Theory Center. The wavepacket, defined on a four dimensional grid, is spread out among the processors. Two-dimensional FFT's are used to compute the kinetic energy operator acting on the wavepacket. Accomplishing this task, which is the computationally intensive part of the calculation, requires a global transpose of the data. This transpose is the only serious communication between processors. Since the problem is essentially data-parallel, communication is regular and load-balancing is excellent. But as the problem is moderately fine-grained and messages are long, the ratio of communication to computation is somewhat high and they typically get about 55% of ideal speed-up
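
    To illustrate the communication pattern just described, the sketch below performs the global transpose needed by a distributed 2D FFT using MPI Alltoall via mpi4py: each rank FFTs its locally contiguous rows, exchanges column blocks with every other rank, and reassembles the transposed slab before transforming the remaining axis. The grid size and the row-slab decomposition are illustrative assumptions, not the actual four-dimensional wavepacket layout.

      # Sketch of a distributed 2D FFT step: local 1D FFTs plus a global transpose (assumes mpi4py).
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      rows = 4                          # rows owned per rank; global grid is (size*rows) x (size*rows)
      n = size * rows
      local = np.random.default_rng(rank).standard_normal((rows, n)) + 0j

      local = np.fft.fft(local, axis=1)                     # FFT along the locally contiguous axis

      # Global transpose: block (r, c) of the matrix moves from rank r to rank c.
      send = np.ascontiguousarray(local.reshape(rows, size, rows).transpose(1, 0, 2))
      recv = np.empty_like(send)
      comm.Alltoall(send, recv)
      local_t = np.ascontiguousarray(recv.transpose(2, 0, 1)).reshape(rows, n)

      local_t = np.fft.fft(local_t, axis=1)                 # FFT along the remaining axis

    The Alltoall is the single collective corresponding to the "global transpose" in the abstract; everything else is purely local, which is why the scheme is data-parallel with regular communication.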

  6. Design of a Workstation for People with Upper-Limb Disabilities Using a Brain Computer Interface

    Directory of Open Access Journals (Sweden)

    John E. Muñoz-Cardona

    2013-11-01

    Full Text Available This paper presents the design of a workstation for the workplace inclusion of people with upper-limb disabilities. The system uses a novel brain-computer interface to mediate the user-computer interaction. Our objective is to elucidate the functional, technological, ergonomic and procedural aspects of operating the station, with the aim of removing the barriers that prevent people with disabilities from accessing ICT tools and work. We found that ease of access, ergonomics, adaptability and portability of the workstation are the most important design criteria. Prototype implementations in a workplace environment yield an estimated internal rate of return (TIR) of 43%. Finally, we list a typology of services that could be most appropriate for the process of labor inclusion: telemarketing, telesales, telephone surveys, order taking, social assistance in disasters, general information and inquiries, reservations at tourist sites, technical support, emergency, online support and after-sales services.

  7. EL CLUSTER BEOWULF DEL CENTRO NACIONAL DE BIOINFORMÁTICA: DISEÑO, MONTAJE Y EVALUACIÓN PRELIMINAR

    Directory of Open Access Journals (Sweden)

    Juan Pedro Febles Rodríguez

    2003-12-01

    Full Text Available The use of computer clusters in different research fields that require massive calculations has increased in recent years, ever since Becker and Sterling built the first Beowulf cluster in 1994. This article describes the design (starting from the selection of components), assembly and evaluation of the cluster. With respect to the first two aspects, the explanation is limited to a description of the cluster's hardware and software architecture. To evaluate the cluster's performance, several benchmark programs are used and the results are compared with those of another, similar cluster. Finally, the possible causes of the observed differences are discussed and it is proposed

  8. Fast volume reconstruction in positron emission tomography: Implementation of four algorithms on a high-performance scalable parallel platform

    International Nuclear Information System (INIS)

    Egger, M.L.; Scheurer, A.H.; Joseph, C.

    1996-01-01

    The issue of long reconstruction times in PET has been addressed from several points of view, resulting in an affordable dedicated system capable of handling routine 3D reconstruction in a few minutes per frame: on the hardware side using fast processors and a parallel architecture, and on the software side, using efficient implementations of computationally less intensive algorithms. Execution times obtained for the PRT-1 data set on a parallel system of five hybrid nodes, each combining an Alpha processor for computation and a transputer for communication, are the following (256 sinograms of 96 views by 128 radial samples): Ramp algorithm 56 s, Favor 81 s and reprojection algorithm of Kinahan and Rogers 187 s. The implementation of fast rebinning algorithms has shown our hardware platform to become communications-limited; they execute faster on a conventional single-processor Alpha workstation: single-slice rebinning 7 s, Fourier rebinning 22 s, 2D filtered backprojection 5 s. The scalability of the system has been demonstrated, and a saturation effect at network sizes above ten nodes has become visible; new T9000-based products lifting most of the constraints on network topology and link throughput are expected to result in improved parallel efficiency and scalability properties

  9. GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit.

    Science.gov (United States)

    Pronk, Sander; Páll, Szilárd; Schulz, Roland; Larsson, Per; Bjelkmar, Pär; Apostolov, Rossen; Shirts, Michael R; Smith, Jeremy C; Kasson, Peter M; van der Spoel, David; Hess, Berk; Lindahl, Erik

    2013-04-01

    Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on massive scale in clusters, web servers, distributed computing or cloud resources. Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built-in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. GROMACS is an open source and free software available from http://www.gromacs.org. Supplementary data are available at Bioinformatics online.

  10. NASA Tech Briefs, January 2006

    Science.gov (United States)

    2006-01-01

    Topics covered include: Semiautonomous Avionics-and-Sensors System for a UAV; Biomimetic/Optical Sensors for Detecting Bacterial Species; System Would Detect Foreign-Object Damage in Turbofan Engine; Detection of Water Hazards for Autonomous Robotic Vehicles; Fuel Cells Utilizing Oxygen From Air at Low Pressures; Hybrid Ion-Detector/Data-Acquisition System for a TOF-MS; Spontaneous-Desorption Ionizer for a TOF-MS; Equipment for On-Wafer Testing From 220 to 325 GHz; Computing Isentropic Flow Properties of Air/R-134a Mixtures; Java Mission Evaluation Workstation System; Using a Quadtree Algorithm To Assess Line of Sight; Software for Automated Generation of Cartesian Meshes; Optics Program Modified for Multithreaded Parallel Computing; Programs for Testing Processor-in-Memory Computing Systems; PVM Enhancement for Beowulf Multiple-Processor Nodes; Ion-Exclusion Chromatography for Analyzing Organics in Water; Selective Plasma Deposition of Fluorocarbon Films on SAMs; Water-Based Pressure-Sensitive Paints; System Finds Horizontal Location of Center of Gravity; Predicting Tail Buffet Loads of a Fighter Airplane; Water Containment Systems for Testing High-Speed Flywheels; Vapor-Compression Heat Pumps for Operation Aboard Spacecraft; Multistage Electrophoretic Separators; Recovering Residual Xenon Propellant for an Ion Propulsion System; Automated Solvent Seaming of Large Polyimide Membranes; Manufacturing Precise, Lightweight Paraboloidal Mirrors; Analysis of Membrane Lipids of Airborne Micro-Organisms; Noninvasive Diagnosis of Coronary Artery Disease Using 12-Lead High-Frequency Electrocardiograms; Dual-Laser-Pulse Ignition; Enhanced-Contrast Viewing of White-Hot Objects in Furnaces; Electrically Tunable Terahertz Quantum-Cascade Lasers; Few-Mode Whispering-Gallery-Mode Resonators; Conflict-Aware Scheduling Algorithm; and Real-Time Diagnosis of Faults Using a Bank of Kalman Filters.

  11. A versatile nondestructive evaluation imaging workstation

    Science.gov (United States)

    Chern, E. James; Butler, David W.

    1994-01-01

    Ultrasonic C-scan and eddy current imaging systems are pointwise evaluation systems that rely on a mechanical scanner to physically maneuver a probe relative to the specimen point by point in order to acquire data and generate images. Since ultrasonic C-scan and eddy current imaging systems are based on the same mechanical scanning mechanisms, the two systems can be combined using the same PC platform with a common mechanical manipulation subsystem and integrated data acquisition software. Based on this concept, we have developed an IBM PC-based combined ultrasonic C-scan and eddy current imaging system. The system is modularized and provides capacity for future hardware and software expansions. Advantages associated with the combined system are: (1) eliminated duplication of the computer and mechanical hardware, (2) unified data acquisition, processing and storage software, (3) reduced setup time for repetitious ultrasonic and eddy current scans, and (4) improved system efficiency. The concept can be adapted to many engineering systems by integrating related PC-based instruments into one multipurpose workstation for applications such as dispensing, machining, packaging, sorting, and other industrial tasks.

  12. Active workstation allows office workers to work efficiently while sitting and exercising moderately.

    Science.gov (United States)

    Koren, Katja; Pišot, Rado; Šimunič, Boštjan

    2016-05-01

    The aim of this study was to determine the effects of a moderate-intensity active workstation on time and errors during simulated office work, analysing simultaneous work and exercise for non-sedentary office workers. We monitored oxygen uptake, heart rate, sweat stain area, self-perceived effort, typing-test time with typing error count, and cognitive performance during 30 min of exercise with no cycling or cycling at 40 and 80 W. Compared to baseline, we found increased physiological responses at 40 and 80 W, which correspond to moderate physical activity (PA). Typing time significantly increased by 7.3% (p = 0.002) in C40W and by 8.9% (p = 0.011) in C80W. Typing error count and cognitive performance were unchanged. Although moderate-intensity exercise performed on a cycling workstation during simulated office tasks increases task execution time, the effect size is moderate and the error rate does not increase. Participants confirmed that such a working design is suitable for achieving the minimum standards for daily PA during work hours. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  13. Automated processing of forensic casework samples using robotic workstations equipped with nondisposable tips: contamination prevention.

    Science.gov (United States)

    Frégeau, Chantal J; Lett, C Marc; Elliott, Jim; Yensen, Craig; Fourney, Ron M

    2008-05-01

    An automated process has been developed for the analysis of forensic casework samples using TECAN Genesis RSP 150/8 or Freedom EVO liquid handling workstations equipped exclusively with nondisposable tips. Robot tip cleaning routines have been incorporated strategically within the DNA extraction process as well as at the end of each session. Alternative options were examined for cleaning the tips and different strategies were employed to verify cross-contamination. A 2% sodium hypochlorite wash (1/5th dilution of the 10.8% commercial bleach stock) proved to be the best overall approach for preventing cross-contamination of samples processed using our automated protocol. The bleach wash steps do not adversely impact the short tandem repeat (STR) profiles developed from DNA extracted robotically and allow for major cost savings through the implementation of fixed tips. We have demonstrated that robotic workstations equipped with fixed pipette tips can be used with confidence with properly designed tip washing routines to process casework samples using an adapted magnetic bead extraction protocol.

  14. SunFast: A sun workstation based, fuel analysis scoping tool for pressurized water reactors

    International Nuclear Information System (INIS)

    Bohnhoff, W.J.

    1991-05-01

    The objective of this research was to develop a fuel cycle scoping program for light water reactors and implement the program on a workstation class computer. Nuclear fuel management problems are quite formidable due to the many fuel arrangement options available. Therefore, an engineer must perform multigroup diffusion calculations for a variety of different strategies in order to determine an optimum core reload. Standard fine mesh finite difference codes result in a considerable computational cost. A better approach is to build upon the proven reliability of currently available mainframe computer programs, and improve the engineering efficiency by taking advantage of the most useful characteristic of workstations: enhanced man/machine interaction. This dissertation contains a description of the methods and a user's guide for the interactive fuel cycle scoping program, SunFast. SunFast provides computational speed and accuracy of solution along with a synergetic coupling between the user and the machine. It should prove to be a valuable tool when extensive sets of similar calculations must be done at a low cost as is the case for assessing fuel management strategies. 40 refs

  15. Experiences with serial and parallel algorithms for channel routing using simulated annealing

    Science.gov (United States)

    Brouwer, Randall Jay

    1988-01-01

    Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back up out of local minima that may be encountered by inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented proposes very relaxed restrictions on the types of allowable transformations, including overlapping nets. By freeing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of the transformation utilizes a number of heuristics, still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation, and a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.
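
    The acceptance rule that lets an annealing-based router climb out of local minima is compact enough to sketch. The following Python fragment is a generic illustration of that rule, not the author's channel-routing code; the cost function, move generator, and geometric cooling schedule shown here are assumptions made only for the example.

        import math, random

        def anneal(initial_solution, cost, propose_move,
                   t_start=10.0, t_end=0.01, alpha=0.95, moves_per_temp=200):
            """Generic simulated annealing: worse moves are accepted with
            probability exp(-delta/T), which shrinks as temperature T is lowered."""
            current = initial_solution
            current_cost = cost(current)
            best, best_cost = current, current_cost
            t = t_start
            while t > t_end:
                for _ in range(moves_per_temp):
                    candidate = propose_move(current)      # e.g. move a net; overlap allowed
                    candidate_cost = cost(candidate)       # overlap penalised inside cost()
                    delta = candidate_cost - current_cost
                    if delta <= 0 or random.random() < math.exp(-delta / t):
                        current, current_cost = candidate, candidate_cost
                        if current_cost < best_cost:
                            best, best_cost = current, current_cost
                t *= alpha                                 # geometric cooling schedule
            return best, best_cost

        # Tiny demo: minimise (x - 7)^2 over integers by proposing +/-1 moves.
        best, best_cost = anneal(0, lambda x: (x - 7) ** 2,
                                 lambda x: x + random.choice((-1, 1)))
        print(best, best_cost)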

  16. Increasing physical activity in office workers – the Inphact Treadmill study; a study protocol for a 13-month randomized controlled trial of treadmill workstations

    OpenAIRE

    Bergman, Frida; Boraxbekk, Carl-Johan; Wennberg, Patrik; Sörlin, Ann; Olsson, Tommy

    2015-01-01

    Background Sedentary behaviour is an independent risk factor for mortality and morbidity, especially for type 2 diabetes. Since office work is related to long periods that are largely sedentary, it is of major importance to find ways for office workers to engage in light intensity physical activity (LPA). The Inphact Treadmill study aims to investigate the effects of installing treadmill workstations in offices compared to conventional workstations. Methods/Design A two-arm, 13-month, randomi...

  17. Parallel symbolic state-space exploration is difficult, but what is the alternative?

    Directory of Open Access Journals (Sweden)

    Gianfranco Ciardo

    2009-12-01

    Full Text Available State-space exploration is an essential step in many modeling and analysis problems. Its goal is to find the states reachable from the initial state of a discrete-state model. The state space can be used to answer important questions, e.g., "Is there a dead state?" and "Can N become negative?", or as a starting point for sophisticated investigations expressed in temporal logic. Unfortunately, the state space is often so large that ordinary explicit data structures and sequential algorithms cannot cope, prompting the exploration of (1) parallel approaches using multiple processors, from simple workstation networks to shared-memory supercomputers, to satisfy large memory and runtime requirements, and (2) symbolic approaches using decision diagrams to encode the large structured sets and relations manipulated during state-space generation. Both approaches have merits and limitations. Parallel explicit state-space generation is challenging, but almost linear speedup can be achieved; however, the analysis is ultimately limited by the memory and processors available. Symbolic methods are a heuristic that can efficiently encode many, but not all, functions over a structured and exponentially large domain; here the pitfalls are subtler: their performance varies widely depending on the class of decision diagram chosen, the state variable order, and obscure algorithmic parameters. As symbolic approaches are often much more efficient than explicit ones for many practical models, we argue for the need to parallelize symbolic state-space generation algorithms, so that we can realize the advantages of both approaches. This is a challenging endeavor, as the most efficient symbolic algorithm, Saturation, is inherently sequential. We conclude by discussing challenges, efforts, and promising directions toward this goal.
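
    For contrast with the symbolic encodings discussed above, the explicit approach that the abstract describes as memory-bound can be written in a few lines. The sketch below is a minimal explicit breadth-first state-space generator, assuming only that a model supplies an initial state and a successor function; it is not taken from the paper.

        from collections import deque

        def explore(initial_state, successors):
            """Explicit breadth-first state-space generation.
            Every reachable state is stored individually, which is exactly
            what makes large models exceed available memory."""
            reachable = {initial_state}
            frontier = deque([initial_state])
            deadlocks = []
            while frontier:
                state = frontier.popleft()
                next_states = successors(state)
                if not next_states:
                    deadlocks.append(state)     # answers "is there a dead state?"
                for nxt in next_states:
                    if nxt not in reachable:
                        reachable.add(nxt)
                        frontier.append(nxt)
            return reachable, deadlocks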

  18. Parallel simulation of tsunami inundation on a large-scale supercomputer

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    finite difference calculation, (2) communication between adjacent layers for the calculations to connect each layer, and (3) global communication to obtain the time step which satisfies the CFL condition in the whole domain. A preliminary test on the K computer showed the parallel efficiency on 1024 cores was 57% relative to 64 cores. We estimate that the parallel efficiency will be considerably improved by applying a 2-D domain decomposition instead of the present 1-D domain decomposition in future work. The present parallel tsunami model was applied to the 2011 Great Tohoku tsunami. The coarsest resolution layer covers a 758 km × 1155 km region with a 405 m grid spacing. A nesting of five layers was used with the resolution ratio of 1/3 between nested layers. The finest resolution region has 5 m resolution and covers most of the coastal region of Sendai city. To complete 2 hours of simulation time, the serial (non-parallel) computation took approximately 4 days on a workstation. To complete the same simulation on 1024 cores of the K computer, it took 45 minutes which is more than two times faster than real-time. This presentation discusses the updated parallel computational performance and the efficient use of the K computer when considering the characteristics of the tsunami inundation simulation model in relation to the characteristics and capabilities of the K computer.
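
    The 57% figure quoted above is a relative parallel efficiency; the small calculation below shows how such a number is obtained from two timed runs. The timings used in the example are placeholders, not values reported by the authors.

        def relative_efficiency(t_ref, cores_ref, t_n, cores_n):
            """Efficiency of a run on cores_n cores relative to a reference
            run on cores_ref cores (instead of a serial baseline)."""
            speedup = t_ref / t_n
            ideal_speedup = cores_n / cores_ref
            return speedup / ideal_speedup

        # Hypothetical timings: if 64 cores took 512 s and 1024 cores took 56 s,
        # the efficiency on 1024 cores relative to 64 cores is about 0.57 (57%).
        print(relative_efficiency(t_ref=512.0, cores_ref=64, t_n=56.0, cores_n=1024))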

  19. An advanced tube wear and fatigue workstation to predict flow induced vibrations of steam generator tubes

    International Nuclear Information System (INIS)

    Gay, N.; Baratte, C.; Flesch, B.

    1997-01-01

    Flow induced tube vibration damage is a major concern for designers and operators of nuclear power plant steam generators (SG). The operating flow-induced vibrational behaviour has to be estimated accurately to allow a precise evaluation of the new safety margins in order to optimize the maintenance policy. For this purpose, an industrial 'Tube Wear and Fatigue Workstation', called 'GEVIBUS Workstation' and based on an advanced methodology for predictive analysis of flow-induced vibration of tube bundles subject to cross-flow, has been developed at Electricite de France. The GEVIBUS Workstation is an interactive processor linking modules such as: thermalhydraulic computation, parametric finite element builder, interface between the finite element model, the thermalhydraulic code and the vibratory response computations, refined modelling of fluid-elastic and random forces, linear and non-linear dynamic response of the coupled fluid-structure system, evaluation of tube damage due to fatigue and wear, and graphical outputs. Two practical applications are also presented in the paper; the first simulation refers to an experimental set-up consisting of a straight tube bundle subject to water cross-flow, while the second one deals with an industrial configuration which has been observed in some operating steam generators, i.e., top tube support plate degradation. In the first case the GEVIBUS predictions in terms of tube displacement time histories and phase planes were found to be in very good agreement with experiment. In the second application the GEVIBUS computation showed that a tube with localized degradation is much more stable than a tube located in an extended degradation zone. Important conclusions are also drawn concerning maintenance. (author)

  20. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    Directory of Open Access Journals (Sweden)

    Cieślik Marcin

    2011-02-01

    Full Text Available Abstract Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain-specific data containers (e.g., for biomolecular sequences, alignments, and structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy.
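
    The dataflow idea described above - reusable components connected by data-pipes and evaluated over a pool of workers - can be illustrated without the PaPy API itself. The sketch below uses only the Python standard library; the component functions, batch size, and worker count are invented for the example and are not part of PaPy.

        from multiprocessing import Pool

        def parse(record):            # stand-in for a user-written component
            return record.strip().upper()

        def annotate(seq):            # a second, data-coupled component
            return (seq, len(seq))

        def pipeline(records, workers=4, batch=4):
            """Two chained stages: the first is mapped over a pool of worker
            processes in batches, the second is applied lazily as results flow out."""
            with Pool(processes=workers) as pool:
                for parsed in pool.imap(parse, records, chunksize=batch):
                    yield annotate(parsed)

        if __name__ == "__main__":
            print(list(pipeline([" acgt ", "ttaa", "ggcc "])))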

  1. From LESSEPS to the workstation for reliability engineers

    International Nuclear Information System (INIS)

    Ancelin, C.; Bouissou, M.; Collet, J.; Gallois, M.; Magne, L.; Villatte, N.; Yedid, C.; Mulet-Marquis, D.

    1994-01-01

    Three Mile Island and Chernobyl in the nuclear industry, Challenger in the space industry, Seveso and Bhopal in the chemical industry - all these accidents show how difficult it is to forecast all likely accident scenarios that may occur in complex systems. This was, however, the objective of the probabilistic safety assessment (PSA) performed by EDF at the Paluel nuclear power plant. The full computerization of this study led to the LESSEPS project, aimed at automating three different steps: generation of reliability models based on the use of expert systems, qualitative and quantitative processing of these models using computer codes, and overall management of PSA studies. This paper presents the results obtained and the gradual transformation of this first generation of tools into a workstation aimed at integrating reliability studies at all stages of an industrial process. (author)

  2. The effect of a sit-stand workstation intervention on daily sitting, standing and physical activity: protocol for a 12 month workplace randomised control trial.

    Science.gov (United States)

    Hall, Jennifer; Mansfield, Louise; Kay, Tess; McConnell, Alison K

    2015-02-15

    A lack of physical activity and excessive sitting can contribute to poor physical health and wellbeing. The high percentage of the UK adult population in employment, and the prolonged sitting associated with desk-based office-work, make these workplaces an appropriate setting for interventions to reduce sedentary behaviour and increase physical activity. This pilot study aims to determine the effect of an office-based sit-stand workstation intervention, compared with usual desk use, on daily sitting, standing and physical activity, and to examine the factors that underlie sitting, standing and physical activity, within and outside, the workplace. A randomised control trial (RCT) comparing the effects of a sit-stand workstation only and a multi-component sit-stand workstation intervention, with usual desk-based working practice (no sit-stand workstation) will be conducted with office workers across two organisations, over a 12 month period (N = 30). The multicomponent intervention will comprise organisational, environmental and individual elements. Objective data will be collected at baseline, and after 2-weeks, 3-months, 6-months and 12-months of the intervention. Objective measures of sitting, standing, and physical activity will be made concurrently (ActivPAL3™ and ActiGraph (GT3X+)). Activity diaries, ethnographic participant observation, and interviews with participants and key organisational personnel will be used to elicit understanding of the influence of organisational culture on sitting, standing and physical activity behaviour in the workplace. This study will be the first long-term sit-stand workstation intervention study utilising an RCT design, and incorporating a comprehensive process evaluation. The study will generate an understanding of the factors that encourage and restrict successful implementation of sit-stand workstation interventions, and will help inform future occupational wellbeing policy and practice. Other strengths include the

  3. Comparison of personal computer with CT workstation in the evaluation of 3-dimensional CT image of the skull

    International Nuclear Information System (INIS)

    Kang, Bok Hee; Kim, Kee Deog; Park, Chang Seo

    2001-01-01

    To evaluate the usefulness of the reconstructed 3-dimensional image on the personal computer in comparison with that of the CT workstation by quantitative comparison and analysis. The spiral CT data obtained from 27 persons were transferred from the CT workstation to a personal computer, and they were reconstructed as a 3-dimensional image on the personal computer using V-works 2.0™. One observer obtained 14 measurements on the reconstructed 3-dimensional image on both the CT workstation and the personal computer. A paired t-test was used to evaluate the intraobserver difference and the mean value of each measurement on the CT workstation and the personal computer. Pearson correlation analysis and % incongruence were also performed. I-Gn, N-Gn, N-A, N-Ns, B-A and G-Op did not show any statistically significant difference (p>0.05), while B-O, B-N, Eu-Eu, Zy-Zy, Biw, D-D, and Orbrb R and L showed statistically significant differences (p<0.05), but the mean values of the differences of all measurements were below 2 mm, except for D-D. The value of the correlation coefficient r was greater than 0.95 at I-Gn, N-Gn, N-A, N-Ns, B-A, B-N, G-Op, Eu-Eu, Zy-Zy, and Biw, and it was 0.75 at B-O, 0.78 at D-D, and 0.82 at both Orbrb R and L. The % incongruence was below 4% at I-Gn, N-Gn, N-A, N-Ns, B-A, B-N, G-Op, Eu-Eu, Zy-Zy, and Biw, and 7.18%, 10.78%, 4.97%, and 5.89% at B-O, D-D, and Orbrb R and L, respectively. It can be considered that the personal computer is highly useful for reconstruction of the 3-dimensional image in terms of economics, accessibility and convenience, except for thin bones and the landmarks which are difficult to locate.

  4. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software. Revision 3

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.

    1991-11-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization Plans for Word Processors, Personal Computers, Workstations, and Associated Software to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference that documents the plans of each organization for office automation, identifies appropriate planners and other contact people in those organizations, and encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan.

  5. Survey of ANL organization plans for word processors, personal computers, workstations, and associated software. Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R.; Rockwell, V.S.

    1992-08-01

    The Computing and Telecommunications Division (CTD) has compiled this Survey of ANL Organization plans for Word Processors, Personal Computers, Workstations, and Associated Software (ANL/TM, Revision 4) to provide DOE and Argonne with a record of recent growth in the acquisition and use of personal computers, microcomputers, and word processors at ANL. Laboratory planners, service providers, and people involved in office automation may find the Survey useful. It is for internal use only, and any unauthorized use is prohibited. Readers of the Survey should use it as a reference document that (1) documents the plans of each organization for office automation, (2) identifies appropriate planners and other contact people in those organizations and (3) encourages the sharing of this information among those people making plans for organizations and decisions about office automation. The Survey supplements information in both the ANL Statement of Site Strategy for Computing Workstations (ANL/TM 458) and the ANL Site Response for the DOE Information Technology Resources Long-Range Plan (ANL/TM 466).

  6. Time synchronization algorithm of distributed system based on server time-revise and workstation self-adjust

    International Nuclear Information System (INIS)

    Zhou Shumin; Sun Yamin; Tang Bin

    2007-01-01

    In order to enhance the time synchronization quality of a distributed system, a time synchronization algorithm based on server time revision and workstation self-adjustment is proposed. The time-revise cycle and the self-adjust process are introduced in the paper. The algorithm effectively reduces network traffic and enhances the quality of clock synchronization. (authors)
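
    The abstract does not give the algorithm's details, so the sketch below only illustrates the general idea of a server-supplied time revision combined with a gradual local adjustment on the workstation; the round-trip compensation, the hypothetical request_server_time() call, and the slew gain are assumptions, not the authors' method.

        import time

        def estimate_offset(request_server_time):
            """Cristian-style estimate: ask the server for its clock and assume
            it read that clock half a round-trip before the reply arrived."""
            t0 = time.time()
            server_time = request_server_time()   # e.g. a UDP/RPC call to the time server
            t1 = time.time()
            round_trip = t1 - t0
            return server_time + round_trip / 2.0 - t1

        def self_adjust(offset, correction=0.0, gain=0.1):
            """Workstation-side self-adjustment: apply only a fraction of the
            measured offset per cycle so the local clock is slewed, not stepped."""
            return correction + gain * offset

        # Demo with a fake server whose clock runs 1.5 s ahead of the local clock:
        offset = estimate_offset(lambda: time.time() + 1.5)
        print(round(offset, 2))   # ~1.5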

  7. ESCRIME: testing bench for advanced operator workstations in future plants

    International Nuclear Information System (INIS)

    Poujol, A.; Papin, B.

    1994-01-01

    The problem of optimal task allocation between man and computer for the operation of nuclear power plants is of major concern for the design of future plants. As the increased level of automation induces the modification of the tasks currently devoted to the operator in the control room, it is very important to anticipate these consequences at the plant design stage. The improvement of man-machine cooperation is expected to play a major role in minimizing the impact of human errors on plant safety. The CEA has launched a research program concerning the evolution of plant operation in order to optimize the efficiency of human/computer systems for better safety. The objective of this program is to evaluate different modalities of man-machine task sharing, in a representative context. It relies strongly upon the development of a specific testing facility, the ESCRIME work bench, which is presented in this paper. It consists of an EDF 1300MWe PWR plant simulator connected to an operator workstation. The plant simulator model represents, at a significant level of detail, the instrumentation and control of the plant and the main connected circuits. The operator interface is based on the generalization of the use of interactive graphic displays, and is intended to be consistent with the tasks to be performed by the operator. The functional architecture of the workstation is modular, so that different cooperation mechanisms can be implemented within the same framework. It is based on a thorough analysis and structuring of plant control tasks, in normal as well as in accident situations. The software architecture design follows the distributed artificial intelligence approach. Cognitive agents cooperate in order to operate the process. The paper presents the basic principles and the functional architecture of the test bed and describes the steps and the present status of the program. (author)

  8. Use of the stereoscopic virtual reality display system for the detection and characterization of intracranial aneurysms: A comparison with the conventional computed tomography workstation and 3D rotational angiography.

    Science.gov (United States)

    Liu, Xiujuan; Tao, Haiquan; Xiao, Xigang; Guo, Binbin; Xu, Shangcai; Sun, Na; Li, Maotong; Xie, Li; Wu, Changjun

    2018-07-01

    This study aimed to compare the diagnostic performance of the stereoscopic virtual reality display system with the conventional computed tomography (CT) workstation and three-dimensional rotational angiography (3DRA) for intracranial aneurysm detection and characterization, with a focus on small aneurysms and those near the bone. First, 42 patients with suspected intracranial aneurysms underwent both 256-row CT angiography (CTA) and 3DRA. Volume rendering (VR) images were captured using the conventional CT workstation. Next, VR images were transferred to the stereoscopic virtual reality display system. Two radiologists independently assessed the results that were obtained using the conventional CT workstation and the stereoscopic virtual reality display system. The 3DRA results were considered as the ultimate reference standard. Based on 3DRA images, 38 aneurysms were confirmed in 42 patients. Two cases were misdiagnosed and one was missed when the traditional CT workstation was used. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of the conventional CT workstation were 94.7%, 85.7%, 97.3%, 75%, and 99.3%, respectively, on a per-aneurysm basis. The stereoscopic virtual reality display system missed a case. The sensitivity, specificity, PPV, NPV, and accuracy of the stereoscopic virtual reality display system were 100%, 85.7%, 97.4%, 100%, and 97.8%, respectively. No difference was observed in the accuracy of the traditional CT workstation, stereoscopic virtual reality display system, and 3DRA in detecting aneurysms. The stereoscopic virtual reality display system has some advantages in detecting small aneurysms and those near the bone. The virtual reality stereoscopic vision obtained through the system was found to be a useful tool in intracranial aneurysm diagnosis and pre-operative 3D imaging. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Desk-based workers' perspectives on using sit-stand workstations: a qualitative analysis of the Stand@Work study

    NARCIS (Netherlands)

    Chau, J.Y.; Daley, M.; Srinivasan, A.; Dunn, S.; Bauman, A.E.; van der Ploeg, H.P.

    2014-01-01

    Background: Prolonged sitting time has been identified as a health risk factor. Sit-stand workstations allow desk workers to alternate between sitting and standing throughout the working day, but not much is known about their acceptability and feasibility. Hence, the aim of this study was to

  10. MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker

    Energy Technology Data Exchange (ETDEWEB)

    Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw

    2009-06-09

    MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm, which is based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
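
    The core numerical kernel named in the abstract, a preconditioned conjugate gradient solver, is standard enough to sketch. The NumPy fragment below is a textbook PCG for a symmetric positive-definite system, not MADmap's parallel implementation; in map-making the matrix-vector product would be applied implicitly through FFTs and sparse pointing operations rather than stored densely.

        import numpy as np

        def pcg(apply_A, b, apply_Minv, x0=None, tol=1e-8, max_iter=500):
            """Preconditioned conjugate gradient for A x = b with A symmetric
            positive definite; apply_A and apply_Minv are callables so the
            operators never need to be formed as explicit matrices."""
            x = np.zeros_like(b) if x0 is None else x0.copy()
            r = b - apply_A(x)
            z = apply_Minv(r)
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = apply_A(p)
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = apply_Minv(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Small dense check of the solver (not a map-making problem):
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        x = pcg(lambda v: A @ v, b, lambda v: v / np.diag(A))   # Jacobi preconditioner
        print(x, A @ x)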

  11. Contribution to the algorithmic and efficient programming of new parallel architectures including accelerators for neutron physics and shielding computations

    International Nuclear Information System (INIS)

    Dubois, J.

    2011-01-01

    In science, simulation is a key process for research and validation. Modern computer technology allows faster numerical experiments, which are cheaper than real models. In the field of neutron simulation, the calculation of eigenvalues is one of the key challenges. The complexity of these problems is such that a large amount of computing power may be necessary. The work of this thesis is, first, the evaluation of new computing hardware such as graphics cards or massively multi-core chips, and their application to eigenvalue problems for neutron simulation. Then, in order to address the massive parallelism of national supercomputers, we also study the use of asynchronous hybrid methods for solving eigenvalue problems at this very high level of parallelism. We then ran the experiments of this research on several national supercomputers, such as the Titane hybrid machine of the Computing Center for Research and Technology (CCRT), the Curie machine of the Very Large Computing Centre (TGCC), currently being installed, and the Hopper machine at the Lawrence Berkeley National Laboratory (LBNL). We also ran experiments on local workstations to illustrate the interest of this research for everyday use with local computing resources. (author)
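
    Eigenvalue problems in reactor physics are classically attacked with the power iteration, whose dominant eigenvalue corresponds to the multiplication factor; the hybrid and asynchronous methods studied in the thesis are accelerations and parallelisations of schemes of this kind. The fragment below is only the textbook serial iteration on a small dense matrix, given to fix ideas; it is not the author's code.

        import numpy as np

        def power_iteration(apply_A, x0, tol=1e-10, max_iter=10_000):
            """Return the dominant eigenvalue and eigenvector of the operator apply_A."""
            x = x0 / np.linalg.norm(x0)
            eigenvalue = 0.0
            for _ in range(max_iter):
                y = apply_A(x)
                new_eigenvalue = x @ y               # Rayleigh quotient for the current iterate
                x = y / np.linalg.norm(y)
                if abs(new_eigenvalue - eigenvalue) < tol:
                    break
                eigenvalue = new_eigenvalue
            return eigenvalue, x

        A = np.array([[2.0, 1.0], [1.0, 3.0]])
        lam, vec = power_iteration(lambda v: A @ v, np.ones(2))
        print(lam)   # converges to the largest eigenvalue, (5 + sqrt(5)) / 2 ≈ 3.618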

  12. The safety monitor and RCM workstation as complementary tools in risk based maintenance optimization

    International Nuclear Information System (INIS)

    Rawson, P.D.

    2000-01-01

    Reliability Centred Maintenance (RCM) represents a proven technique for rendering maintenance activities safer, more effective, and less expensive, in terms of systems unavailability and resource management. However, it is believed that RCM can be enhanced by the additional consideration of operational plant risk. This paper discusses how two computer-based tools, i.e., the RCM Workstation and the Safety Monitor, can complement each other in helping to create a living preventive maintenance strategy. (author)

  13. Instrument workstation for the EGSE of the Near Infrared Spectro-Photometer instrument (NISP) of the EUCLID mission

    Science.gov (United States)

    Trifoglio, M.; Gianotti, F.; Conforti, V.; Franceschi, E.; Stephen, J. B.; Bulgarelli, A.; Fioretti, V.; Maiorano, E.; Nicastro, L.; Valenziano, L.; Zoli, A.; Auricchio, N.; Balestra, A.; Bonino, D.; Bonoli, C.; Bortoletto, F.; Capobianco, V.; Chiarusi, T.; Corcione, L.; Debei, S.; De Rosa, A.; Dusini, S.; Fornari, F.; Giacomini, F.; Guizzo, G. P.; Ligori, S.; Margiotta, A.; Mauri, N.; Medinaceli, E.; Morgante, G.; Patrizii, L.; Sirignano, C.; Sirri, G.; Sortino, F.; Stanco, L.; Tenti, M.

    2016-07-01

    The NISP instrument on board the Euclid ESA mission will be developed and tested at different levels of integration using various test equipment which shall be designed and procured through a collaborative and coordinated effort. The NISP Instrument Workstation (NI-IWS) will be part of the EGSE configuration that will support the NISP AIV/AIT activities from the NISP Warm Electronics level up to the launch of Euclid. One workstation is required for the NISP EQM/AVM, and a second one for the NISP FM. Each workstation will follow the respective NISP model after delivery to ESA for Payload and Satellite AIV/AIT and launch. At these levels the NI-IWS shall be configured as part of the Payload EGSE, the System EGSE, and the Launch EGSE, respectively. After launch, the NI-IWS will be also re-used in the Euclid Ground Segment in order to support the Commissioning and Performance Verification (CPV) phase, and for troubleshooting purposes during the operational phase. The NI-IWS is mainly aimed at the local storage in a suitable format of the NISP instrument data and metadata, at local retrieval, processing and display of the stored data for on-line instrument assessment, and at the remote retrieval of the stored data for off-line analysis on other computers. We describe the design of the IWS software that will create a suitable interface to the external systems in each of the various configurations envisaged at the different levels, and provide the capabilities required to monitor and verify the instrument functionalities and performance throughout all phases of the NISP lifetime.

  14. Evaluation of a PACS workstation for interpreting body CT studies

    International Nuclear Information System (INIS)

    Franken, E.A.; Berbaum, K.S.; Honda, H.; McGuire, C.; Weis, R.R.; Barloon, T.

    1989-01-01

    This paper reports a comparison of conventional hard-copy images from 266 body CT studies with those provided by a picture archiving and communication system (PACS) workstation. PACS images were evaluated before and after use of various image processing features. Most cases were depicted equally well, but in about one-fourth of the cases, diagnostic features were shown more clearly on PACS images. When PACS images were viewed first, a change in diagnosis after subsequent hard-copy inspection was infrequent, but when hard-copy images were viewed first, the converse was true. The image processing features of PACS were critical to its superior performance. The ability of a PACS to provide both image display and manipulation accounts for the superiority of that system.

  15. Can We Afford These Affordances? GarageBand and the Double-Edged Sword of the Digital Audio Workstation

    Science.gov (United States)

    Bell, Adam Patrick

    2015-01-01

    The proliferation of computers, tablets, and smartphones has resulted in digital audio workstations (DAWs) such as GarageBand in being some of the most widely distributed musical instruments. Positing that software designers are dictating the music education of DAW-dependent music-makers, I examine the fallacy that music-making applications such…

  16. Parallel Access of Out-Of-Core Dense Extendible Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow J; Rotem, Doron

    2007-07-26

    Datasets used in scientific and engineering applications are often modeled as dense multi-dimensional arrays. For very large datasets, the corresponding array models are typically stored out-of-core as array files. The array elements are mapped onto linear consecutive locations that correspond to the linear ordering of the multi-dimensional indices. Two conventional mappings used are the row-major order and the column-major order of multi-dimensional arrays. Such conventional mappings of dense array files highly limit the performance of applications and the extendibility of the dataset. Firstly, an array file that is organized in, say, row-major order causes applications that subsequently access the data in column-major order to have abysmal performance. Secondly, any subsequent expansion of the array file is limited to only one dimension. Expansions of such out-of-core conventional arrays along arbitrary dimensions require storage reorganization that can be very expensive. We present a solution for storing out-of-core dense extendible arrays that resolves the two limitations. The method uses a mapping function F*(), together with information maintained in axial vectors, to compute the linear address of an extendible array element when passed its k-dimensional index. We also give the inverse function, F*^-1(), for deriving the k-dimensional index when given the linear address. We show how the mapping function, in combination with MPI-IO and a parallel file system, allows for the growth of the extendible array without reorganization and with no significant performance degradation of applications accessing elements in any desired order. We give methods for reading and writing sub-arrays into and out of parallel applications that run on a cluster of workstations. The axial vectors are replicated and maintained in each node that accesses sub-array elements.
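
    The mapping function F*() and its inverse are specific to the paper's axial-vector scheme and are not reproduced here. As a point of reference, the sketch below shows the conventional row-major mapping and its inverse that the paper improves upon, since it is precisely this fixed linearisation that forces expensive reorganisation when a conventional array file grows along an arbitrary dimension.

        def linear_address(index, shape):
            """Row-major mapping from a k-dimensional index to a linear offset."""
            addr = 0
            for i, n in zip(index, shape):
                if not 0 <= i < n:
                    raise IndexError("index out of bounds")
                addr = addr * n + i
            return addr

        def k_dim_index(addr, shape):
            """Inverse mapping: recover the k-dimensional index from the offset."""
            index = []
            for n in reversed(shape):
                index.append(addr % n)
                addr //= n
            return tuple(reversed(index))

        shape = (4, 5, 6)
        assert linear_address((2, 3, 1), shape) == 2 * 5 * 6 + 3 * 6 + 1   # = 79
        assert k_dim_index(79, shape) == (2, 3, 1)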

  17. An Adaptive Method For Texture Characterization In Medical Images Implemented on a Parallel Virtual Machine

    Directory of Open Access Journals (Sweden)

    Socrates A. Mylonas

    2003-06-01

    Full Text Available This paper describes the application of a new texture characterization algorithm for the segmentation of medical ultrasound images. The morphology of these images poses significant problems for the application of traditional image processing techniques, and their analysis has been the subject of research for several years. The basis of the algorithm is an optimum signal modelling algorithm (Least Mean Squares-based), which estimates a set of parameters from small image regions. The algorithm has been converted to a structure suitable for implementation on a Parallel Virtual Machine (PVM) consisting of a Network of Workstations (NoW), to improve processing speed. Tests were initially carried out on standard textured images. This paper describes preliminary results of the application of the algorithm in texture discrimination and segmentation of medical ultrasound images. The images examined are primarily used in the diagnosis of carotid plaques, which are linked to the risk of stroke.
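
    The paper's segmentation is built on a Least-Mean-Squares signal model estimated over small image regions; the details of that model are not given in the abstract, so the fragment below shows only the generic LMS update that such an estimator rests on, with a linear predictor over a neighbourhood of previous samples chosen purely for illustration.

        import numpy as np

        def lms_fit(samples, order=4, mu=0.01):
            """Adapt the weights of a linear predictor with the LMS rule
            w <- w + mu * error * input; the converged weights act as a
            compact texture descriptor for the region the samples come from."""
            w = np.zeros(order)
            for n in range(order, len(samples)):
                x = samples[n - order:n][::-1]      # most recent samples first
                prediction = w @ x
                error = samples[n] - prediction
                w += mu * error * x
            return w

        region = np.sin(np.linspace(0, 20, 200)) + 0.05 * np.random.randn(200)
        print(lms_fit(region))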

  18. Energy consumption of workstations and external devices in school of business and information technology

    OpenAIRE

    Koret, Jere

    2012-01-01

    The purpose of this thesis was to measure the energy consumption of workstations and external devices in the School of Business and Information Technology and to search for possible solutions to reduce electricity consumption. The commissioner of the thesis was the Oulu University of Applied Sciences School of Business and Information Management unit. The reason for the study is that the School of Business and Information Management has an environmental plan which is based on the ISO 14001 standard and this t...

  19. Pc-Based Floating Point Imaging Workstation

    Science.gov (United States)

    Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin

    1989-07-01

    The medical, military, scientific and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems and as these systems become available, more and more applications are discovered and more demands are made. From the designer's perspective the challenge to meet these demands forces him to attack the problem of imaging from a different perspective. The computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications but at the same time anticipating new, unthought of uses. Here at the University of Washington Image Processing Systems Lab (IPSL) we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal to provide powerful and flexible, floating point processing capabilities, along with graphics functions in an affordable package suitable for diverse environments and many applications.

  20. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore processors and leverage their power in your programs. Sharing hands-on case studies and real-world examples, the

  1. Comparison of radiant and convective cooling of office room: effect of workstation layout

    DEFF Research Database (Denmark)

    Bolashikov, Zhecho Dimitrov; Melikov, Arsen Krikor; Rezgals, Lauris

    2014-01-01

    The impact of heat source location (room layout) on the thermal environment generated in a double office room with four cooling ventilation systems - overhead ventilation, chilled ceiling with overhead ventilation, active chilled beam, and active chilled beam with radiant panels - was measured and compared. The room was furnished with two workstations, two laptops and two thermal manikins resembling occupants. Two heat load levels, design (65 W/m2) and usual (39 W/m2), were generated by adding heat from warm panels simulating solar radiation. Two set-ups were studied: occupants sitting...

  2. A cycling workstation to facilitate physical activity in office settings.

    Science.gov (United States)

    Elmer, Steven J; Martin, James C

    2014-07-01

    Facilitating physical activity during the workday may help desk-bound workers reduce risks associated with sedentary behavior. We 1) evaluated the efficacy of a cycling workstation to increase energy expenditure while performing a typing task and 2) fabricated a power measurement system to determine the accuracy and reliability of an exercise cycle. Ten individuals performed 10 min trials of sitting while typing (SIT type) and pedaling while typing (PED type). Expired gases were recorded and typing performance was assessed. Metabolic cost during PED type was ∼2.5 × greater than during SIT type (255 ± 14 vs. 100 ± 11 kcal·h⁻¹), indicating that the cycling workstation can facilitate physical activity without compromising typing performance. The exercise cycle's inaccuracy could be misleading to users. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  3. Workstation computer systems for in-core fuel management

    International Nuclear Information System (INIS)

    Ciccone, L.; Casadei, A.L.

    1992-01-01

    The advancement of powerful engineering workstations has made it possible to have thermal-hydraulics and accident analysis computer programs operating efficiently, with a significant performance/cost ratio compared to large mainframe computers. Today, nuclear utilities are acquiring independent engineering analysis capability for fuel management and safety analyses. Computer systems currently available to utility organizations vary widely, thus requiring that this software be operational on a number of computer platforms. Recognizing these trends, Westinghouse adopted a software development life cycle process for the software development activities which strictly controls the development, testing and qualification of design computer codes. In addition, software standards to ensure maximum portability were developed and implemented, including adherence to FORTRAN 77 and the use of uniform system interface and auxiliary routines. A comprehensive test matrix was developed for each computer program to ensure that evolution of code versions preserves the licensing basis. In addition, the results of such test matrices establish the Quality Assurance basis and consistency for the same software operating on different computer platforms. (author). 4 figs

  4. Ergonomic Evaluations of Microgravity Workstations

    Science.gov (United States)

    Whitmore, Mihriban; Berman, Andrea H.; Byerly, Diane

    1996-01-01

    Various gloveboxes (GBXs) have been used aboard the Shuttle and ISS. Though the overall technical specifications are similar, each GBX's crew interface is unique. JSC conducted a series of ergonomic evaluations of the various glovebox designs to identify human factors requirements for new designs to provide operator commonality across different designs. We conducted two 0-g evaluations aboard the Shuttle to evaluate the material sciences GBX and the General Purpose Workstation (GPWS), and a KC-135 evaluation to compare combinations of arm hole interfaces and foot restraints (flexible arm holes were better than rigid ports for repetitive fine manipulation tasks). Posture analysis revealed that the smallest and tallest subjects assumed similar postures at all four configurations, suggesting that problematic postures are not necessarily a function of the operator's height but a function of the task characteristics. There was concern that the subjects were using the restrictive nature of the GBX's cuffs as an upper-body restraint to achieve such high forces, which might lead to neck/shoulder discomfort. EMG data revealed more consistent muscle performance at the GBX; the variability in the EMG profiles observed at the GPWS was attributed to the subjects' attempts to provide more stabilization for themselves in the loose, flexible gauntlets. Tests revealed that the GBX should be designed for a 95th-percentile American male to accommodate a neutral working posture. In addition, the foot restraint with knee support appeared beneficial for GBX operations. Crew comments suggested providing two foot restraint mechanical modes, loose and lock-down, to accommodate a wide range of tasks without egressing the restraint system. Thus far, we have developed preliminary design guidelines for GBXs and foot restraints.

  5. Differences in ergonomic and workstation factors between computer office workers with and without reported musculoskeletal pain.

    Science.gov (United States)

    Rodrigues, Mirela Sant'Ana; Leite, Raquel Descie Veraldi; Lelis, Cheila Maira; Chaves, Thaís Cristina

    2017-01-01

    Some studies have suggested a causal relationship between computer work and the development of musculoskeletal disorders. However, studies considering the use of specific tools to assess workplace ergonomics and psychosocial factors in computer office workers with and without reported musculoskeletal pain are scarce. The aim of this study was to compare the ergonomic, physical, and psychosocial factors in computer office workers with and without reported musculoskeletal pain (MSP). Thirty-five computer office workers (aged 18-55 years) participated in the study. The following evaluations were completed: Rapid Upper Limb Assessment (RULA), Rapid Office Strain Assessment (ROSA), and Maastricht Upper Extremity Questionnaire revised Brazilian Portuguese version (MUEQ-Br revised). Student t-tests were used to make comparisons between groups. The computer office workers were divided into two groups: workers with reported MSP (WMSP, n = 17) and workers without positive report (WOMSP, n = 18). Those in the WMSP group showed significantly greater mean values in the total ROSA score (WMSP: 6.71 [CI95%: 6.20-7.21] and WOMSP: 5.88 [CI95%: 5.37-6.39], p = 0.01). The WMSP group also showed higher scores in the chair section of the ROSA, the workstation section of the MUEQ-Br revised, and the upper limb RULA score. The chair height and armrest sections of the ROSA showed higher mean values in the WMSP group than in the WOMSP group. A positive moderate correlation was observed between ROSA and RULA total scores (R = 0.63). Workers with reported MSP showed worse ergonomic indexes for the chair workstation and greater physical risk related to the upper limb (RULA upper limb section) than workers without pain. However, there were no observed differences between workers with and without MSP regarding work-related psychosocial factors. The results suggest that inadequate workstation conditions, specifically the chair height, armrest and backrest, are linked to improper upper limb postures and that these factors are contributing to the reported musculoskeletal pain.

  6. Advanced software development workstation project: Engineering scripting language. Graphical editor

    Science.gov (United States)

    1992-01-01

    Software development is widely considered to be a bottleneck in the development of complex systems, both in terms of development and in terms of maintenance of deployed systems. Cost of software development and maintenance can also be very high. One approach to reducing costs and relieving this bottleneck is increasing the reuse of software designs and software components. A method for achieving such reuse is a software parts composition system. Such a system consists of a language for modeling software parts and their interfaces, a catalog of existing parts, an editor for combining parts, and a code generator that takes a specification and generates code for that application in the target language. The Advanced Software Development Workstation is intended to be an expert system shell designed to provide the capabilities of a software part composition system.

  7. The use of mapped network drive to enhance the availability and accessibility of patient archive in ge-elscint Xpert workstations

    International Nuclear Information System (INIS)

    Chau; Kam Hung

    2004-01-01

    Purpose: To enhance the search and retrieval of patients' past records for clinical purposes. Methods: In the past, patient data was stored on MOD (Magneto-Optical Disk - Write Many Read Many) media. Its capacity is 327 MByte per side (654 MByte total). In our department, each disk holds around 10 days of studies. To search for a patient's past record, we need to pick a particular disk from a pile of MODs. With the advent of high speed networks and high capacity hard disks, we have developed a method to put patients' past records on line so that the search is easier and faster. The GE-Elscint Xpert Workstation runs on OS/2 Warp Connect Version 3 with inherent TCP/IP support. The current application is Apex Xpert Version 5.13. The Central Archive Unit is a 9 GByte hard disk, which limits the on-line capacity to around 6 months. The Workstation was connected to a Windows NT based PC on the network. A folder was created and shared for network access. To make resource sharing possible, we installed the OS/2 Warp Connect software IBM Peer for OS/2 on the Xpert Workstation. In addition, the Xpert based system was configured to be able to initiate communication with remote TCP/IP and ApexNet nodes. On the Xpert Workstation, we apply drive mapping by assigning a drive letter to the remote share located on the Windows NT based PC. To map a shared folder to a drive letter, specify the drive letter and the share's name on the Net Use command. For example, net use S: \\WinNTPC\DataFiles maps the DataFiles share on the WinNTPC computer to drive S. If any folder in the share's path contains a space, you must put the entire path, including the opening double backslash (\\), in quotation marks. Connections to NT resources can also be made through the OS/2 network browser, the Sharing and Connecting Graphic User Interface on the Desktop. An active user ID and password authorizes the user to browse and use shared drives. With this mapped drive, we can copy patient records to it.

  8. Defining the best parallelization strategy for a diphasic compressible fluid mechanics code

    International Nuclear Information System (INIS)

    Berthou, Jean-Yves; Fayolle, Eric; Faucher, Eric; Scliffet, Laurent

    2000-01-01

    Nuclear plants use steam generator safety valves in order to regulate possible large pressure variations of fluids. In case of an incident these valves may be fed with pressurized liquid water (for instance a pressure of 9 MPa at a temperature of 300degC). When a pressurized liquid is subjected to a strong pressure drop, it will start evaporating. This phenomenon is called flashing. Z. Bilicki and co-authors proposed the homogeneous relaxation model (HRM) to compute critical flashing water flows. Its computation in the case of non-stationary one-dimensional flashing flows has been carried out with the development of a dedicated time-dependent Finite Volume scheme based on a simplified version of the Godunov approach. Electricite De France's Research and Development division has developed a monodimensional implementation of the HRM model: ECOSS, an 11,000-line FORTRAN 90 code. Applied to a shock tube test case with a 20,000-element monodimensional mesh, the simulation of the physical phenomenon during 2.5 seconds requires at least 100 days of computation on a SUN Sparc-Ultra60. This execution time justifies the ECOSS parallelization. Furthermore, we plan a modeling on 2D meshes for the next few years. Knowing that multiplying the mesh dimension by a factor of 10 multiplies the execution time by a factor of 100, ECOSS would take years of computation with small 2D meshes (1000 x 1000) on a conventional workstation. This paper describes the parallelization analysis we have conducted and presents the experimental results we have obtained applying different programming models (MPI, OpenMP, HPF) on various platforms (a 4-processor Compaq ProLiant 6000, a 300-processor Cray T3E-750, a 16-processor HP V-Class, a 32-processor SGI Origin2000, a cluster of PCs and a 232-processor Compaq SC). These experimental results will be discussed according to the following criteria: efficiency, scalability, maintainability, development costs and portability. As a conclusion, we will present the

  9. Defining the best parallelization strategy for a diphasic compressible fluid mechanics code

    Energy Technology Data Exchange (ETDEWEB)

    Berthou, Jean-Yves; Fayolle, Eric [Electricite de France, Research and Development division, Modeling and Information Technologies Department, CLAMART CEDEX (France); Faucher, Eric; Scliffet, Laurent [Electricite de France, Research and Development Division, Mechanics and Component Technology Branch Department, Moret sur Loing (France)

    2000-09-01

    Nuclear plants use steam generator safety valves in order to regulate possible large pressure variations of fluids. In case of an incident these valves may be fed with pressurized liquid water (for instance a pressure of 9 MPa at a temperature of 300degC). When a pressurized liquid is subjected to a strong pressure drop, it will start evaporating. This phenomenon is called flashing. Z. Bilicki and co-authors proposed the homogeneous relaxation model (HRM) to compute critical flashing water flows. Its computation in the case of non-stationary one-dimensional flashing flows has been carried out with the development of a dedicated time-dependent Finite Volume scheme based on a simplified version of the Godunov approach. Electricite De France's Research and Development division has developed a monodimensional implementation of the HRM model: ECOSS, an 11,000-line FORTRAN 90 code. Applied to a shock tube test case with a 20,000-element monodimensional mesh, the simulation of the physical phenomenon during 2.5 seconds requires at least 100 days of computation on a SUN Sparc-Ultra60. This execution time justifies the ECOSS parallelization. Furthermore, we plan a modeling on 2D meshes for the next few years. Knowing that multiplying the mesh dimension by a factor of 10 multiplies the execution time by a factor of 100, ECOSS would take years of computation with small 2D meshes (1000 x 1000) on a conventional workstation. This paper describes the parallelization analysis we have conducted and presents the experimental results we have obtained applying different programming models (MPI, OpenMP, HPF) on various platforms (a 4-processor Compaq ProLiant 6000, a 300-processor Cray T3E-750, a 16-processor HP V-Class, a 32-processor SGI Origin2000, a cluster of PCs and a 232-processor Compaq SC). These experimental results will be discussed according to the following criteria: efficiency, scalability, maintainability, development costs and portability. As a conclusion, we will present the

  10. Using Mosix for Wide-Area Computational Resources

    Science.gov (United States)

    Maddox, Brian G.

    2004-01-01

    One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.

  11. Evaluation of radiological workstations and web-browser-based image distribution clients for a PACS project in hands-on workshops

    International Nuclear Information System (INIS)

    Boehm, Thomas; Handgraetinger, Oliver; Voellmy, Daniel R.; Marincek, Borut; Wildermuth, Simon; Link, Juergen; Ploner, Ricardo

    2004-01-01

    The methodology and outcome of a hands-on workshop for the evaluation of PACS (picture archiving and communication system) software for a multihospital PACS project are described. The following radiological workstations and web-browser-based image distribution software clients were evaluated as part of a multistep evaluation of PACS vendors in March 2001: Impax DS 3000 V 4.1/Impax Web1000 (Agfa-Gevaert, Mortsel, Belgium); PathSpeed V 8.0/PathSpeed Web (GE Medical Systems, Milwaukee, Wis., USA); ID Report/ID Web (Image Devices, Idstein, Germany); EasyVision DX/EasyWeb (Philips Medical Systems, Eindhoven, Netherlands); and MagicView 1000 VB33a/MagicWeb (Siemens Medical Systems, Erlangen, Germany). A set of anonymized DICOM test data was provided to enable direct image comparison. Radiologists (n=44) evaluated the radiological workstations and nonradiologists (n=53) evaluated the image distribution software clients using different questionnaires. One vendor was not able to import the provided DICOM data set. Another vendor had problems in displaying imported cross-sectional studies in the correct stack order. Three vendors (Agfa-Gevaert, GE, Philips) presented server-client solutions with web access. Two (Siemens, Image Devices) presented stand-alone solutions. The highest scores in the class of radiological workstations were achieved by ID Report from Image Devices (p<0.005). In the class of image distribution clients, the differences were statistically not significant. Questionnaire-based evaluation was shown to be useful for guaranteeing systematic assessment. The workshop was a great success in raising interest in the PACS project in a large group of future clinical users. The methodology used in the present study may be useful for other hospitals evaluating PACS. (orig.)

  12. Development of a new discharge control system utilizing UNIX workstations and VME-bus systems for JT-60

    Energy Technology Data Exchange (ETDEWEB)

    Akasaka, Hiromi; Sueoka, Michiharu; Takano, Shoji; Totsuka, Toshiyuki; Yonekawa, Izuru; Kurihara, Kenichi; Kimura, Toyoaki [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment

    2002-01-01

    The JT-60 discharge control system, which had used HIDIC-80E 16 bit mini-computers and CAMAC systems since the start of JT-60 experiments in 1985, was renewed in March 2001. The new system consists of a UNIX workstation and a VME-bus system, and features a distributed control architecture. The workstation handles message communication with the VME-bus system and with the controllers of the JT-60 sub-systems, as well as the processing needed for discharge control, because of its flexibility with respect to constructing a new network and modifying software. The VME-bus system performs discharge sequence control because it is suitable for fast real-time control and flexible with respect to hardware extension. The replacement has improved the control function and reliability of the discharge control system and also provides the performance necessary for future modifications of JT-60. The new system has been running successfully since April 2001. The data acquisition speed was confirmed to be twice as fast as that of the previous system. This report describes the major functions of the discharge control system, the technical ideas behind its development and the results of the initial operation in detail. (author)

  13. The effects of FreeSurfer version, workstation type, and Macintosh operating system version on anatomical volume and cortical thickness measurements.

    Science.gov (United States)

    Gronenschild, Ed H B M; Habets, Petra; Jacobs, Heidi I L; Mengelers, Ron; Rozendaal, Nico; van Os, Jim; Marcelis, Machteld

    2012-01-01

    FreeSurfer is a popular software package to measure cortical thickness and volume of neuroanatomical structures. However, little if anything is known about measurement reliability across various data processing conditions. Using a set of 30 anatomical T1-weighted 3T MRI scans, we investigated the effects of data processing variables such as FreeSurfer version (v4.3.1, v4.5.0, and v5.0.0), workstation (Macintosh and Hewlett-Packard), and Macintosh operating system version (OSX 10.5 and OSX 10.6). Significant differences were revealed between FreeSurfer version v5.0.0 and the two earlier versions. These differences were on average 8.8 ± 6.6% (range 1.3-64.0%) (volume) and 2.8 ± 1.3% (1.1-7.7%) (cortical thickness). Differences about a factor of two smaller were detected between Macintosh and Hewlett-Packard workstations and between OSX 10.5 and OSX 10.6. The observed differences are similar in magnitude to effect sizes reported in accuracy evaluations and neurodegenerative studies. The main conclusion is that, in the context of an ongoing study, users are discouraged from updating to a new major release of either FreeSurfer or the operating system, or from switching to a different type of workstation, without repeating the analysis; the results thus give quantitative support to successive recommendations stated by FreeSurfer developers over the years. Moreover, in view of the large and significant cross-version differences, it is concluded that formal assessment of the accuracy of FreeSurfer is desirable.

  14. The effects of FreeSurfer version, workstation type, and Macintosh operating system version on anatomical volume and cortical thickness measurements.

    Directory of Open Access Journals (Sweden)

    Ed H B M Gronenschild

    Full Text Available FreeSurfer is a popular software package to measure cortical thickness and volume of neuroanatomical structures. However, little if anything is known about measurement reliability across various data processing conditions. Using a set of 30 anatomical T1-weighted 3T MRI scans, we investigated the effects of data processing variables such as FreeSurfer version (v4.3.1, v4.5.0, and v5.0.0), workstation (Macintosh and Hewlett-Packard), and Macintosh operating system version (OSX 10.5 and OSX 10.6). Significant differences were revealed between FreeSurfer version v5.0.0 and the two earlier versions. These differences were on average 8.8 ± 6.6% (range 1.3-64.0%) (volume) and 2.8 ± 1.3% (1.1-7.7%) (cortical thickness). Differences about a factor of two smaller were detected between Macintosh and Hewlett-Packard workstations and between OSX 10.5 and OSX 10.6. The observed differences are similar in magnitude to effect sizes reported in accuracy evaluations and neurodegenerative studies. The main conclusion is that, in the context of an ongoing study, users are discouraged from updating to a new major release of either FreeSurfer or the operating system, or from switching to a different type of workstation, without repeating the analysis; the results thus give quantitative support to successive recommendations stated by FreeSurfer developers over the years. Moreover, in view of the large and significant cross-version differences, it is concluded that formal assessment of the accuracy of FreeSurfer is desirable.

  15. Computer-aided diagnosis workstation and database system for chest diagnosis based on multi-helical CT images

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Mori, Kiyoshi; Eguchi, Kenji; Kaneko, Masahiro; Kakinuma, Ryutarou; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru; Sasagawa, Michizou

    2006-03-01

    Multi-helical CT scanners have remarkably increased the speed at which chest CT images can be acquired for mass screening. Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images and a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification. We have also developed an electronic medical recording system and a prototype internet system for community health care across two or more regions, using a Virtual Private Network router together with biometric fingerprint and face authentication systems to protect medical information. Based on these diagnostic assistance methods, we have now developed a new computer-aided workstation and database that can display suspected lesions three-dimensionally in a short time. This paper describes basic studies that have been conducted to evaluate this new system. The results of this study indicate that our computer-aided diagnosis workstation and network system can increase diagnostic speed, diagnostic accuracy and the safety of medical information.

  16. Nuclear power plant simulation using advanced simulation codes through a state-of-the-art workstation

    International Nuclear Information System (INIS)

    Laats, E.T.; Hagen, R.N.

    1985-01-01

    The Nuclear Plant Analyzer (NPA) currently resides in a Control Data Corporation 176 mainframe computer at the Idaho National Engineering Laboratory (INEL). The NPA user community is expanding to include worldwide users who cannot consistently access the INEL mainframe computer from their own facilities. Thus, an alternate mechanism is needed to enable their use of the NPA. Therefore, a feasibility study was undertaken by EG and G Idaho to evaluate the possibility of developing a standalone workstation dedicated to the NPA

  17. FY1995 next generation highly parallel database / datamining server using 100 PC's and ATM switch; 1995 nendo tasudai no pasokon wo ATM ketsugoshita jisedai choheiretsu database mining server no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The objective of the research is first to build a highly parallel processing system using 100 personal computers and an ATM switch. The former is a commodity for computing, while the latter can be regarded as a commodity for future communication systems. Second is to implement a parallel relational database management system and a parallel data mining system over the 100-PC cluster system. Third is to run decision-support queries typical of data warehouses, to run association rule mining, and to prove the effectiveness of the proposed architecture as a next generation parallel database/datamining server. The performance/cost ratio of PCs is significantly improved compared with workstations and proprietary systems due to their mass production. The cost of ATM switches is also decreasing considerably since ATM is being widely accepted as a communication infrastructure. By combining 100 PCs as computing commodities and an ATM switch as a communication commodity, we built a large-scale parallel processing system inexpensively. Each node employs the Pentium Pro CPU and the communication bandwidth between PCs is more than 120 Mbits/sec. A new parallel relational DBMS was designed and implemented. TPC-D, which is a standard benchmark for decision support applications (100 GBytes), was executed. Our system attained much higher performance than current commercial systems, which are also much more expensive than ours. In addition, we developed a novel parallel data mining algorithm to extract association rules. We implemented it in our system and succeeded in attaining high performance. Thus it is verified that an ATM-connected PC cluster is very promising as a next generation platform for a large scale database/datamining server. (NEDO)
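
    One widely used way to parallelize association rule mining of the kind described above is count distribution: each node scans only its local share of the transactions and the per-candidate support counts are summed across nodes. The sketch below illustrates the idea with MPI; the transaction data, the contains() test and the candidate set are synthetic stand-ins, not the system described in the record.

      #include <mpi.h>
      #include <string.h>

      #define N_CANDIDATES 1024
      #define N_LOCAL_TX   100000

      /* synthetic stand-in: does local transaction t contain candidate itemset c? */
      static int contains(int t, int c) { return ((t + c) % 97) == 0; }

      int main(int argc, char **argv) {
          long local_count[N_CANDIDATES], global_count[N_CANDIDATES];
          MPI_Init(&argc, &argv);

          memset(local_count, 0, sizeof(local_count));
          for (int t = 0; t < N_LOCAL_TX; ++t)        /* scan local transactions only */
              for (int c = 0; c < N_CANDIDATES; ++c)
                  if (contains(t, c))
                      ++local_count[c];

          /* global support of every candidate = sum of the local counts */
          MPI_Allreduce(local_count, global_count, N_CANDIDATES,
                        MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

          /* candidates whose global support exceeds a threshold would now be
             kept and extended in the next Apriori-style pass */
          MPI_Finalize();
          return 0;
      }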

  18. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  19. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
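
    A concrete instance of the image assembly step mentioned above is sort-last compositing: every process renders its share of the scene into a private colour/depth buffer, and a per-pixel reduction keeps, for each pixel, the colour of the nearest fragment. The sketch below uses MPI's MINLOC reduction on (depth, colour) pairs; render_local() is a synthetic stand-in for a real renderer.

      #include <mpi.h>
      #include <stdlib.h>

      #define W 64
      #define H 64

      typedef struct { float depth; int color; } Fragment;   /* matches MPI_FLOAT_INT */

      static void render_local(Fragment *fb, int rank) {
          for (int i = 0; i < W * H; ++i) {                   /* synthetic stand-in */
              fb[i].depth = (float)((i * 31 + rank * 17) % 100) / 100.0f;
              fb[i].color = rank;
          }
      }

      int main(int argc, char **argv) {
          int rank;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          Fragment *local = malloc(W * H * sizeof(Fragment));
          Fragment *image = malloc(W * H * sizeof(Fragment));
          render_local(local, rank);

          /* per-pixel MINLOC over (depth, colour) pairs performs the z-compositing */
          MPI_Reduce(local, image, W * H, MPI_FLOAT_INT, MPI_MINLOC,
                     0, MPI_COMM_WORLD);

          /* rank 0 now holds the assembled image in image[].color */
          free(local); free(image);
          MPI_Finalize();
          return 0;
      }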

  20. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  1. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
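
    The algorithms for linear arrays mentioned above can be illustrated with odd-even transposition sort, one of the simplest network-oriented sorting schemes. The MPI sketch below keeps one key per process and is only meant to show the communication pattern, not any particular algorithm from the book.

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          int rank, size, key, other;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          key = (rank * 7919) % 100;                 /* synthetic unsorted key */

          for (int phase = 0; phase < size; ++phase) {
              int partner = (phase % 2 == rank % 2) ? rank + 1 : rank - 1;
              if (partner < 0 || partner >= size) continue;
              MPI_Sendrecv(&key, 1, MPI_INT, partner, 0,
                           &other, 1, MPI_INT, partner, 0,
                           MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              if (rank < partner) key = (key < other) ? key : other;  /* keep smaller */
              else                key = (key > other) ? key : other;  /* keep larger  */
          }

          printf("rank %d holds %d\n", rank, key);   /* keys are now in rank order */
          MPI_Finalize();
          return 0;
      }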

  2. Test-retest reliability and concurrent validity of a web-based questionnaire measuring workstation and individual correlates of work postures during computer work

    NARCIS (Netherlands)

    IJmker, S.; Mikkers, J.; Blatter, B.M.; Beek, A.J. van der; Mechelen, W. van; Bongers, P.M.

    2008-01-01

    Introduction: "Ergonomic" questionnaires are widely used in epidemiological field studies to study the association between workstation characteristics, work posture and musculoskeletal disorders among office workers. Findings have been inconsistent regarding the putative adverse effect of work

  3. Methodological Aspects of Modelling and Simulation of Robotized Workstations

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2018-05-01

    Full Text Available From the point of view of the development of application and program products, the key directions that need to be respected in computer support for project activities are quite clearly specified. User interfaces with a high degree of interactive graphical convenience, together with two-dimensional and three-dimensional computer graphics, contribute greatly to streamlining project methodologies and procedures. This is mainly because a large share of the tasks solved in the modern design of robotic systems is clearly graphical in nature. Automation of graphical tasks is therefore a significant development direction for the subject area. The authors present results of their research in the area of automation and computer-aided design of robotized systems. A new methodical approach to modelling robotic workstations, consisting of ten steps incorporated into the four phases of the logistics process of creating and implementing a robotic workplace, is presented. The emphasis is placed on the modelling and simulation phase, with verification of the elaborated methodologies on specific projects or elements of a robotized welding plant in automotive production.

  4. Out of Hours Emergency Computed Tomography Brain Studies: Comparison of Standard 3 Megapixel Diagnostic Workstation Monitors With the iPad 2.

    Science.gov (United States)

    Salati, Umer; Leong, Sum; Donnellan, John; Kok, Hong Kuan; Buckley, Orla; Torreggiani, William

    2015-11-01

    The purpose was to compare the performance of diagnostic workstation monitors and the Apple iPad 2 (Cupertino, CA) in the interpretation of emergency computed tomography (CT) brain studies. Two experienced radiologists interpreted 100 random emergency CT brain studies both on on-site diagnostic workstation monitors and on the iPad 2 via remote access. The radiologists were blinded to patient clinical details and to each other's interpretation, and the study list was randomized between interpretations on the different modalities. Interobserver agreement between the radiologists and intraobserver agreement between the modalities were determined, and Cohen kappa coefficients were calculated for each. Performance with regard to urgent and nonurgent abnormalities was assessed separately. There was substantial intraobserver agreement of both radiologists between the modalities, with overall calculated kappa values of 0.959 and 0.940 in detecting acute abnormalities and perfect agreement with regard to hemorrhage. Intraobserver agreement kappa values were 0.939 and 0.860 for nonurgent abnormalities. Interobserver agreement between the 2 radiologists for both the diagnostic monitors and the iPad 2 was also substantial, ranging from 0.821-0.860. The iPad 2 is a reliable modality for the interpretation of CT brain studies in the emergency setting and for the detection of acute and chronic abnormalities, with performance comparable to standard diagnostic workstation monitors. Copyright © 2015 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.
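
    The kappa values quoted above measure chance-corrected agreement. As a rough illustration of the computation (with made-up counts, not data from the study), Cohen's kappa for a 2x2 agreement table can be obtained as follows.

      #include <stdio.h>

      int main(void) {
          /* n[i][j]: reading i on modality A, reading j on modality B (illustrative) */
          double n[2][2] = { {45.0,  3.0},
                             { 2.0, 50.0} };
          double total = n[0][0] + n[0][1] + n[1][0] + n[1][1];

          double p_observed = (n[0][0] + n[1][1]) / total;   /* raw agreement */
          double p_expected =                                 /* agreement by chance */
              ((n[0][0] + n[0][1]) / total) * ((n[0][0] + n[1][0]) / total) +
              ((n[1][0] + n[1][1]) / total) * ((n[0][1] + n[1][1]) / total);

          double kappa = (p_observed - p_expected) / (1.0 - p_expected);
          printf("kappa = %.3f\n", kappa);
          return 0;
      }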

  5. Office ergonomics training and a sit-stand workstation: effects on musculoskeletal and visual symptoms and performance of office workers.

    Science.gov (United States)

    Robertson, Michelle M; Ciriello, Vincent M; Garabet, Angela M

    2013-01-01

    Work-related musculoskeletal disorders (WMSDs) among office workers with intensive computer use are widespread and the prevalence of symptoms is growing. This randomized controlled trial investigated the effects of office ergonomics training combined with a sit-stand workstation on musculoskeletal and visual discomfort, behaviors and performance. Participants performed a lab-based customer service job for 8 h per day, over 15 days, and were assigned to: Ergonomics Trained (n = 11) or Minimally Trained (n = 11). The training consisted of: a 1.5-h interactive instruction, a sit/stand practice period, and ergonomic reminders. Ergonomics Trained participants experienced minimal musculoskeletal and visual discomfort across the 15 days and varied their postures, with significantly higher performance compared to the Minimally Trained group, who had a significantly higher number of symptoms, suggesting that training plays a critical role. The ability to mitigate symptoms, change behaviors and enhance performance through training combined with a sit-stand workstation has implications for preventing discomfort in office workers. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  6. Introducing heterogeneity in Monte Carlo models for risk assessments of high-level nuclear waste. A parallel implementation of the MLCRYSTAL code

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, M.

    1996-09-01

    We have introduced heterogeneity to an existing model as a special feature and simultaneously extended the model from 1D to 3D. Briefly, the code generates stochastic fractures in a given geosphere. These fractures are connected in series to form one pathway for radionuclide transport from the repository to the biosphere. Rock heterogeneity is realized by simulating physical and chemical properties for each fracture, i.e. these properties vary along the transport pathway (which is an ensemble of all fractures serially connected). In this case, each Monte Carlo simulation involves a set of many thousands of realizations, one for each pathway. Each pathway can be formed by approx. 100 fractures. This means that for a Monte Carlo simulation of 1000 realizations, we need to perform a total of 100,000 simulations. Therefore the introduction of heterogeneity has increased the CPU demands by two orders of magnitude. To overcome the demand for CPU, the program, MLCRYSTAL, has been implemented in a parallel workstation environment using the MPI, Message Passing Interface, and later on ported to an IBM-SP2 parallel supercomputer. The program is presented here and a preliminary set of results is given with the conclusions that can be drawn. 3 refs, 12 figs.
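
    Because each realization of a transport pathway is independent, the Monte Carlo workload described above distributes naturally over the processes of a message-passing machine. The sketch below shows this structure with MPI; simulate_pathway() is a placeholder, not MLCRYSTAL code, and the realization count is assumed to divide evenly among the ranks.

      #include <mpi.h>
      #include <stdint.h>
      #include <stdlib.h>

      #define N_REALIZATIONS 1000
      #define N_FRACTURES     100

      static double simulate_pathway(uint64_t *state) {
          double result = 0.0;                       /* synthetic stand-in       */
          for (int f = 0; f < N_FRACTURES; ++f) {    /* one draw per fracture    */
              *state = *state * 6364136223846793005ULL + 1442695040888963407ULL;
              result += (double)(*state >> 11) / 9007199254740992.0;
          }
          return result;
      }

      int main(int argc, char **argv) {
          int rank, size;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          int n_local = N_REALIZATIONS / size;       /* realizations per rank */
          double *local = malloc(n_local * sizeof(double));
          double *all = NULL;
          if (rank == 0) all = malloc(N_REALIZATIONS * sizeof(double));

          uint64_t state = 12345u + 7919u * (uint64_t)rank;   /* per-rank seed */
          for (int r = 0; r < n_local; ++r)
              local[r] = simulate_pathway(&state);

          MPI_Gather(local, n_local, MPI_DOUBLE,
                     all,   n_local, MPI_DOUBLE, 0, MPI_COMM_WORLD);

          /* rank 0 would now build the output statistics from all[] */
          free(local); if (rank == 0) free(all);
          MPI_Finalize();
          return 0;
      }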

  7. Introducing heterogeneity in Monte Carlo models for risk assessments of high-level nuclear waste. A parallel implementation of the MLCRYSTAL code

    International Nuclear Information System (INIS)

    Andersson, M.

    1996-09-01

    We have introduced heterogeneity to an existing model as a special feature and simultaneously extended the model from 1D to 3D. Briefly, the code generates stochastic fractures in a given geosphere. These fractures are connected in series to form one pathway for radionuclide transport from the repository to the biosphere. Rock heterogeneity is realized by simulating physical and chemical properties for each fracture, i.e. these properties vary along the transport pathway (which is an ensemble of all fractures serially connected). In this case, each Monte Carlo simulation involves a set of many thousands of realizations, one for each pathway. Each pathway can be formed by approx. 100 fractures. This means that for a Monte Carlo simulation of 1000 realizations, we need to perform a total of 100,000 simulations. Therefore the introduction of heterogeneity has increased the CPU demands by two orders of magnitude. To overcome the demand for CPU, the program, MLCRYSTAL, has been implemented in a parallel workstation environment using the MPI, Message Passing Interface, and later on ported to an IBM-SP2 parallel supercomputer. The program is presented here and a preliminary set of results is given with the conclusions that can be drawn. 3 refs, 12 figs

  8. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
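
    In standard notation (not quoted from the article), the SENSE reconstruction mentioned above can be summarized as follows. For a reduction factor R, the aliased value measured by coil c at a pixel is a sensitivity-weighted sum of the R superimposed image locations x_1, ..., x_R:

      a_c = \sum_{r=1}^{R} s_c(x_r)\, \rho(x_r), \qquad c = 1, \dots, N_{\mathrm{coils}},

    and the unaliased values follow from the least-squares solution

      \hat{\rho} = \left( S^{H} \Psi^{-1} S \right)^{-1} S^{H} \Psi^{-1} a,

    where a collects the aliased coil values, S is the N_coils x R matrix of coil sensitivities s_c(x_r), Psi is the receiver noise covariance and the vector rho-hat contains the estimated unaliased pixel values.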

   9. FY1995 next generation highly parallel database / datamining server using 100 PC's and ATM switch; 1995 nendo tasudai no pasokon wo ATM ketsugoshita jisedai choheiretsu database mining server no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The objective of the research is first to build a highly parallel processing system using 100 personal computers and an ATM switch. The former is a commodity for computing, while the latter can be regarded as a commodity for future communication systems. Second is to implement a parallel relational database management system and a parallel data mining system over the 100-PC cluster system. Third is to run decision-support queries typical of data warehouses, to run association rule mining, and to prove the effectiveness of the proposed architecture as a next generation parallel database/datamining server. The performance/cost ratio of PCs is significantly improved compared with workstations and proprietary systems due to their mass production. The cost of ATM switches is also decreasing considerably since ATM is being widely accepted as a communication infrastructure. By combining 100 PCs as computing commodities and an ATM switch as a communication commodity, we built a large-scale parallel processing system inexpensively. Each node employs the Pentium Pro CPU and the communication bandwidth between PCs is more than 120 Mbits/sec. A new parallel relational DBMS was designed and implemented. TPC-D, which is a standard benchmark for decision support applications (100 GBytes), was executed. Our system attained much higher performance than current commercial systems, which are also much more expensive than ours. In addition, we developed a novel parallel data mining algorithm to extract association rules. We implemented it in our system and succeeded in attaining high performance. Thus it is verified that an ATM-connected PC cluster is very promising as a next generation platform for a large scale database/datamining server. (NEDO)

  10. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
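
    The shared-memory and message-passing techniques listed above are often combined on workstation clusters: OpenMP threads exploit the processors within a node while MPI passes messages between nodes. The toy program below computes a global sum in this hybrid style; the workload itself is a placeholder.

      #include <mpi.h>
      #include <stdio.h>

      #define N_LOCAL 1000000

      int main(int argc, char **argv) {
          int rank;
          double local_sum = 0.0, global_sum = 0.0;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          /* shared-memory parallelism: threads split the local loop */
          #pragma omp parallel for reduction(+:local_sum)
          for (int i = 0; i < N_LOCAL; ++i) {
              double x = (double)i / N_LOCAL;
              local_sum += x * x;                    /* toy workload */
          }

          /* message passing: combine the per-node partial sums */
          MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                     0, MPI_COMM_WORLD);

          if (rank == 0) printf("sum = %f\n", global_sum);
          MPI_Finalize();
          return 0;
      }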

  11. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  12. Feasibility of an Integrated Expert Video Authoring Workstation for Low-Cost Teacher Produced CBI. SBIR Phase I: Final Report.

    Science.gov (United States)

    IntelliSys, Inc., Syracuse, NY.

    This was Phase I of a three-phased project. This phase of the project investigated the feasibility of a computer-based instruction (CBI) workstation, designed for use by teachers of handicapped students within a school structure. This station is to have as a major feature the ability to produce in-house full-motion video using one of the…

  13. GASPRNG: GPU accelerated scalable parallel random number generator library

    Science.gov (United States)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    workstation with NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070). Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX. Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives. RAM: 512 MB ~ 732 MB (main memory on host CPU, depending on the data type of random numbers) / 512 MB (GPU global memory). Classification: 4.13, 6.5. Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as computing resources are available. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs). Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generator library that allows a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs. Running time: The tests provided take a few minutes to run.
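
    Independent per-process streams are the key requirement stated above. The sketch below shows one generic way to obtain them, giving each MPI rank its own stream of a 64-bit linear congruential generator through a distinct odd increment; it does not use the GASPRNG or SPRNG APIs and is only meant to illustrate the idea of parameterized streams.

      #include <mpi.h>
      #include <stdint.h>
      #include <stdio.h>

      typedef struct { uint64_t state, inc; } Stream;

      static double next_uniform(Stream *s) {
          s->state = s->state * 6364136223846793005ULL + s->inc;   /* LCG step */
          return (s->state >> 11) * (1.0 / 9007199254740992.0);    /* in [0,1) */
      }

      int main(int argc, char **argv) {
          int rank;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          Stream s = { 0x853c49e6748fea9bULL,            /* common seed        */
                       2ULL * (uint64_t)rank + 1ULL };   /* per-rank stream id */

          double sum = 0.0;
          for (int i = 0; i < 1000000; ++i)              /* toy Monte Carlo use */
              sum += next_uniform(&s);

          printf("rank %d mean = %f\n", rank, sum / 1000000.0);
          MPI_Finalize();
          return 0;
      }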

  14. EPRI root cause advisory workstation 'ERCAWS'

    International Nuclear Information System (INIS)

    Singh, A.; Chiu, C.; Hackman, R.B.

    1993-01-01

    EPRI and its contractor FPI International are developing Personal Computer (PC), Microsoft Windows based software to assist power plant engineers and maintenance personnel to diagnose and correct root causes of power plant equipment failures. The EPRI Root Cause Advisory Workstation (ERCAWS) is easy to use and able to handle knowledge bases and diagnostic tools for an unlimited number of equipment types. Knowledge base data is based on power industry experience and root cause analysis from many sources - Utilities, EPRI, US government, FPI, and International sources. The approach used in the knowledge base handling portion of the software is case-study oriented with the engineer selecting the equipment type and symptom identification using a combination of text, photographs, and animation, displaying dynamic physical phenomena involved. Root causes, means for confirmation, and corrective actions are then suggested in a simple, user friendly format. The first knowledge base being released with ERCAWS is the Valve Diagnostic Advisor module; covering six common valve types and some motor operator and air operator items. More modules are under development with Heat Exchanger, Bolt, and Piping modules currently in the beta testing stage. A wide variety of diagnostic tools are easily incorporated into ERCAWS and accessed through the main screen interface. ERCAWS is designed to fulfill the industry need for user-friendly tools to perform power plant equipment failure root cause analysis, and training for engineering, operations and maintenance personnel on how components can fail and how to reduce failure rates or prevent failure from occurring. In addition, ERCAWS serves as a vehicle to capture lessons learned from industry wide experience. (author)

  15. A SPECT reconstruction method for extending parallel to non-parallel geometries

    International Nuclear Information System (INIS)

    Wen Junhai; Liang Zhengrong

    2010-01-01

    Due to its simplicity, parallel-beam geometry is usually assumed for the development of image reconstruction algorithms. The established reconstruction methodologies are then extended to fan-beam, cone-beam and other non-parallel geometries for practical application. This situation occurs for quantitative SPECT (single photon emission computed tomography) imaging in inverting the attenuated Radon transform. Novikov reported an explicit parallel-beam formula for the inversion of the attenuated Radon transform in 2000. Thereafter, a formula for fan-beam geometry was reported by Bukhgeim and Kazantsev (2002 Preprint N. 99 Sobolev Institute of Mathematics). At the same time, we presented a formula for varying focal-length fan-beam geometry. Sometimes, the reconstruction formula is so implicit that we cannot obtain the explicit reconstruction formula in the non-parallel geometries. In this work, we propose a unified reconstruction framework for extending parallel-beam geometry to any non-parallel geometry using ray-driven techniques. Studies by computer simulations demonstrated the accuracy of the presented unified reconstruction framework for extending parallel-beam to non-parallel geometries in inverting the attenuated Radon transform.
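
    For reference, the parallel-beam attenuated Radon transform that these algorithms invert can be written, in standard notation rather than as quoted from the paper, as

      (R_\mu f)(s, \theta) = \int_{\mathbb{R}} f(s\theta^{\perp} + t\theta)\,
          \exp\!\left( -\int_{t}^{+\infty} \mu(s\theta^{\perp} + \tau\theta)\, d\tau \right) dt,

    where theta is the unit direction of the ray, theta-perp its orthogonal, s the detector offset, f the emission activity and mu the attenuation map. Setting mu = 0 recovers the ordinary Radon transform; Novikov's formula provides an explicit inverse when mu is nonzero.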

  16. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  17. Modeling, realization and evaluation of a parallel architecture for the data acquisition in multidetectors

    International Nuclear Information System (INIS)

    Guirande, Ph.; Aleonard, M-M.; Dien, Q-T.; Pedroza, J-L.

    1997-01-01

    The efficiency increase of 4π arrays (EUROGAM, EUROBALL, DIAMANT) is achieved by an increase in granularity, hence in the event counting rate in the acquisition system. Consequently, an evolution of the architecture of the readout systems, the coding and the software is necessary. To achieve the required evolution we have implemented a parallel architecture to check the quality of the events. The first application of this architecture was to make available an improved data acquisition system for the DIAMANT multidetector. The data acquisition system of DIAMANT is based on an ensemble of VME cards which must manage the event readout, their storage on magnetic media and histogram construction. The ensemble consists of processors distributed in a network, a workstation to control the experiment and a display system for spectra and arrays. In such an architecture the VME bus quickly becomes a limiting factor, not only for data transfer but also for the coordination of the different processors. The parallel architecture used relieves the load on the VME bus. It is based on three C40 DSPs (Digital Signal Processors) implemented on a commercial (LSI) VME card. It is provided with an external bus used to read the raw data from an interface card (ROCVI) connected to the 32-bit ECL bus that reads the real-time VME-based encoders. The tests performed revealed blocking after data exchanges between the processors using two communication lines. The analysis of this problem indicated the necessity of dynamically reassigning tasks to avoid this blocking. Intrinsic evaluation (i.e. without transfer on the VME bus) has been carried out for two parallel topologies (processor farm and tree). The simulation software permitted the generation of event packets. The rates obtained are essentially equivalent (6 MB/s), independent of the topology. The farm topology has been chosen because it is simple to implement. The load evaluation reduced the rate in 'simplex' communication mode to 5.3 MB/s and

  18. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.
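
    Of the three decompositions mentioned above, the replicated-data approach is the simplest to sketch: every process stores all particle positions, computes the forces for its own share of the particle pairs, and a global sum produces the complete force array on every process. The MPI sketch below uses a toy pair force, not a real potential.

      #include <mpi.h>
      #include <string.h>

      #define N 512

      int main(int argc, char **argv) {
          int rank, size;
          double x[N], f_local[N], f[N];

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          for (int i = 0; i < N; ++i) x[i] = 0.01 * i;   /* replicated positions */
          memset(f_local, 0, sizeof(f_local));

          /* each rank handles the pairs whose first index i is assigned to it */
          for (int i = rank; i < N; i += size)
              for (int j = i + 1; j < N; ++j) {
                  double fij = -(x[i] - x[j]);           /* toy pair force */
                  f_local[i] += fij;
                  f_local[j] -= fij;
              }

          /* replicated-data step: total force on every particle, everywhere */
          MPI_Allreduce(f_local, f, N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

          /* an integration step using f[] would follow on every rank */
          MPI_Finalize();
          return 0;
      }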

  19. The effectiveness of sit-stand workstations for changing office workers' sitting time: results from the Stand@Work randomized controlled trial pilot

    NARCIS (Netherlands)

    Chau, J.Y.; Daley, M.; Dunn, S.; Srinivasan, A.; Do, A.; Bauman, A.E.; van der Ploeg, H.P.

    2014-01-01

    Prolonged sitting time is detrimental for health. Individuals with desk-based occupations tend to sit a great deal and sit-stand workstations have been identified as a potential strategy to reduce sitting time. Hence, the objective of the current study was to examine the effects of using sit-stand

  20. Semmelweis revisited: hand hygiene and nosocomial disease transmission in the anesthesia workstation.

    Science.gov (United States)

    Biddle, Chuck

    2009-06-01

    Hospital-acquired infections occur at an alarmingly high frequency, possibly affecting as many as 1 in 10 patients, resulting in a staggering morbidity and an annual mortality of many tens of thousands of patients. Appropriate hand hygiene is highly effective and represents the simplest approach that we have to preventing nosocomial infections. The Agency for Healthcare Research and Quality has targeted hand-washing compliance as a top research agenda item for patient safety. Recent research has identified inadequate hand washing and contaminated anesthesia workstation issues as likely contributors to nosocomial infections, finding aseptic practices highly variable among providers. It is vital that all healthcare providers, including anesthesia providers, appreciate the role of inadequate hand hygiene in nosocomial infection and meticulously follow the mandates of the American Association of Nurse Anesthetists and other professional healthcare organizations.

  1. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
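
    The bucket-style approach referred to above exploits the fact that integer keys fall into a known, small range. The sketch below shows the simplest message-passing variant, a distributed counting sort in which each rank histograms its local keys and a reduction yields the global histogram; it illustrates the general idea only, not the queue-sort or barrel-sort algorithms themselves.

      #include <mpi.h>
      #include <stdio.h>
      #include <string.h>

      #define RANGE   256      /* keys lie in [0, RANGE) */
      #define N_LOCAL 10000

      int main(int argc, char **argv) {
          int rank;
          long hist[RANGE], global[RANGE];

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          memset(hist, 0, sizeof(hist));
          for (int i = 0; i < N_LOCAL; ++i) {
              int key = (i * 131 + rank * 17) % RANGE;   /* synthetic keys */
              ++hist[key];
          }

          MPI_Reduce(hist, global, RANGE, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

          if (rank == 0) {                 /* emit keys in nondecreasing order */
              for (int k = 0; k < RANGE; ++k)
                  for (long c = 0; c < global[k]; ++c)
                      printf("%d\n", k);
          }
          MPI_Finalize();
          return 0;
      }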

  2. Development of a data acquisition system using a RISC/UNIX workstation

    International Nuclear Information System (INIS)

    Takeuchi, Y.; Tanimori, T.; Yasu, Y.

    1993-01-01

    We have developed a compact data acquisition system on RISC/UNIX workstations. A SUN SPARCstation IPC was used, in which the extension bus 'SBus' was linked to a VMEbus. The transfer rate achieved between the VMEbus and the SUN was better than 7 Mbyte/s. A device driver for CAMAC was developed in order to provide an interrupt capability under UNIX. In addition, list processing was incorporated in order to keep the priority of the data handling process high under UNIX. The successful development of both the device driver and the list processing has made it possible to obtain good real-time behaviour on the RISC/UNIX system. Based on this architecture, a portable and versatile data taking system has been developed, which consists of a graphical user interface, an I/O handler, a user analysis process, a process manager and a CAMAC device driver. (orig.)

  3. Feedwater heater performance evaluation using the heat exchanger workstation

    International Nuclear Information System (INIS)

    Ranganathan, K.M.; Singh, G.P.; Tsou, J.L.

    1995-01-01

    A Heat Exchanger Workstation (HEW) has been developed to monitor the condition of heat exchanging equipment in power plants. HEW enables engineers to analyze thermal performance and failure events for power plant feedwater heaters. The software provides tools for heat balance calculation and performance analysis. It also contains an expert system that enables performance enhancement. The Operation and Maintenance (O&M) reference module on CD-ROM for HEW will be available by the end of 1995. Future developments of HEW would result in the Condenser Expert System (CONES) and the Balance of Plant Expert System (BOPES). HEW consists of five tightly integrated applications: a Database system for heat exchanger data storage, a Diagrammer system for creating plant heat exchanger schematics and data display, a Performance Analyst system for analyzing and predicting heat exchanger performance, a Performance Advisor expert system for expertise on improving heat exchanger performance, and a Water Calculator system for computing properties of steam and water. In this paper an analysis of a feedwater heater which has been off-line is used to demonstrate how HEW can analyze the performance of the feedwater heater train and provide an economic justification for either replacing or repairing the feedwater heater

  4. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with the effort expended. This paper aims to make a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  5. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully appliedto large-scale scientific computations. This book demonstrates how avariety of applications in physics, biology, mathematics and other scienceswere implemented on real parallel computers to produce new scientificresults. It investigates issues of fine-grained parallelism relevant forfuture supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configuredifferent massively parallel machines, design and implement basic systemsoftware, and develop

  6. The effect of different supermarket checkout workstations on trunk kinematics of checkout operators O efeito de diferentes modelos de checkout na cinemática de operadores de supermercado

    Directory of Open Access Journals (Sweden)

    André L. F. Rodacki

    2010-02-01

    Full Text Available OBJECTIVES: This study analyzed the effect of a standard and a modified checkout workstation on the trunk postures of a supermarket checkout operator during a simulated task. METHODS: Eight participants performed a task involving grasping, scanning and depositing products, while 3D images of the trunk were collected. RESULTS: A number of kinematic changes were observed in trunk posture. Greater anterior flexion (3.0±1.2º) and greater lateral bending during grasping (7.1±1.4º) were found in the standard checkout workstation when compared to the modified model (p < 0.05). DISCUSSION: The modified checkout workstation provided less lateral bending of the trunk to grasp products (8.1º ± 2.8; p < 0.05), irrespective of the checkout workstation (p > 0.05). The modified checkout workstation successfully reduced the risk of injury in some aspects, particularly the problems associated with lateral bending of the trunk. Other studies are required to test whether such potential benefits are obtained on a daily basis. CONCLUSIONS: Supermarket checkout operators may be at high risk of occupational injury due to different workstation demands. Modifications to checkout workstation design are an attractive possibility to reduce postural stress and fatigue in checkout operators. Longitudinal studies are required to test whether the changes observed in the present study are sustained in the long term.

  7. A new workstation based man/machine interface system for the JT-60 Upgrade

    International Nuclear Information System (INIS)

    Yonekawa, I.; Shimono, M.; Totsuka, T.; Yamagishi, K.

    1992-01-01

    Development of a new man/machine interface system was stimulated by the requirements of making the JT-60 operator interface more 'friendly' on the basis of the past five-year operational experience. Eleven Sun/3 workstations and their supervisory mini-computer HIDIC V90/45 are connected through the standard network; Ethernet. The network is also connected to the existing 'ZENKEI' mini-computer system through the shared memory on the HIDIC V90/45 mini-computer. Improved software, such as automatic setting of the discharge conditions, consistency check among the related parameters and easy operation for discharge result data display, offered the 'user-friendly' environments. This new man/machine interface system leads to the efficient operation of the JT-60. (author)

  8. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  9. How to Protect Patients Digital Images/Thermograms Stored on a Local Workstation

    Directory of Open Access Journals (Sweden)

    J. Živčák

    2010-01-01

    Full Text Available To ensure the security and privacy of patient electronic medical information stored on local workstations in doctors' offices, clinic centers, etc., it is necessary to implement a secure and reliable method for logging on and accessing this information. Biometrically based identification technologies use measurable personal properties (physiological or behavioral), such as a fingerprint, in order to identify or verify a person's identity, and provide the foundation for highly secure personal identification, verification and/or authentication solutions. The use of biometric devices (fingerprint readers) is an easy and secure way to log on to the system. We have performed practical tests on HP notebooks that have the fingerprint reader integrated. Successful and failed logons have been monitored and analyzed, and calculations have been made. This paper presents the false rejection rates, false acceptance rates and failure-to-acquire rates.

  10. Development of Neutron and Photon Shielding Calculation System for Workstation (NPSS-W)

    International Nuclear Information System (INIS)

    Shimizu, Yoshio; Nojiri, Ichiro; Odajima, Akira; Sasaki, Toshihisa; Kurosawa, Naohiro

    1998-01-01

    In plant designs and safety evaluations of nuclear fuel cycle facilities, it is important to evaluate reasonably the direct radiation and the skyshine (air-scattered photon radiation) from facilities. The Neutron and Photon Shielding Calculation System for Workstation (NPSS-W) was developed for this purpose. NPSS-W can carry out shielding calculations for photons and neutrons easily and rapidly. It can easily calculate the radiation source intensity with ORIGEN-S and the dose equivalent rate with the SN transport codes ANISN and DOT3.5. NPSS-W consists of five modules, named CAL1, CAL2, CAL3, CAL4 and CAL5, with which several kinds of shielding configurations can be calculated. The user's manual of NPSS-W, examples of calculations for each module and the output data are appended. (author)

  11. The integrated workstation, a realtime data acquisition, analysis and display system

    International Nuclear Information System (INIS)

    Treadway, T.R. III.

    1991-05-01

    The Integrated Workstation was developed at Lawrence Livermore National Laboratory to consolidate the data from many widely dispersed systems in order to provide an overall indication of the enrichment performance of the Atomic Vapor Laser Isotope Separation experiments. In order to accomplish this task a Hewlett Packard 9000/835 turboSRX was employed to acquire over 150 analog input signals. Following the data acquisition, a spreadsheet-type analysis package and interpreter was used to derive 300 additional values. These values were the results of applying physics models to the raw data. The calculations were then plotted and archived for post-run analysis and report generation. Both the modeling calculations and the real-time plot configurations can be dynamically reconfigured as needed. The typical sustained data acquisition and display rate of the system was 1 Hz; however, rates exceeding 2.5 Hz have been obtained. This paper will discuss the instrumentation, architecture, implementation, usage, and results of this system in a set of experiments that occurred in 1989. 2 figs

  12. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  13. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  14. Porting of a serial molecular dynamics code on MIMD platforms

    Energy Technology Data Exchange (ETDEWEB)

    Celino, M. [ENEA Centro Ricerche Casaccia, S. Maria di Galeria, RM (Italy). HPCN Project

    1999-07-01

    A molecular dynamics (MD) code, utilized for the study of atomistic models of metallic systems, has been parallelized for MIMD (multiple instruction multiple data) parallel platforms by means of the parallel virtual machine (PVM) message passing library. Since the parallelization implies modifications of the sequential algorithms, these are described from the point of view of statistical mechanical theory. Furthermore, the techniques and parallelization strategies utilized are described in detail, together with the parallel MD code itself. Benchmarks on several MIMD platforms (IBM SP1, SP2, Cray T3D, cluster of workstations) allow an evaluation of the performance of the code against the different characteristics of the parallel platforms.

  15. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  16. How users organize electronic files on their workstations in the office environment: a preliminary study of personal information organization behaviour

    Directory of Open Access Journals (Sweden)

    Christopher S.G. Khoo

    2007-01-01

    Full Text Available An ongoing study of how people organize their computer files and folders on the hard disk of their office workstations. A questionnaire was used to collect information on the subjects, their work responsibilities and the characteristics of their workstations. Data on file and folder names and file structure were extracted from the hard disk using the computer program STG FolderPrint Plus, DOS commands and screen capture. A semi-structured interview collected information on the subjects' strategies in naming and organizing files and folders, and in locating and retrieving files. The data were analysed mainly through qualitative analysis and content analysis. The subjects organized their folders in a variety of structures, from broad and shallow to narrow and deep hierarchies. One to three levels of folders is common. The labels for first-level folders tended to be task-based or project-based. Most subjects located files by browsing the folder structure, with searching used as a last resort. The most common types of folder names were document type, organizational function or structure, and miscellaneous or temporary. The frequency of folders of different types appears related to the type of occupation.

  17. Automated integration of genomic physical mapping data via parallel simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Slezak, T.

    1994-06-01

    The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques including: Fluorescence In Situ Hybridization (FISH) at 3 levels of resolution, ECO restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, AC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M x N ordering of our N cosmid clones which ``intersect`` M larger objects by defining ``intersection`` to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone ``columns`` such that the number of gaps on the object ``rows`` is minimized. Our FISH partially-ordered cosmid clones provide us with a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
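
    To make the optimization concrete: treat the data as an M x N 0/1 membership matrix and search for a column (clone) order that minimizes the total number of gaps across the rows. The toy sketch below shows a simulated-annealing loop for that formulation; the cost function, move set, cooling schedule, and the omission of the FISH ordering constraints and of the 40-machine server/client layer are all simplifications for illustration, not LLNL's production code.

      # Toy simulated annealing for the "order N clone columns to minimize row gaps" problem.
      import math
      import random

      def gaps(matrix, order):
          """Total number of gaps: extra runs of 1s per row under the given column order."""
          total = 0
          for row in matrix:
              runs, in_run = 0, False
              for col in order:
                  if row[col] and not in_run:
                      runs, in_run = runs + 1, True
                  elif not row[col]:
                      in_run = False
              total += max(runs - 1, 0)
          return total

      def anneal(matrix, n_cols, steps=20000, t0=2.0, cooling=0.9995, seed=1):
          rng = random.Random(seed)
          order, t = list(range(n_cols)), t0
          cost = gaps(matrix, order)
          for _ in range(steps):
              i, j = rng.sample(range(n_cols), 2)          # propose swapping two columns
              order[i], order[j] = order[j], order[i]
              new_cost = gaps(matrix, order)
              if new_cost <= cost or rng.random() < math.exp(-(new_cost - cost) / t):
                  cost = new_cost                          # accept the move
              else:
                  order[i], order[j] = order[j], order[i]  # reject: undo the swap
              t *= cooling
          return order, cost

      membership = [[1, 0, 1, 0, 1, 0],                    # 3 larger objects x 6 clones
                    [0, 1, 0, 1, 0, 1],
                    [1, 1, 0, 0, 1, 1]]
      print(anneal(membership, n_cols=6))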

  18. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
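
    For readers unfamiliar with the mapping problem, it can be phrased as a contiguous-partition question: split a chain of m module workloads into n consecutive groups (one per processor) so that the heaviest group, the bottleneck, is as light as possible. The sketch below is a plain O(n·m^2) dynamic program for that formulation, added only to make the problem concrete; it is not the paper's improved O(nm log m) algorithm.

      # Map m module weights onto n processors as contiguous groups, minimizing the largest load.
      from itertools import accumulate

      def bottleneck_mapping(weights, n_procs):
          m = len(weights)
          prefix = [0] + list(accumulate(weights))             # prefix[j] = load of modules 0..j-1
          INF = float('inf')
          # best[k][j]: minimal bottleneck when the first j modules are placed on k processors
          best = [[INF] * (m + 1) for _ in range(n_procs + 1)]
          cut = [[0] * (m + 1) for _ in range(n_procs + 1)]
          best[0][0] = 0
          for k in range(1, n_procs + 1):
              for j in range(1, m + 1):
                  for i in range(k - 1, j):                    # last processor gets modules i..j-1
                      cand = max(best[k - 1][i], prefix[j] - prefix[i])
                      if cand < best[k][j]:
                          best[k][j], cut[k][j] = cand, i
          groups, j = [], m                                    # recover the group boundaries
          for k in range(n_procs, 0, -1):
              groups.append(list(range(cut[k][j], j)))
              j = cut[k][j]
          return best[n_procs][m], groups[::-1]

      print(bottleneck_mapping([4, 2, 7, 1, 3, 5], n_procs=3))   # -> (8, [[0, 1], [2, 3], [4, 5]])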

  19. Automated methods for single-stranded DNA isolation and dideoxynucleotide DNA sequencing reactions on a robotic workstation

    International Nuclear Information System (INIS)

    Mardis, E.R.; Roe, B.A.

    1989-01-01

    Automated procedures have been developed for both the simultaneous isolation of 96 single-stranded M13 chimeric template DNAs in less than two hours, and for simultaneously pipetting 24 dideoxynucleotide sequencing reactions on a commercially available laboratory workstation. The DNA sequencing results obtained by either radiolabeled or fluorescent methods are consistent with the premise that automation of these portions of DNA sequencing projects will improve the reproducibility of the DNA isolation, and that automating the procedures for these normally labor-intensive steps provides an approach for rapid acquisition of large amounts of high-quality, reproducible DNA sequence data.

  20. KNBD: A Remote Kernel Block Server for Linux

    Science.gov (United States)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void and hence a demand for such a system on Beowulf-type PC clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system runs entirely in user space. Migrating their I/O services to the kernel could provide a performance boost, by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.
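
    To illustrate what a block server does at its simplest (answer read and write requests addressed by block number over the network), here is a toy user-space sketch in Python. The 16-byte request header and the READ/WRIT op codes are an invented format for illustration; this is not the Linux network block device protocol and says nothing about the kernel implementation discussed above.

      # Toy user-space block server: READ/WRITE fixed-size blocks of a pre-existing backing file.
      import socket
      import struct

      BLOCK_SIZE = 4096
      HEADER = struct.Struct('!4sQI')            # op code, block number, payload length

      def serve(backing_path='blocks.img', host='127.0.0.1', port=5555):
          # The backing file stands in for the exported disk and must already exist.
          with open(backing_path, 'r+b') as dev, socket.create_server((host, port)) as srv:
              while True:
                  conn, _ = srv.accept()
                  with conn:
                      hdr = conn.recv(HEADER.size)
                      if len(hdr) < HEADER.size:
                          continue
                      op, block, length = HEADER.unpack(hdr)
                      dev.seek(block * BLOCK_SIZE)
                      if op == b'READ':
                          conn.sendall(dev.read(length))
                      elif op == b'WRIT':
                          payload = b''
                          while len(payload) < length:          # read the full write payload
                              payload += conn.recv(length - len(payload))
                          dev.write(payload)
                          conn.sendall(b'OK')

      def read_block(block, host='127.0.0.1', port=5555):
          """Client side: fetch one block from the remote server."""
          with socket.create_connection((host, port)) as c:
              c.sendall(HEADER.pack(b'READ', block, BLOCK_SIZE))
              data = b''
              while len(data) < BLOCK_SIZE:
                  chunk = c.recv(BLOCK_SIZE - len(data))
                  if not chunk:
                      break
                  data += chunk
              return data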

  1. Predicting Forearm Physical Exposures During Computer Work Using Self-Reports, Software-Recorded Computer Usage Patterns, and Anthropometric and Workstation Measurements.

    Science.gov (United States)

    Huysmans, Maaike A; Eijckelhof, Belinda H W; Garza, Jennifer L Bruno; Coenen, Pieter; Blatter, Birgitte M; Johnson, Peter W; van Dieën, Jaap H; van der Beek, Allard J; Dennerlein, Jack T

    2017-12-15

    Alternative techniques to assess physical exposures, such as prediction models, could facilitate more efficient epidemiological assessments in future large cohort studies examining physical exposures in relation to work-related musculoskeletal symptoms. The aim of this study was to evaluate two types of models that predict arm-wrist-hand physical exposures (i.e. muscle activity, wrist postures and kinematics, and keyboard and mouse forces) during computer use, which only differed with respect to the candidate predicting variables: (i) a full set of predicting variables, including self-reported factors, software-recorded computer usage patterns, and worksite measurements of anthropometrics and workstation set-up (full models); and (ii) a practical set of predicting variables, only including the self-reported factors and software-recorded computer usage patterns, that are relatively easy to assess (practical models). Prediction models were built using data from a field study among 117 office workers who were symptom-free at the time of measurement. Arm-wrist-hand physical exposures were measured for approximately two hours while workers performed their own computer work. Each worker's anthropometry and workstation set-up were measured by an experimenter, computer usage patterns were recorded using software, and self-reported factors (including individual factors, job characteristics, computer work behaviours, psychosocial factors, workstation set-up characteristics, and leisure-time activities) were collected by an online questionnaire. We determined the predictive quality of the models in terms of R2 and root mean squared (RMS) values and exposure classification agreement to low-, medium-, and high-exposure categories (in the practical model only). The full models had R2 values that ranged from 0.16 to 0.80, whereas for the practical models values ranged from 0.05 to 0.43. Interquartile ranges were not that different for the two models, indicating that only for some
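
    The two figures of merit mentioned above, R2 and the RMS error, are easy to reproduce once a model yields predictions; a small sketch with numpy on synthetic data (the predictors, outcome and least-squares model here are placeholders, not the study's exposure measurements):

      # R-squared and root-mean-squared error for a fitted prediction model (toy data).
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(117, 5))                    # e.g. self-reported / software-recorded predictors
      true_coef = np.array([0.8, -0.4, 0.0, 0.3, 0.1])
      y = X @ true_coef + rng.normal(scale=0.5, size=117)   # e.g. a measured exposure

      coef, *_ = np.linalg.lstsq(X, y, rcond=None)     # ordinary least-squares "prediction model"
      y_hat = X @ coef

      rmse = np.sqrt(np.mean((y - y_hat) ** 2))
      r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
      print(f"R^2 = {r2:.2f}, RMSE = {rmse:.2f}")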

  2. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C^3P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C^3P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C^3P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  3. SU-F-T-256: 4D IMRT Planning Using An Early Prototype GPU-Enabled Eclipse Workstation

    Energy Technology Data Exchange (ETDEWEB)

    Hagan, A; Modiri, A; Sawant, A [University of Maryland in Baltimore, Baltimore, MD (United States); Svatos, M [Varian Medical Systems, Palo Alto, CA (United States)

    2016-06-15

    Purpose: True 4D IMRT planning, based on simultaneous spatiotemporal optimization, has been shown to significantly improve plan quality in lung radiotherapy. However, the high computational complexity associated with such planning represents a significant barrier to widespread clinical deployment. We introduce an early prototype GPU-enabled Eclipse workstation for inverse planning. To our knowledge, this is the first GPU-integrated Eclipse system demonstrating the potential for clinical translation of GPU computing on a major commercially-available TPS. Methods: The prototype system comprised four NVIDIA Tesla K80 GPUs, with a maximum processing capability of 8.5 Tflops per K80 card. The system architecture consisted of three key modules: (i) a GPU-based inverse planning module using a highly-parallelizable, swarm intelligence-based global optimization algorithm, (ii) a GPU-based open-source b-spline deformable image registration module, Elastix, and (iii) a CUDA-based data management module. For evaluation, aperture fluence weights in an IMRT plan were optimized over 9 beams, 166 apertures and 10 respiratory phases (14940 variables) for a lung cancer case (GTV = 95 cc, right lower lobe, 15 mm cranio-caudal motion). Sensitivity of the planning time and memory expense to parameter variations was quantified. Results: GPU-based inverse planning was significantly accelerated compared to its CPU counterpart (36 vs 488 min, for 10 phases, 10 search agents and 10 iterations). The optimized IMRT plan significantly improved OAR sparing compared to the original internal target volume (ITV)-based clinical plan, while maintaining prescribed tumor coverage. The dose-sparing improvements were: Esophagus Dmax 50%, Heart Dmax 42% and Spinal cord Dmax 25%. Conclusion: Our early prototype system demonstrates that through massive parallelization, computationally intense tasks such as 4D treatment planning can be accomplished in clinically feasible timeframes. With further
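
    The "swarm intelligence-based global optimization algorithm" referred to above belongs to the particle swarm family; the generic sketch below shows only the canonical velocity/position update on a toy objective, and is not the Eclipse/GPU implementation or its dose objective (all parameter values and the target vector are illustrative).

      # Generic particle swarm optimization sketch on a toy objective (not the planning system's).
      import numpy as np

      def pso(objective, dim, n_particles=10, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-1.0, 1.0, (n_particles, dim))      # positions, e.g. fluence weights
          v = np.zeros_like(x)
          pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
          gbest = pbest[pbest_val.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
              x = x + v
              vals = np.array([objective(p) for p in x])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              gbest = pbest[pbest_val.argmin()].copy()
          return gbest, pbest_val.min()

      # Toy stand-in for a plan-quality cost: distance to an arbitrary target weight vector.
      target = np.linspace(0.0, 1.0, 8)
      best, cost = pso(lambda wts: float(np.sum((wts - target) ** 2)), dim=8)
      print(cost)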

  4. SU-F-T-256: 4D IMRT Planning Using An Early Prototype GPU-Enabled Eclipse Workstation

    International Nuclear Information System (INIS)

    Hagan, A; Modiri, A; Sawant, A; Svatos, M

    2016-01-01

    Purpose: True 4D IMRT planning, based on simultaneous spatiotemporal optimization, has been shown to significantly improve plan quality in lung radiotherapy. However, the high computational complexity associated with such planning represents a significant barrier to widespread clinical deployment. We introduce an early prototype GPU-enabled Eclipse workstation for inverse planning. To our knowledge, this is the first GPU-integrated Eclipse system demonstrating the potential for clinical translation of GPU computing on a major commercially-available TPS. Methods: The prototype system comprised four NVIDIA Tesla K80 GPUs, with a maximum processing capability of 8.5 Tflops per K80 card. The system architecture consisted of three key modules: (i) a GPU-based inverse planning module using a highly-parallelizable, swarm intelligence-based global optimization algorithm, (ii) a GPU-based open-source b-spline deformable image registration module, Elastix, and (iii) a CUDA-based data management module. For evaluation, aperture fluence weights in an IMRT plan were optimized over 9 beams, 166 apertures and 10 respiratory phases (14940 variables) for a lung cancer case (GTV = 95 cc, right lower lobe, 15 mm cranio-caudal motion). Sensitivity of the planning time and memory expense to parameter variations was quantified. Results: GPU-based inverse planning was significantly accelerated compared to its CPU counterpart (36 vs 488 min, for 10 phases, 10 search agents and 10 iterations). The optimized IMRT plan significantly improved OAR sparing compared to the original internal target volume (ITV)-based clinical plan, while maintaining prescribed tumor coverage. The dose-sparing improvements were: Esophagus Dmax 50%, Heart Dmax 42% and Spinal cord Dmax 25%. Conclusion: Our early prototype system demonstrates that through massive parallelization, computationally intense tasks such as 4D treatment planning can be accomplished in clinically feasible timeframes. With further

  5. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
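
    A minimal sketch of the template idea: compare a node's checkpoint to a previously produced template block by block via checksums and keep (compressed) only the blocks that differ. hashlib/zlib and the block size are stand-ins chosen for illustration, and the rsync-style protocol, broadcast, and node layout of the patent are not modeled.

      # Template-based checkpoint sketch: store only blocks that differ from the template.
      import hashlib
      import zlib

      BLOCK = 4096

      def checksums(data):
          return [hashlib.sha1(data[i:i + BLOCK]).digest() for i in range(0, len(data), BLOCK)]

      def delta_checkpoint(node_state: bytes, template: bytes):
          """Return {block index: compressed block} for blocks differing from the template."""
          tmpl_sums = checksums(template)
          delta = {}
          for idx in range(0, len(node_state), BLOCK):
              b = node_state[idx:idx + BLOCK]
              i = idx // BLOCK
              if i >= len(tmpl_sums) or hashlib.sha1(b).digest() != tmpl_sums[i]:
                  delta[i] = zlib.compress(b)       # non-lossy compression, as in the abstract
          return delta

      def restore(template: bytes, delta):
          blocks = [template[i:i + BLOCK] for i in range(0, len(template), BLOCK)]
          for i, comp in delta.items():
              while len(blocks) <= i:
                  blocks.append(b'')
              blocks[i] = zlib.decompress(comp)
          return b''.join(blocks)

      template = bytes(16 * BLOCK)                  # previously produced template checkpoint
      state = bytearray(template)
      state[5 * BLOCK:5 * BLOCK + 4] = b'abcd'      # one block changed since the template
      d = delta_checkpoint(bytes(state), template)
      assert restore(template, d) == bytes(state) and list(d) == [5]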

  6. Using a Cray Y-MP as an array processor for a RISC Workstation

    Science.gov (United States)

    Lamaster, Hugh; Rogallo, Sarah J.

    1992-01-01

    As microprocessors increase in power, the economics of centralized computing has changed dramatically. At the beginning of the 1980's, mainframes and supercomputers were often considered to be cost-effective machines for scalar computing. Today, microprocessor-based RISC (reduced-instruction-set computer) systems have displaced many uses of mainframes and supercomputers. Supercomputers are still cost competitive when processing jobs that require both large memory size and high memory bandwidth. One such application is array processing. Certain numerical operations are appropriate to use in a Remote Procedure Call (RPC)-based environment. Matrix multiplication is an example of an operation that can have a sufficient number of arithmetic operations to amortize the cost of an RPC call. An experiment which demonstrates that matrix multiplication can be executed remotely on a large system to speed the execution over that experienced on a workstation is described.
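
    A toy version of that idea, shipping a whole matrix multiplication to a remote machine through a single RPC call so the call overhead is amortized over all of the arithmetic; the sketch uses Python's standard xmlrpc modules and an invented host name, not the Cray/RISC RPC setup of the experiment.

      # Remote matrix multiply over RPC: server side (run serve() on the "big" machine).
      from xmlrpc.server import SimpleXMLRPCServer

      def matmul(a, b):
          """Multiply two matrices passed as nested lists."""
          rows, inner, cols = len(a), len(b), len(b[0])
          return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
                  for i in range(rows)]

      def serve():
          with SimpleXMLRPCServer(('0.0.0.0', 8000), allow_none=True) as server:
              server.register_function(matmul)
              server.serve_forever()

      # Client side (run on the workstation): one RPC call carries the whole operation,
      # so its cost is amortized over rows * cols * inner arithmetic operations.
      import xmlrpc.client

      def remote_matmul(a, b, url='http://bigmachine.example:8000'):
          return xmlrpc.client.ServerProxy(url).matmul(a, b)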

  7. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  8. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  9. Evaluation of training nurses to perform semi-automated three-dimensional left ventricular ejection fraction using a customised workstation-based training protocol.

    Science.gov (United States)

    Guppy-Coles, Kristyan B; Prasad, Sandhir B; Smith, Kym C; Hillier, Samuel; Lo, Ada; Atherton, John J

    2015-06-01

    We aimed to determine the feasibility of training cardiac nurses to evaluate left ventricular function utilising a semi-automated, workstation-based protocol on three dimensional echocardiography images. Assessment of left ventricular function by nurses is an attractive concept. Recent developments in three dimensional echocardiography coupled with border detection assistance have reduced inter- and intra-observer variability and analysis time. This could allow abbreviated training of nurses to assess cardiac function. A comparative, diagnostic accuracy study evaluating left ventricular ejection fraction assessment utilising a semi-automated, workstation-based protocol performed by echocardiography-naïve nurses on previously acquired three dimensional echocardiography images. Nine cardiac nurses underwent two brief lectures about cardiac anatomy, physiology and three dimensional left ventricular ejection fraction assessment, before a hands-on demonstration in 20 cases. We then selected 50 cases from our three dimensional echocardiography library based on optimal image quality with a broad range of left ventricular ejection fractions, which was quantified by two experienced sonographers and the average used as the comparator for the nurses. Nurses independently measured three dimensional left ventricular ejection fraction using the Auto lvq package with semi-automated border detection. The left ventricular ejection fraction range was 25-72%; the nurses showed excellent agreement with the sonographers. Minimal intra-observer variability was noted on both short-term (same day) and long-term (>2 weeks later) retest. It is feasible to train nurses to measure left ventricular ejection fraction utilising a semi-automated, workstation-based protocol on previously acquired three dimensional echocardiography images. Further study is needed to determine the feasibility of training nurses to acquire three dimensional echocardiography

  10. Interaction techniques for radiology workstations: impact on users' productivity

    Science.gov (United States)

    Moise, Adrian; Atkins, M. Stella

    2004-04-01

    As radiologists progress from reading images presented on film to modern computer systems with images presented on high-resolution displays, many new problems arise. Although the digital medium has many advantages, the radiologist's job becomes cluttered with many new tasks related to image manipulation. This paper presents our solution for supporting radiologists' interpretation of digital images by automating image presentation during sequential interpretation steps. Our method supports scenario-based interpretation, which groups data temporally, according to the mental paradigm of the physician. We extended current hanging protocols with support for "stages". A stage reflects the presentation of digital information required to complete a single step within a complex task. We demonstrated the benefits of staging in a user study with 20 lay subjects involved in a visual conjunctive search for targets, similar to a radiology task of identifying anatomical abnormalities. We designed a task and a set of stimuli which allowed us to simulate the interpretation workflow from a typical radiology scenario - reading a chest computed radiography exam when a prior study is also available. The simulation was possible by abstracting the radiologist's task and the basic workstation navigation functionality. We introduced "Stages," an interaction technique attuned to the radiologist's interpretation task. Compared to the traditional user interface, Stages generated a 14% reduction in the average interpretation time.

  11. Integrated model for line balancing with workstation inventory management

    Directory of Open Access Journals (Sweden)

    Dilip Roy

    2010-06-01

    Full Text Available In this paper, we address the optimization of an integrated line balancing process with workstation inventory management. While doing so, we have studied the interconnection between line balancing and its conversion process. Almost every moderate to large manufacturing industry depends on a long and integrated supply chain, consisting of inbound logistics, a conversion process and outbound logistics. In this sense, our approach addresses a very general problem of integrated line balancing. Research works reported in the literature so far mainly deal with minimization of cost for the inbound and outbound logistic subsystems. In most cases the conversion process has been ignored. We suggest a generic approach for linking the balancing of the line of production in the conversion area with the customers' rate of demand in the market and for configuring the related stock chambers. Thus, the main aim of this paper is to translate the underlying problem into a mixed nonlinear programming problem and design the optimum supply chain so that the total inventory cost and the cost of balancing loss of the conversion process are jointly minimized and the ideal cycle time of the production process is determined along with ideal sizes of the stock chambers. A numerical example has been added to demonstrate the suitability of our approach.

  12. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  13. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  14. Computer-aided diagnosis workstation and data base system for chest diagnosis based on multihelical CT images

    International Nuclear Information System (INIS)

    Satoh, H.; Niki, N.; Eguchi, K.; Masuda, H.; Machida, S.; Moriyama, N.

    2006-01-01

    We have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images and a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification. We have also developed an electronic medical recording system and a prototype Internet system for community health in two or more regions, using a Virtual Private Network router, a biometric fingerprint authentication system and a biometric face authentication system for the safety of medical information. The results of this study indicate that our computer-aided diagnosis workstation and network system can increase diagnostic speed, diagnostic accuracy and the safety of medical information. (author)

  15. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
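
    A compact sketch of the decomposition behind such a parallel sieve: a small sequential sieve produces the base primes up to sqrt(N), and each worker then independently strikes out composites in its own segment of the range. The multiprocessing pool below merely illustrates the segmented, scattered-decomposition idea on a shared-memory machine; it is not the hypercube implementation benchmarked in the record.

      # Segmented parallel Sieve of Eratosthenes: each worker sieves one block of the range.
      from multiprocessing import Pool
      import math

      def base_primes(limit):
          """Plain sequential sieve up to sqrt(N); cheap compared with the full range."""
          sieve = bytearray([1]) * (limit + 1)
          sieve[:2] = b'\x00\x00'
          for p in range(2, math.isqrt(limit) + 1):
              if sieve[p]:
                  sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
          return [i for i, flag in enumerate(sieve) if flag]

      def sieve_segment(args):
          lo, hi, primes = args                      # sieve the half-open segment [lo, hi)
          seg = bytearray([1]) * (hi - lo)
          for p in primes:
              start = max(p * p, ((lo + p - 1) // p) * p)
              seg[start - lo:hi - lo:p] = bytearray(len(seg[start - lo:hi - lo:p]))
          return [lo + i for i, flag in enumerate(seg) if flag]

      def parallel_sieve(n, workers=4):
          primes = base_primes(math.isqrt(n))
          step = (n + workers) // workers
          tasks = [(lo, min(lo + step, n + 1), primes) for lo in range(2, n + 1, step)]
          with Pool(workers) as pool:
              return sorted(p for chunk in pool.map(sieve_segment, tasks) for p in chunk)

      if __name__ == '__main__':
          print(parallel_sieve(100))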

  16. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  17. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-11-12

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer composed of compute nodes that execute a parallel application, each compute node including application processors that execute the parallel application and at least one management processor dedicated to gathering information regarding data communications. The PAMI is composed of data communications endpoints, each endpoint composed of a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources. Embodiments function by gathering call site statistics describing data communications resulting from execution of data communications instructions and identifying in dependence upon the call site statistics a data communications algorithm for use in executing a data communications instruction at a call site in the parallel application.

  18. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of … in the optimal O(sort_P(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors…

  19. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  20. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
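
    Of the patterns named above, the prefix scan is the one whose parallel form is least obvious, so here is a compact sketch of the usual chunked scheme: scan each chunk independently, scan the chunk totals, then add the offsets back. It is written with numpy for clarity and only mimics the data movement a real parallel runtime would perform.

      # Chunked parallel prefix-sum pattern: scan chunks independently, then add chunk offsets.
      import numpy as np

      def chunked_inclusive_scan(x, n_chunks=4):
          chunks = np.array_split(np.asarray(x), n_chunks)
          local = [np.cumsum(c) for c in chunks]                      # step 1: independent local scans
          totals = np.array([c[-1] if len(c) else 0 for c in local])
          offsets = np.concatenate(([0], np.cumsum(totals)[:-1]))     # step 2: scan of chunk totals
          return np.concatenate([c + off for c, off in zip(local, offsets)])  # step 3: add offsets

      x = np.arange(1, 11)
      assert np.array_equal(chunked_inclusive_scan(x), np.cumsum(x))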

  1. Development of a System Analysis Toolkit for Sensitivity Analysis, Uncertainty Propagation, and Estimation of Parameter Distribution

    International Nuclear Information System (INIS)

    Heo, Jaeseok; Kim, Kyung Doo

    2015-01-01

    Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM
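
    A minimal sketch of the uncertainty-propagation and sensitivity pieces such a toolkit automates: sample the uncertain inputs, farm the model runs out to a pool of worker processes (standing in for the "network of workstations"), and rank inputs with a simple correlation-based sensitivity measure. The model function and input distributions below are placeholders, not KAERI's modules.

      # Monte Carlo uncertainty propagation with a process pool and correlation-based sensitivity.
      from multiprocessing import Pool
      import numpy as np

      def model(params):
          """Placeholder simulation code: any function of the uncertain inputs."""
          a, b, c = params
          return a ** 2 + 0.5 * b - 0.1 * a * c

      def propagate(n_samples=2000, workers=4, seed=0):
          rng = np.random.default_rng(seed)
          samples = np.column_stack([
              rng.normal(1.0, 0.1, n_samples),      # input a
              rng.uniform(0.0, 2.0, n_samples),     # input b
              rng.normal(5.0, 1.0, n_samples),      # input c
          ])
          with Pool(workers) as pool:               # farm model runs out to worker processes
              outputs = np.array(pool.map(model, samples))
          sens = [abs(np.corrcoef(samples[:, i], outputs)[0, 1]) for i in range(samples.shape[1])]
          return outputs.mean(), outputs.std(ddof=1), sens

      if __name__ == '__main__':
          mean, std, sensitivity = propagate()
          print(f"output = {mean:.3f} +/- {std:.3f}, |correlation| per input = {sensitivity}")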

  2. Development of a System Analysis Toolkit for Sensitivity Analysis, Uncertainty Propagation, and Estimation of Parameter Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Jaeseok; Kim, Kyung Doo [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM.

  3. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  4. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
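
    The "parallel compact operation" for peak detection amounts to evaluating the peak predicate on every sample at once and then compacting the indices where it holds. Below is a CPU-side sketch of the same data-parallel pattern with numpy; the toolbox's CUDA kernels are not reproduced, and the threshold and peak test used here are illustrative.

      # Data-parallel peak detection: evaluate the peak test on all samples, then compact indices.
      import numpy as np

      def detect_peaks(signal, threshold):
          x = np.asarray(signal, dtype=float)
          mid = x[1:-1]
          is_peak = (mid > x[:-2]) & (mid >= x[2:]) & (mid > threshold)   # predicate on every sample
          return np.nonzero(is_peak)[0] + 1                               # compaction step

      rng = np.random.default_rng(0)
      trace = rng.normal(0.0, 1.0, 1000)
      trace[[100, 400, 800]] += 8.0                  # three injected spikes
      print(detect_peaks(trace, threshold=4.0))      # expected: indices near 100, 400, 800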

  5. A possibility of parallel and anti-parallel diffraction measurements on ...

    Indian Academy of Sciences (India)

    However, a bent perfect crystal (BPC) monochromator at monochromatic focusing condition can provide a quite flat and equal resolution property at both parallel and anti-parallel positions and thus one can have a chance to use both sides for the diffraction experiment. From the data of the FWHM and the / measured ...

  6. An informatics model for guiding assembly of telemicrobiology workstations for malaria collaborative diagnostics using commodity products and open-source software

    Directory of Open Access Journals (Sweden)

    Crandall Ian

    2009-07-01

    Full Text Available Abstract Background Deficits in clinical microbiology infrastructure exacerbate global infectious disease burdens. This paper examines how commodity computation, communication, and measurement products combined with open-source analysis and communication applications can be incorporated into laboratory medicine microbiology protocols. Those commodity components are all now sourceable globally. An informatics model is presented for guiding the use of low-cost commodity components and free software in the assembly of clinically useful and usable telemicrobiology workstations. Methods The model incorporates two general principles: 1) collaborative diagnostics, where free and open communication and networking applications are used to link distributed collaborators for reciprocal assistance in organizing and interpreting digital diagnostic data; and 2) commodity engineering, which leverages globally available consumer electronics and open-source informatics applications, to build generic open systems that measure needed information in ways substantially equivalent to more complex proprietary systems. Routine microscopic examination of Giemsa and fluorescently stained blood smears for diagnosing malaria is used as an example to validate the model. Results The model is used as a constraint-based guide for the design, assembly, and testing of a functioning, open, and commoditized telemicroscopy system that supports distributed acquisition, exploration, analysis, interpretation, and reporting of digital microscopy images of stained malarial blood smears while also supporting remote diagnostic tracking, quality assessment and diagnostic process development. Conclusion The open telemicroscopy workstation design and use-process described here can address clinical microbiology infrastructure deficits in an economically sound and sustainable manner. It can boost capacity to deal with comprehensive measurement of disease and care outcomes in individuals and

  7. An informatics model for guiding assembly of telemicrobiology workstations for malaria collaborative diagnostics using commodity products and open-source software.

    Science.gov (United States)

    Suhanic, West; Crandall, Ian; Pennefather, Peter

    2009-07-17

    Deficits in clinical microbiology infrastructure exacerbate global infectious disease burdens. This paper examines how commodity computation, communication, and measurement products combined with open-source analysis and communication applications can be incorporated into laboratory medicine microbiology protocols. Those commodity components are all now sourceable globally. An informatics model is presented for guiding the use of low-cost commodity components and free software in the assembly of clinically useful and usable telemicrobiology workstations. The model incorporates two general principles: 1) collaborative diagnostics, where free and open communication and networking applications are used to link distributed collaborators for reciprocal assistance in organizing and interpreting digital diagnostic data; and 2) commodity engineering, which leverages globally available consumer electronics and open-source informatics applications, to build generic open systems that measure needed information in ways substantially equivalent to more complex proprietary systems. Routine microscopic examination of Giemsa and fluorescently stained blood smears for diagnosing malaria is used as an example to validate the model. The model is used as a constraint-based guide for the design, assembly, and testing of a functioning, open, and commoditized telemicroscopy system that supports distributed acquisition, exploration, analysis, interpretation, and reporting of digital microscopy images of stained malarial blood smears while also supporting remote diagnostic tracking, quality assessment and diagnostic process development. The open telemicroscopy workstation design and use-process described here can address clinical microbiology infrastructure deficits in an economically sound and sustainable manner. It can boost capacity to deal with comprehensive measurement of disease and care outcomes in individuals and groups in a distributed and collaborative fashion. The workstation

  8. Intranet and Internet metrological workstation with photonic sensors and transmission

    Science.gov (United States)

    Romaniuk, Ryszard S.; Pozniak, Krzysztof T.; Dybko, Artur

    1999-05-01

    We describe in this paper a part of a telemetric network which consists of a workstation with photonic measurement and communication interfaces, structural fiber optic cabling (10/100BaseFX and CAN-FL), and photonic sensors with fiber optic interfaces. The station is equipped with a direct photonic measurement interface and converters for the most common measuring standards (RS, GPIB) with fiber optic I/O, a CAN bus, O/E converters, and LAN and modem ports. The station was connected to the Intranet (ipx/spx) and Internet (tcp/ip) with its own IP number and DNS and WINS names. A virtual measuring environment system program was written specially for such an Intranet and Internet station. The measurement system program communicated with the user via a Graphical User Interface (GUI). The user has direct access to all functions of the measuring station system through the appropriate layers of the GUI: telemetric, transmission, visualization, processing, information, help and steering of the measuring system. We have carried out a series of thorough simulation investigations and tests of the station using the WWW subsystem of the Internet. We logged into the system through the LAN and via modem. The Internet metrological station works continuously under the address http://nms.ipe.pw.edu.pl/nms. The station and the system bear the short name NMS (from Network Measuring System).

  9. Parallel implementation of the PHOENIX generalized stellar atmosphere program. II. Wavelength parallelization

    International Nuclear Information System (INIS)

    Baron, E.; Hauschildt, Peter H.

    1998-01-01

    We describe an important addition to the parallel implementation of our generalized nonlocal thermodynamic equilibrium (NLTE) stellar atmosphere and radiative transfer computer program PHOENIX. In a previous paper in this series we described data and task parallel algorithms we have developed for radiative transfer, spectral line opacity, and NLTE opacity and rate calculations. These algorithms divided the work spatially or by spectral lines, that is, distributing the radial zones, individual spectral lines, or characteristic rays among different processors and employ, in addition, task parallelism for logically independent functions (such as atomic and molecular line opacities). For finite, monotonic velocity fields, the radiative transfer equation is an initial value problem in wavelength, and hence each wavelength point depends upon the previous one. However, for sophisticated NLTE models of both static and moving atmospheres needed to accurately describe, e.g., novae and supernovae, the number of wavelength points is very large (200,000 - 300,000) and hence parallelization over wavelength can lead both to considerable speedup in calculation time and the ability to make use of the aggregate memory available on massively parallel supercomputers. Here, we describe an implementation of a pipelined design for the wavelength parallelization of PHOENIX, where the necessary data from the processor working on a previous wavelength point is sent to the processor working on the succeeding wavelength point as soon as it is known. Our implementation uses a MIMD design based on a relatively small number of standard message passing interface (MPI) library calls and is fully portable between serial and parallel computers. copyright 1998 The American Astronomical Society
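
    The hand-off that makes this a pipeline (sending the state of the last wavelength point to the processor that owns the next one, as soon as it is known) can be pictured with a few MPI calls. The sketch below assumes mpi4py and replaces the per-point radiative transfer solve with a trivial update; it only illustrates the communication pattern, not PHOENIX itself.

      # Pipeline sketch: ranks own consecutive wavelength bands; successive model iterations
      # flow through the chain, so rank r can start iteration k+1 while rank r+1 still works on k.
      # Run with e.g.: mpiexec -n 4 python pipeline_sketch.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      n_iterations = 3                          # e.g. temperature-correction iterations

      for it in range(n_iterations):
          # state produced at the previous wavelength point (initial condition for rank 0)
          state = float(it) if rank == 0 else comm.recv(source=rank - 1, tag=it)
          state = state + 0.1 * (rank + 1)      # stand-in for this rank's wavelength-point solve
          if rank < size - 1:
              comm.send(state, dest=rank + 1, tag=it)   # hand off as soon as it is known
          else:
              print(f"iteration {it}: final state {state:.2f}")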

  10. Uprising: An examination of sit-stand workstations, mental health and work ability in sedentary office workers, in Western Australia.

    Science.gov (United States)

    Tobin, Rochelle; Leavy, Justine; Jancey, Jonine

    2016-10-17

    Office-based staff spend around three quarters of their work day sitting. People who sit for long periods while at work are at greater risk of adverse health outcomes. The pilot study aimed to determine the effect of sit-stand workstations on office-based staff sedentary and physical activity behaviors, work ability and self-reported physical and mental health outcomes. A two-group pre-post study design assessed changes in sedentary and physical activity behaviors (time spent sitting, standing and stepping, sit-stand transitions and number of steps taken), work ability and physical and mental health. Physical activity behaviors were measured using activPAL activity monitors, and self-reported data on work ability and physical and mental health were collected using an online questionnaire. Relative to the controls (n=19), the intervention group (n=18) significantly decreased time spent sitting by 100 minutes and significantly improved work ability when compared to lifetime best (p=0.008). There were no significant differences for all other sedentary behavior outcomes, other work ability outcomes, or physical and mental health outcomes at follow-up. The Uprising Study found that sit-stand workstations are an effective strategy to reduce occupational sitting time in office-based workers over a one month period.

  11. Ruling Out Brain CT Contraindications prior to Intravenous Thrombolysis: Diagnostic Equivalence between a Primary Interpretation Workstation and a Mobile Tablet Computer

    Directory of Open Access Journals (Sweden)

    Antonio J. Salazar

    2017-01-01

    Full Text Available Objective. The aim of this study was to evaluate the equivalence of brain CT interpretations performed using a diagnostic workstation and a mobile tablet computer, in a telestroke service. Materials and Methods. The ethics committee of our institution approved this retrospective study. A factorial design with 1452 interpretations was used. The assessed variables were the type of stroke classification, the presence of contraindications to tPA administration, the presence of a hyperdense middle cerebral artery (HMCA) sign, and the Alberta Stroke Program Early CT Score (ASPECTS). These variables were evaluated to determine the effect that the reading system had on their magnitudes. Results. The distribution of observed lesions achieved using the two reading systems was not statistically different. The differences between the two reading systems required to claim equivalence were 1.6% for hemorrhagic lesions, 4.5% for cases without lesions, and 5.2% for overall ischemic lesions. Equivalence was achieved at 2.1% for ASPECTS ≤ 6, 6.5% for the presence of an imaging contraindication to tPA administration, and 7.2% for the presence of an HMCA sign. Conclusion. The diagnostic performance for detecting acute stroke is likely equivalent whether a tablet computer or a diagnostic workstation is used.

  12. Computer-aided diagnosis workstation and teleradiology network system for chest diagnosis using the web medical image conference system with a new information security solution

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kaneko, Masahiro; Kakinuma, Ryutaro; Moriyama, Noriyuki

    2010-03-01

    Diagnostic MDCT imaging requires a considerable number of images to be read. Moreover, there is a shortage of doctors who can diagnose these medical images in Japan. Against this background, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis. We have also developed a teleradiology network system using a web medical image conference system. In a teleradiology network system, the security of the information network is a very important subject. Our teleradiology network system can hold web medical image conferences between medical institutions at remote locations using the web medical image conference system. We completed the basic proof-of-concept experiment of the web medical image conference system with the information security solution. We can share the screen of the web medical image conference system from two or more web conference terminals at the same time. Opinions can be exchanged by using a camera and a microphone connected to the workstation, which has several diagnostic assistance methods built in. Biometric face authentication used at the teleradiology site makes "Encryption of file" and "Success in login" effective. The privacy and information security technology of our information security solution ensures compliance with Japanese regulations. As a result, patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new teleradiology network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our radiological information system without film by using computer-aided diagnosis

  13. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
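
    For reference, the k-means++ seeding that the record parallelizes works as follows: pick the first seed uniformly at random, then repeatedly pick the next seed with probability proportional to each point's squared distance to its nearest seed so far. A serial numpy sketch of that loop follows; the per-point distance update is the part the CUDA/Thrust, OpenMP and XMT versions spread across their respective hardware.

      # k-means++ seed selection; the per-point distance update is the naturally parallel part.
      import numpy as np

      def kmeanspp_seeds(points, k, seed=0):
          rng = np.random.default_rng(seed)
          n = len(points)
          seeds = [points[rng.integers(n)]]                  # first seed: uniform at random
          d2 = np.sum((points - seeds[0]) ** 2, axis=1)      # squared distance to nearest seed
          for _ in range(k - 1):
              probs = d2 / d2.sum()
              next_seed = points[rng.choice(n, p=probs)]     # D^2-weighted sampling
              seeds.append(next_seed)
              d2 = np.minimum(d2, np.sum((points - next_seed) ** 2, axis=1))
          return np.array(seeds)

      pts = np.random.default_rng(1).normal(size=(500, 2))
      print(kmeanspp_seeds(pts, k=4))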

  14. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)

  15. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    Full Text Available To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  16. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.

  17. Microbial Diagnostic Array Workstation (MDAW): a web server for diagnostic array data storage, sharing and analysis

    Directory of Open Access Journals (Sweden)

    Chang Yung-Fu

    2008-09-01

    Full Text Available Abstract Background Microarrays are becoming a very popular tool for microbial detection and diagnostics. Although these diagnostic arrays are much simpler when compared to traditional transcriptome arrays, due to the high-throughput nature of the arrays, the data analysis requirements still form a bottleneck for the widespread use of these diagnostic arrays. Hence we developed a new online data sharing and analysis environment customised for diagnostic arrays. Methods The Microbial Diagnostic Array Workstation (MDAW) is a database-driven application with the database designed in MS Access and the front end designed in ASP.NET. Conclusion MDAW is a new resource that is customised to the data analysis requirements of microbial diagnostic arrays.

  18. PWR [pressurized water reactor] optimal reload configuration with an intelligent workstation

    International Nuclear Information System (INIS)

    Greek, K.J.; Robinson, A.H.

    1990-01-01

    In a previous paper, the implementation of a pressurized water reactor (PWR) refueling expert system that combined object-oriented programming in Smalltalk and a FORTRAN power calculation to evaluate loading patterns was discussed. The expert system applies heuristics and constraints that lead the search toward an optimal configuration. Its rate of improvement depends on the expertise coded for a search and the loading pattern from which the search begins. Due to its complexity, however, the solution normally cannot be served by a rule-based expert system alone. A knowledge base may take years of development before final acceptance. Also, the human pattern-matching capability to view a two-dimensional power profile, recognize an imbalance, and select an appropriate response has not yet been surpassed by a rule-based system. The user should be given the ability to take control of the search if he believes the solution needs a new direction, and should be able to configure a loading pattern and resume the search. This paper introduces the workstation features of Shuffle that are important in helping the user manipulate the configuration and retain a record of the solution.

  19. Porting of serial molecular dynamics code on MIMD platforms

    International Nuclear Information System (INIS)

    Celino, M.

    1995-05-01

    A Molecular Dynamics (MD) code, utilized for the study of atomistic models of metallic systems, has been parallelized for MIMD (Multiple Instructions Multiple Data) parallel platforms by means of the Parallel Virtual Machine (PVM) message passing library. Since the parallelization implies modifications of the sequential algorithms, these are described from the point of view of statistical mechanics. Furthermore, the parallelization techniques and strategies employed and the parallel MD code itself are described in detail. Benchmarks on several MIMD platforms (IBM SP1 and SP2, Cray T3D, cluster of workstations) allow the performance of the code to be evaluated against the different characteristics of the parallel platforms.
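
    As an illustration of the kind of force-loop parallelization such a port involves, the sketch below distributes atoms over processes and sums partial force arrays with a global reduction. It uses mpi4py rather than the PVM library named in the abstract, and the toy pair interaction and particle data are invented for the example.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n = 64                                  # total number of atoms (toy value)
      rng = np.random.default_rng(42)         # same seed on every rank -> identical positions
      pos = rng.random((n, 3))

      local_forces = np.zeros_like(pos)
      for i in range(rank, n, size):          # each rank handles a strided subset of atoms
          d = pos - pos[i]
          r2 = (d ** 2).sum(axis=1)
          r2[i] = np.inf                      # skip self-interaction
          local_forces[i] = (d / r2[:, None] ** 2).sum(axis=0)  # toy pair interaction, illustration only

      forces = np.empty_like(local_forces)
      comm.Allreduce(local_forces, forces, op=MPI.SUM)  # every rank receives the full force array
      if rank == 0:
          print(forces[0])

    Run with, e.g., mpiexec -n 4 python md_sketch.py; every rank ends up holding the complete force array, in the spirit of the replicated-data strategies common in PVM-era MD codes.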

  20. Influence of Paralleling Dies and Paralleling Half-Bridges on Transient Current Distribution in Multichip Power Modules

    DEFF Research Database (Denmark)

    Li, Helong; Zhou, Wei; Wang, Xiongfei

    2018-01-01

    This paper addresses the transient current distribution in the multichip half-bridge power modules, where two types of paralleling connections with different current commutation mechanisms are considered: paralleling dies and paralleling half-bridges. It reveals that with paralleling dies, both t...

  1. OFF-SITE SMARTPHONE VS. STANDARD WORKSTATION IN THE RADIOGRAPHIC DIAGNOSIS OF SMALL INTESTINAL MECHANICAL OBSTRUCTION IN DOGS AND CATS.

    Science.gov (United States)

    Noel, Peter G; Fischetti, Anthony J; Moore, George E; Le Roux, Alexandre B

    2016-09-01

    Off-site consultations by board-certified veterinary radiologists benefit residents and emergency clinicians by providing immediate feedback and potentially improving patient outcome. Smartphone devices and compressed images transmitted by email or text greatly facilitate availability of these off-site consultations. Criticism of a smartphone interface for off-site consultation is mostly directed at image degradation relative to the standard radiographic viewing room and monitors. The purpose of this retrospective, cross-sectional, methods comparison study was to compare the accuracy of abdominal radiographs in two imaging interfaces (Joint Photographic Experts Group, off-site, smartphone vs. Digital Imaging and Communications in Medicine, on-site, standard workstation) for the diagnosis of small intestinal mechanical obstruction in vomiting dogs and cats. Two board-certified radiologists graded randomized abdominal radiographs using a five-point Likert scale for the presence of mechanical obstruction in 100 dogs or cats presenting for vomiting. The area under the receiver operating characteristic curves for both imaging interfaces was high. The accuracy of the smartphone and traditional workstation was not statistically significantly different for either reviewer (P = 0.384 and P = 0.536). Correlation coefficients were 0.821 and 0.705 for each reviewer when the same radiographic study was viewed in different formats. Accuracy differences between radiologists were potentially related to years of experience. We conclude that off-site expert consultation with a smartphone provides an acceptable interface for accurate diagnosis of small intestinal mechanical obstruction in dogs and cats. © 2016 American College of Veterinary Radiology.
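
    For readers unfamiliar with the statistics used in studies like this, the toy sketch below shows how Likert-scale scores from two interfaces can be turned into ROC areas and a correlation coefficient. The labels and scores are fabricated for illustration and are not the study's data.

      from sklearn.metrics import roc_auc_score
      from scipy.stats import spearmanr

      truth = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]          # hypothetical ground-truth obstruction labels
      likert_phone = [5, 2, 4, 5, 1, 3, 4, 2, 5, 1]    # reviewer scores, smartphone JPEG interface
      likert_pacs = [5, 1, 4, 4, 2, 2, 5, 2, 4, 1]     # same cases on the workstation interface

      print("AUC smartphone :", roc_auc_score(truth, likert_phone))
      print("AUC workstation:", roc_auc_score(truth, likert_pacs))
      rho, p_value = spearmanr(likert_phone, likert_pacs)
      print("Correlation between interfaces:", rho)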

  2. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to serve a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  3. Concurrent use of data base and graphics computer workstations to provide graphic access to large, complex data bases for robotics control of nuclear surveillance and maintenance

    International Nuclear Information System (INIS)

    Dalton, G.R.; Tulenko, J.S.; Zhou, X.

    1990-01-01

    The University of Florida is part of a multiuniversity research effort, sponsored by the US Department of Energy, that is under way to develop and deploy an advanced semi-autonomous robotic system for use in nuclear power stations. This paper reports on the development of the computer tools necessary to gain convenient graphic access to the intelligence implicit in a large, complex data base such as that of a nuclear reactor plant. This program is integrated as a man/machine interface within the larger context of the total computerized robotic planning and control system. The portion of the project described here addresses the connection between the three-dimensional displays on an interactive graphic workstation and a data-base computer running a large data-base server program. Programming the two computers to work together to accept graphic queries and return answers on the graphic workstation is a key part of the interactive capability developed.

  4. Contamination control in HVAC systems for aseptic processing area. Part I: Case study of the airflow velocity in a unidirectional airflow workstation with computational fluid dynamics.

    Science.gov (United States)

    Ogawa, M

    2000-01-01

    A unidirectional airflow workstation for processing a sterile pharmaceutical product is required to be "Grade A," according to EU-GMP and WHO-GMP. These regulations employ the wording "laminar airflow" for unidirectional airflow without giving a clear definition, which seems to have allowed many reports to restrict their discussion to airflow velocity only. The guidance values for velocity are variously expressed as 90 ft/min, 0.45 m/sec, 0.3 m/sec, +/- 20%, or "homogeneous air speed." It has also been little clarified how variation in airflow velocity influences contamination control of a workstation as its key characteristics vary, such as ceiling height, internal heat load, and internal particle generation. The present author has established the following points from a case study using computational fluid dynamics: the airflow characteristics in the Grade A area show no significant changes as the velocity of the supplied airflow varies, and particles generated by the operator are exhausted outside the Grade A area without causing contamination.

  5. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.
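
    A deliberately tiny illustration of the algorithm-replacement idea described above: a "dusty deck" reduction loop is recognized (here simply by name, standing in for the pattern matcher) and replaced by a parallel library routine. Everything in the sketch, names included, is hypothetical and far simpler than the Fortran-level analysis the article describes.

      from multiprocessing import Pool

      def sequential_dot(a, b):
          # the sequential "pattern" a recognizer might identify in legacy code
          s = 0.0
          for x, y in zip(a, b):
              s += x * y
          return s

      def parallel_dot(a, b, workers=4):
          # the replacement: split into chunks, reduce the partial results
          chunk = max(1, len(a) // workers)
          pieces = [(a[i:i + chunk], b[i:i + chunk]) for i in range(0, len(a), chunk)]
          with Pool(workers) as pool:
              return sum(pool.starmap(sequential_dot, pieces))

      REPLACEMENTS = {"sequential_dot": parallel_dot}   # recognized pattern -> parallel algorithm

      if __name__ == "__main__":
          a = list(range(1000))
          b = [2.0] * 1000
          print(sequential_dot(a, b), REPLACEMENTS["sequential_dot"](a, b))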

  6. Inexpensive driver for stereo videogame glasses

    Science.gov (United States)

    Pique, Michael; Coogan, Anthony

    1990-09-01

    We have adapted home videogame glasses from Sega as workstation stereo viewers. A small (4x7x9 cm.) box of electronics receives sync signals in parallel with the monitor (either separate RGB-Sync or composite video) and drives the glasses. The view is dimmer than with costlier shutters, there is more ghosting, and the user is tethered by the wires. But the glasses are so much cheaper than the full-screen shutters ($250 instead of about $10 000) that it is practical to provide the benefits of stereo to many more workstation users. We are using them with Sun TAAC-1 workstations; the interlaced video can also be recorded on ordinary NTSC videotape and played on television monitors.

  7. Bio-optofluidics and Bio-photonics: Programmable Phase Optics activities at DTU Fotonik

    DEFF Research Database (Denmark)

    Bañas, Andrew Rafael; Palima, Darwin; Pedersen, Finn

    We present ongoing research and development activities for constructing a compact next generation BioPhotonics Workstation and a Bio-optofluidic Cell Sorter (cell-BOCS) for all-optical micromanipulation platforms utilizing low numerical aperture beam geometries. Unlike conventional high NA optical tweezers, the BioPhotonics Workstation is, e.g., capable of long range 3D manipulation. This enables a variety of biological studies such as manipulation of intricate microfabricated assemblies or automated and parallel optofluidic cell sorting. To further reduce its overhead, we propose ways of making the BioPhotonics Workstation platform more photon efficient by studying the 3D distribution of the counter propagating beams and utilizing the Generalized Phase Contrast (GPC) method for illuminating the applied spatial light modulators.

  8. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-29

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application. The PAMI is composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes and the endpoints are coupled for data communications through the PAMI and through data communications resources. The data communications include receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by an instruction type and specifying a transmission of transfer data from the origin endpoint to a target endpoint, and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.

  9. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  10. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  11. UFMulti: A new parallel processing software system for HEP

    Science.gov (United States)

    Avery, Paul; White, Andrew

    1989-12-01

    UFMulti is a multiprocessing software package designed for general purpose high energy physics applications, including physics and detector simulation, data reduction and DST physics analysis. The system is particularly well suited for installations where several workstations or computers are connected through a local area network (LAN). The initial configuration of the software is currently running on VAX/VMS machines with a planned extension to ULTRIX, using the new RISC CPUs from Digital, in the near future.

  12. A simple multiprocessor management system for event-parallel computing

    International Nuclear Information System (INIS)

    Bracker, S.; Gounder, K.; Hendrix, K.; Summers, D.

    1996-01-01

    Offline software using Transmission Control Protocol/Internet Protocol (TCP/IP) sockets to distribute particle physics events to multiple UNIX/RISC workstations is described. A modular, building-block approach was taken that allowed tailoring to solve specific tasks efficiently and simply as they arose. The modest initial cost was having to learn about sockets for interprocess communication. This multiprocessor management software has been used to control the reconstruction of eight billion raw data events from Fermilab Experiment E791.
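
    In the same spirit as the socket-based event distribution described above, the minimal sketch below length-prefixes byte-packed "events" and hands one to each worker connection. The host, port, and pull-style protocol are invented for illustration and are much simpler than the E791 production system.

      import socket
      import struct

      def serve_events(events, host="127.0.0.1", port=5005):
          # dispatcher: hand each connecting worker the next unprocessed event
          srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          srv.bind((host, port))
          srv.listen()
          for payload in events:
              conn, _ = srv.accept()
              with conn:
                  conn.sendall(struct.pack("!I", len(payload)) + payload)  # length-prefixed event
          srv.close()

      def fetch_event(host="127.0.0.1", port=5005):
          # worker: connect, read one length-prefixed event, return its bytes
          with socket.create_connection((host, port)) as s:
              (length,) = struct.unpack("!I", s.recv(4))
              data = b""
              while len(data) < length:
                  data += s.recv(length - len(data))
          return data

    serve_events runs in one process while each worker process calls fetch_event repeatedly until the dispatcher has handed out all events and closes.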

  13. UFMULTI: A new parallel processing software system for HEP

    International Nuclear Information System (INIS)

    Avery, P.; White, A.

    1989-01-01

    UFMulti is a multiprocessing software package designed for general purpose high energy physics applications, including physics and detector simulation, data reduction and DST physics analysis. The system is particularly well suited for installations where several workstations or computers are connected through a local area network (LAN). The initial configuration of the software is currently running on VAX/VMS machines with a planned extension to ULTRIX, using the new RISC CPUs from Digital, in the near future. (orig.)

  14. Vectorization, parallelization and porting of nuclear codes (vectorization and parallelization). Progress report fiscal 1998

    International Nuclear Information System (INIS)

    Ishizuki, Shigeru; Kawai, Wataru; Nemoto, Toshiyuki; Ogasawara, Shinobu; Kume, Etsuo; Adachi, Masaaki; Kawasaki, Nobuo; Yatake, Yo-ichi

    2000-03-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system and the Paragon system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. These results are reported in 3 parts, i.e., the vectorization and parallelization on vector processors part, the parallelization on scalar processors part and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In the vectorization and parallelization on vector processors part, the vectorization of the General Tokamak Circuit Simulation Program code GTCSP, and the vectorization and parallelization of the Molecular Dynamics NTV (n-particle, Temperature and Velocity) Simulation code MSP2, the Eddy Current Analysis code EDDYCAL, the Thermal Analysis Code for Test of Passive Cooling System by HENDEL T2 code THANPACST2 and the MHD Equilibrium code SELENEJ on the VPP500, are described. In the parallelization on scalar processors part, the parallelization of the Monte Carlo N-Particle Transport code MCNP4B2, the Plasma Hydrodynamics code using the Cubic Interpolated Propagation Method PHCIP and the Vectorized Monte Carlo code (continuous energy model / multi-group model) MVP/GMVP on the Paragon is described. In the porting part, the porting of the Monte Carlo N-Particle Transport code MCNP4B2 and the Reactor Safety Analysis code RELAP5 to the AP3000 is described. (author)

  15. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest ... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
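
    List ranking is compactly expressed with the classic pointer-jumping formulation, sketched below as a serial simulation of the parallel rounds; the paper's PEM algorithm is more involved, so this is only a schematic aid under assumed toy data, not the authors' method.

      import numpy as np

      def list_rank(succ):
          # succ[i] is the index of the next node; the tail points to itself.
          # Returns rank[i] = number of hops from node i to the tail.
          succ = np.array(succ)
          rank = np.where(succ == np.arange(len(succ)), 0, 1)
          while np.any(succ != succ[succ]):     # one parallel "round" per iteration, all nodes at once
              rank = rank + rank[succ]          # add the rank of the node currently pointed to
              succ = succ[succ]                 # pointer jumping: halve the remaining distance
          return rank

      # linked list 0 -> 2 -> 4 -> 1 -> 3 (tail); expected ranks [4, 1, 3, 0, 2]
      print(list_rank([2, 3, 4, 3, 1]))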

  16. A Low-Cost PC-Based Image Workstation for Dynamic Interactive Display of Three-Dimensional Anatomy

    Science.gov (United States)

    Barrett, William A.; Raya, Sai P.; Udupa, Jayaram K.

    1989-05-01

    A system for interactive definition, automated extraction, and dynamic interactive display of three-dimensional anatomy has been developed and implemented on a low-cost PC-based image workstation. An iconic display is used for staging predefined image sequences through specified increments of tilt and rotation over a solid viewing angle. Use of a fast processor facilitates rapid extraction and rendering of the anatomy into predefined image views. These views are formatted into a display matrix in a large image memory for rapid interactive selection and display of arbitrary spatially adjacent images within the viewing angle, thereby providing motion parallax depth cueing for efficient and accurate perception of true three-dimensional shape, size, structure, and spatial interrelationships of the imaged anatomy. The visual effect is that of holding and rotating the anatomy in the hand.

  17. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Parallel channel interactions are examined. Results are shown from experimental research on non-stationary flow regimes in three parallel vertical channels, together with an analysis of the phenomena and of the mechanisms of parallel channel interaction under adiabatic conditions for single-phase fluid and two-phase mixture flow. (author)

  18. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  19. An Iterative Approach To Development Of A PACS Display Workstation

    Science.gov (United States)

    O'Malley, Kathleen G.

    1989-05-01

    An iterative prototyping approach has been used in the development of requirements for a new user interface for the display workstation in the CommView system product line. This approach involves many steps, including development of the preliminary concept, validation and ranking of ideas within that concept, prototyping, evaluating, and revising. We describe in this paper the process undertaken to design and evaluate the new user interface. Staff at Abbott Northwestern Hospital, Bowman Gray/Baptist Hospital Medical Center, Duke University Medical Center, Georgetown University Medical Center and Robert Wood Johnson University Hospital participated in various aspects of the study. The subject population included radiologists, residents, technologists and staff physicians from several areas in the hospitals. Subjects participated in in-depth interviews, answered questionnaires, and performed specific tasks, to aid our development process. We feel this method has resulted in a product that will achieve a high level of customer satisfaction, developed in less time than with a traditional approach. Some of the reasons we believe in the value of this approach are: • Users may not be able to describe their needs in terms that designers are expecting, leading to misinterpretation; • Users may not be able to choose between options without seeing them; • Users' needs and choices evolve with experience; • Users' true choices and needs may not seem logical to one not performing those tasks (i.e., the designers).

  20. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    ... adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral ...