WorldWideScience

Sample records for high-level parallel programming

  1. Parallel Libraries to support High-Level Programming

    DEFF Research Database (Denmark)

    Larsen, Morten Nørgaard

    and the Microsoft .NET framework. Normally, one would not directly think of the .NET framework when talking about scientific applications, but Microsoft has in the last couple of versions of .NET introduced a number of tools for writing parallel and high-performance code. The first section examines how programmers can...

  2. Adapting high-level language programs for parallel processing using data flow

    Science.gov (United States)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.
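    The core mechanism described here, deducing a data flow graph from subprogram calls and their data dependencies, can be sketched in Python. EASY-FLOW's actual syntax is not shown in this record, so the call names and the dictionary encoding below are purely illustrative:

```python
# Hypothetical "program": each subprogram call lists the data it reads
# (inputs) and produces (outputs). The data flow graph is implicit in
# these dependencies, giving the large-grained parallelism described above.
calls = {
    "load_a":  {"inputs": [],         "outputs": ["a"]},
    "load_b":  {"inputs": [],         "outputs": ["b"]},
    "combine": {"inputs": ["a", "b"], "outputs": ["c"]},
    "report":  {"inputs": ["c"],      "outputs": []},
}

def schedule(calls):
    """Group calls into levels: a call is ready once its inputs exist.
    All calls within one level are independent and could run in parallel."""
    produced, order = set(), []
    pending = dict(calls)
    while pending:
        ready = sorted(n for n, c in pending.items()
                       if all(i in produced for i in c["inputs"]))
        if not ready:
            raise ValueError("cycle in data flow graph")
        order.append(ready)
        for n in ready:
            produced.update(pending.pop(n)["outputs"])
    return order
```

    Here `schedule(calls)` yields `[["load_a", "load_b"], ["combine"], ["report"]]`: the two loads form one parallel level, exactly the large-grained parallelism between subprograms that EASY-FLOW targets.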

  3. Connectionist Models and Parallelism in High Level Vision.

    Science.gov (United States)

    Feldman, Jerome A.

    1985-01-01

    Computer science is just beginning to look seriously at parallel computation: it may turn out that... the chair. The program includes intermediate-level networks that compute more complex joints and ones that compute parallelograms in the image.

  4. The high level vibration test program

    International Nuclear Information System (INIS)

    Hofmayer, C.H.; Curreri, J.R.; Park, Y.J.; Kato, W.Y.; Kawakami, S.

    1989-01-01

    As part of cooperative agreements between the US and Japan, tests have been performed on the seismic vibration table at the Tadotsu Engineering Laboratory of Nuclear Power Engineering Test Center (NUPEC) in Japan. The objective of the test program was to use the NUPEC vibration table to drive large diameter nuclear power piping to substantial plastic strain with an earthquake excitation and to compare the results with state-of-the-art analysis of the problem. The test model was subjected to a maximum acceleration well beyond what nuclear power plants are designed to withstand. A modified earthquake excitation was applied and the excitation level was increased carefully to minimize the cumulative fatigue damage due to the intermediate level excitations. Since the piping was pressurized, and the high level earthquake excitation was repeated several times, it was possible to investigate the effects of ratchetting and fatigue as well. Elastic and inelastic seismic response behavior of the test model was measured in a number of test runs with an increasing excitation input level up to the limit of the vibration table. In the maximum input condition, large dynamic plastic strains were obtained in the piping. Crack initiation was detected following the second maximum excitation run. Crack growth was carefully monitored during the next two additional maximum excitation runs. The final test resulted in a maximum crack depth of approximately 94% of the wall thickness. The HLVT (high level vibration test) program has enhanced understanding of the behavior of piping systems under severe earthquake loading. As in other tests to failure of piping components, it has demonstrated significant seismic margin in nuclear power plant piping

  5. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race
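    The chapter topics listed here (processes, shared memory, barriers) have direct counterparts in Python's multiprocessing module. A minimal sketch, using the POSIX-only fork start method for simplicity; the book's own examples are in Fortran and C on shared-memory machines:

```python
import multiprocessing as mp

ctx = mp.get_context("fork")  # POSIX-only start method, for simplicity

def worker(rank, total, barrier):
    with total.get_lock():        # shared memory: lock-protected update
        total.value += rank
    barrier.wait()                # no process proceeds until all have added

def parallel_sum(n):
    total = ctx.Value("i", 0)     # one shared integer
    barrier = ctx.Barrier(n + 1)  # n workers + the parent process
    procs = [ctx.Process(target=worker, args=(r, total, barrier))
             for r in range(n)]
    for p in procs:
        p.start()
    barrier.wait()                # parent waits at the same barrier
    result = total.value          # safe to read: all updates are done
    for p in procs:
        p.join()
    return result
```

    Calling `parallel_sum(4)` returns 0 + 1 + 2 + 3 = 6 regardless of the order in which the processes run; the lock serializes the updates and the barrier orders the final read.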

  6. The High Level Vibration Test Program

    International Nuclear Information System (INIS)

    Hofmayer, C.H.; Curreri, J.R.; Park, Y.J.; Kato, W.Y.; Kawakami, S.

    1989-01-01

    As part of cooperative agreements between the United States and Japan, tests have been performed on the seismic vibration table at the Tadotsu Engineering Laboratory of Nuclear Power Engineering Test Center (NUPEC) in Japan. The objective of the test program was to use the NUPEC vibration table to drive large diameter nuclear power piping to substantial plastic strain with an earthquake excitation and to compare the results with state-of-the-art analysis of the problem. The test model was designed by modifying the 1/2.5 scale model of the PWR primary coolant loop. Elastic and inelastic seismic response behavior of the test model was measured in a number of test runs with an increasing excitation input level up to the limit of the vibration table. In the maximum input condition, large dynamic plastic strains were obtained in the piping. Crack initiation was detected following the second maximum excitation run. The test model was subjected to a maximum acceleration well beyond what nuclear power plants are designed to withstand. This paper describes the overall plan, input motion development, test procedure, test results and comparisons with pre-test analysis. 4 refs., 16 figs., 2 tabs

  7. The High Level Vibration Test program

    International Nuclear Information System (INIS)

    Hofmayer, C.H.; Curreri, J.R.; Park, Y.J.; Kato, W.Y.; Kawakami, S.

    1990-01-01

    As part of cooperative agreements between the United States and Japan, tests have been performed on the seismic vibration table at the Tadotsu Engineering Laboratory of Nuclear Power Engineering Test Center (NUPEC) in Japan. The objective of the test program was to use the NUPEC vibration table to drive large diameter nuclear power piping to substantial plastic strain with an earthquake excitation and to compare the results with state-of-the-art analysis of the problem. The test model was designed by modifying the 1/2.5 scale model of the pressurized water reactor primary coolant loop. Elastic and inelastic seismic response behavior of the test model was measured in a number of test runs with an increasing excitation input level up to the limit of the vibration table. In the maximum input condition, large dynamic plastic strains were obtained in the piping. Crack initiation was detected following the second maximum excitation run. The test model was subjected to a maximum acceleration well beyond what nuclear power plants are designed to withstand. This paper describes the overall plan, input motion development, test procedure, test results and comparisons with pre-test analysis

  8. Satin: A high-level and efficient grid programming model

    NARCIS (Netherlands)

    van Nieuwpoort, R.V.; Wrzesinska, G.; Jacobs, C.J.H.; Bal, H.E.

    2010-01-01

    Computational grids have an enormous potential to provide compute power. However, this power remains largely unexploited today for most applications, except trivially parallel programs. Developing parallel grid applications is simply too difficult. Grids introduce several problems not encountered

  9. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell, Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore systems and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  10. High-level waste immobilization program: an overview

    International Nuclear Information System (INIS)

    Bonner, W.R.

    1979-09-01

    The High-Level Waste Immobilization Program is providing technology to allow safe, affordable immobilization and disposal of nuclear waste. Waste forms and processes are being developed on a schedule consistent with national needs for immobilization of high-level wastes stored at Savannah River, Hanford, Idaho National Engineering Laboratory, and West Valley, New York. This technology is directly applicable to high-level wastes from potential reprocessing of spent nuclear fuel. The program is removing one more obstacle previously seen as a potential restriction on the use and further development of nuclear power, and is thus meeting a critical technological need within the national objective of energy independence

  11. Overview: Defense high-level waste technology program

    International Nuclear Information System (INIS)

    Shupe, M.W.; Turner, D.A.

    1987-01-01

    Defense high-level waste generated by atomic energy defense activities is stored on an interim basis at three U.S. Department of Energy (DOE) operating locations: the Savannah River Plant in South Carolina, the Hanford Site in Washington, and the Idaho National Engineering Laboratory in Idaho. Responsibility for the permanent disposal of this waste resides with DOE's Office of Defense Waste and Transportation Management. The objective of the Defense High-Level Waste Technology Program is to develop the technology for ending interim storage and achieving permanent disposal of all U.S. defense high-level waste. New and readily retrievable high-level waste will be immobilized for disposal in a geologic repository. Other high-level waste will be stabilized in place if, after completion of the National Environmental Policy Act (NEPA) process, it is determined, on a site-specific basis, that this option is safe, cost effective and environmentally sound. The immediate program focus is on implementing the waste disposal strategy selected in compliance with the NEPA process at Savannah River, while continuing progress toward development of final waste disposal strategies at Hanford and Idaho. This paper presents an overview of the technology development program which supports these waste management activities and an assessment of the impact that recent and anticipated legal and institutional developments are expected to have on the program

  12. Long-term high-level waste technology program

    International Nuclear Information System (INIS)

    1980-04-01

    The Department of Energy (DOE) is conducting a comprehensive program to isolate all US nuclear wastes from the human environment. The DOE Office of Nuclear Energy - Waste (NEW) has full responsibility for managing the high-level wastes resulting from defense activities and additional responsibility for providing the technology to manage existing commercial high-level wastes and any that may be generated in one of several alternative fuel cycles. Responsibilities of the three divisions of DOE-NEW are shown. This strategy document presents the research and development plan of the Division of Waste Products for long-term immobilization of the high-level radioactive wastes resulting from chemical processing of nuclear reactor fuels and targets. These high-level wastes contain more than 99% of the residual radionuclides produced in the fuels and targets during reactor operations. They include essentially all the fission products and most of the actinides that were not recovered for use

  13. High-level waste management technology program plan

    International Nuclear Information System (INIS)

    Harmon, H.D.

    1995-01-01

    The purpose of this plan is to document the integrated technology program plan for the Savannah River Site (SRS) High-Level Waste (HLW) Management System. The mission of the SRS HLW System is to receive and store SRS high-level wastes in a safe and environmentally sound manner, and to convert these wastes into forms suitable for final disposal. These final disposal forms are borosilicate glass to be sent to the Federal Repository, Saltstone grout to be disposed of on site, and treated waste water to be released to the environment via a permitted outfall. Thus, the technology development activities described herein are those activities required to enable successful accomplishment of this mission. The technology program is based on specific needs of the SRS HLW System and organized following the systems engineering level 3 functions. Technology needs for each level 3 function are listed as reference, enhancements, and alternatives. Finally, FY-95 funding, deliverables, and schedules are summarized in Chapter IV with details on the specific tasks that are funded in FY-95 provided in Appendix A. The information in this report represents the vision of activities as defined at the beginning of the fiscal year. Depending on emergent issues, funding changes, and other factors, programs and milestones may be adjusted during the fiscal year. The FY-95 SRS HLW technology program strongly emphasizes startup support for the Defense Waste Processing Facility and In-Tank Precipitation. Closure of technical issues associated with these operations has been given highest priority. Consequently, efforts on longer term enhancements and alternatives are receiving minimal funding. However, High-Level Waste Management is committed to participation in the national Radioactive Waste Tank Remediation Technology Focus Area. 4 refs., 5 figs., 9 tabs

  14. High-level waste management technology program plan

    Energy Technology Data Exchange (ETDEWEB)

    Harmon, H.D.

    1995-01-01

    The purpose of this plan is to document the integrated technology program plan for the Savannah River Site (SRS) High-Level Waste (HLW) Management System. The mission of the SRS HLW System is to receive and store SRS high-level wastes in a safe and environmentally sound manner, and to convert these wastes into forms suitable for final disposal. These final disposal forms are borosilicate glass to be sent to the Federal Repository, Saltstone grout to be disposed of on site, and treated waste water to be released to the environment via a permitted outfall. Thus, the technology development activities described herein are those activities required to enable successful accomplishment of this mission. The technology program is based on specific needs of the SRS HLW System and organized following the systems engineering level 3 functions. Technology needs for each level 3 function are listed as reference, enhancements, and alternatives. Finally, FY-95 funding, deliverables, and schedules are summarized in Chapter IV with details on the specific tasks that are funded in FY-95 provided in Appendix A. The information in this report represents the vision of activities as defined at the beginning of the fiscal year. Depending on emergent issues, funding changes, and other factors, programs and milestones may be adjusted during the fiscal year. The FY-95 SRS HLW technology program strongly emphasizes startup support for the Defense Waste Processing Facility and In-Tank Precipitation. Closure of technical issues associated with these operations has been given highest priority. Consequently, efforts on longer term enhancements and alternatives are receiving minimal funding. However, High-Level Waste Management is committed to participation in the national Radioactive Waste Tank Remediation Technology Focus Area. 4 refs., 5 figs., 9 tabs.

  15. User-Defined Data Distributions in High-Level Programming Languages

    Science.gov (United States)

    Diaconescu, Roxana E.; Zima, Hans P.

    2006-01-01

    One of the characteristic features of today's high performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is designed in the HPCS project Cascade.
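    A user-defined distribution specifies, among other things, a mapping from a global index to the locale that owns it. The two classic mappings can be sketched in Python (an illustration of the concept only, not Chapel's actual interface):

```python
def block_owner(i, n, p):
    """Owner of global index i when n elements are block-distributed
    over p locales: contiguous chunks of ceil(n / p) elements each."""
    return i // -(-n // p)

def cyclic_owner(i, p):
    """Owner of global index i under a cyclic (round-robin) distribution."""
    return i % p
```

    For 10 elements over 4 locales, the block mapping yields owners `0 0 0 1 1 1 2 2 2 3` and the cyclic mapping `0 1 2 3 0 1 2 3 0 1`; a locality-aware compiler can use such a mapping to place each loop iteration on the locale that owns its data.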

  16. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.
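    The simplest of the techniques such a book covers, data parallelism over a pool of worker processes, takes only a few lines (the POSIX-only fork start method is assumed here for simplicity):

```python
import multiprocessing as mp

def square(x):
    return x * x

def squares(xs, workers=4):
    # Pool.map splits xs across worker processes and
    # returns the results in the original order.
    with mp.get_context("fork").Pool(workers) as pool:
        return pool.map(square, xs)
```

    `squares(range(8))` returns `[0, 1, 4, 9, 16, 25, 36, 49]`, computed by four processes in parallel.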

  17. High-level waste program integration within the DOE complex

    International Nuclear Information System (INIS)

    Valentine, J.H.; Malone, K.; Schaus, P.S.

    1998-03-01

    Eleven major Department of Energy (DOE) site contractors were chartered by the Assistant Secretary to use a systems engineering approach to develop and evaluate technically defensible cost savings opportunities across the complex. Known as the complex-wide Environmental Management Integration (EMI), this process evaluated all the major DOE waste streams including high level waste (HLW). Across the DOE complex, this waste stream has the highest life cycle cost and is scheduled to take until at least 2035 before all HLW is processed for disposal. Technical contract experts from the four DOE sites that manage high level waste participated in the integration analysis: Hanford, Savannah River Site (SRS), Idaho National Engineering and Environmental Laboratory (INEEL), and West Valley Demonstration Project (WVDP). In addition, subject matter experts from the Yucca Mountain Project and the Tanks Focus Area participated in the analysis. Also, departmental representatives from the US Department of Energy Headquarters (DOE-HQ) monitored the analysis and results. Workouts were held throughout the year to develop recommendations to achieve a complex-wide integrated program. From this effort, the HLW Environmental Management (EM) Team identified a set of programmatic and technical opportunities that could result in potential cost savings and avoidance in excess of $18 billion and an accelerated completion of the HLW mission by seven years. The cost savings, schedule improvements, and volume reduction are attributed to a multifaceted HLW treatment disposal strategy which involves waste pretreatment, standardized waste matrices, risk-based retrieval, early development and deployment of a shipping system for glass canisters, and reasonable, low cost tank closure

  18. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  19. Defense High-Level Waste Leaching Mechanisms Program. Final report

    International Nuclear Information System (INIS)

    Mendel, J.E.

    1984-08-01

    The Defense High-Level Waste Leaching Mechanisms Program brought six major US laboratories together for three years of cooperative research. The participants reached a consensus that solubility of the leached glass species, particularly solubility in the altered surface layer, is the dominant factor controlling the leaching behavior of defense waste glass in a system in which the flow of leachant is constrained, as it will be in a deep geologic repository. Also, once the surface of waste glass is contacted by ground water, the kinetics of establishing solubility control are relatively rapid. The concentrations of leached species reach saturation, or steady-state concentrations, within a few months to a year at 70 to 90°C. Thus, reaction kinetics, which were the main subject of earlier leaching mechanisms studies, are now shown to assume much less importance. The dominance of solubility means that the leach rate is, in fact, directly proportional to ground water flow rate. Doubling the flow rate doubles the effective leach rate. This relationship is expected to obtain in most, if not all, repository situations

  20. Defense High-Level Waste Leaching Mechanisms Program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Mendel, J.E. (compiler)

    1984-08-01

    The Defense High-Level Waste Leaching Mechanisms Program brought six major US laboratories together for three years of cooperative research. The participants reached a consensus that solubility of the leached glass species, particularly solubility in the altered surface layer, is the dominant factor controlling the leaching behavior of defense waste glass in a system in which the flow of leachant is constrained, as it will be in a deep geologic repository. Also, once the surface of waste glass is contacted by ground water, the kinetics of establishing solubility control are relatively rapid. The concentrations of leached species reach saturation, or steady-state concentrations, within a few months to a year at 70 to 90°C. Thus, reaction kinetics, which were the main subject of earlier leaching mechanisms studies, are now shown to assume much less importance. The dominance of solubility means that the leach rate is, in fact, directly proportional to ground water flow rate. Doubling the flow rate doubles the effective leach rate. This relationship is expected to obtain in most, if not all, repository situations.

  1. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known, the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker: Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  2. Extending molecular simulation time scales: Parallel in time integrations for high-level quantum chemistry and complex force representations

    Energy Technology Data Exchange (ETDEWEB)

    Bylaska, Eric J., E-mail: Eric.Bylaska@pnnl.gov [Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, P.O. Box 999, Richland, Washington 99352 (United States); Weare, Jonathan Q., E-mail: weare@uchicago.edu [Department of Mathematics, University of Chicago, Chicago, Illinois 60637 (United States); Weare, John H., E-mail: jweare@ucsd.edu [Department of Chemistry and Biochemistry, University of California, San Diego, La Jolla, California 92093 (United States)

    2013-08-21

    to 14.3. The parallel in time algorithms can be implemented in a distributed computing environment using very slow transmission control protocol/Internet protocol networks. Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl + 4H₂O at the MP2/6-31G* level. Implemented in this way these algorithms can be used for long time high-level AIMD simulations at a modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. Using these algorithms, we are able to reduce the cost of a MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 s/time step to 6.9 s/time step.
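    The best-known parallel-in-time scheme is parareal, which alternates a cheap serial coarse propagator with expensive fine propagations that are independent across time slices. The paper's own algorithms may differ in detail; in this sketch, explicit Euler stands in for both propagators:

```python
def euler(f, y, t, dt, steps):
    # explicit Euler integrator, used for both propagators below
    for _ in range(steps):
        y = y + dt * f(t, y)
        t += dt
    return y

def parareal(f, y0, t0, T, slices, fine_steps, iters):
    """Parareal for dy/dt = f(t, y): coarse G = 1 Euler step per slice,
    fine F = fine_steps Euler steps per slice. The fine propagations in
    each iteration are independent of each other; that is where a
    parallel machine (or a slow network of machines) earns its speedup."""
    dt = (T - t0) / slices
    G = lambda y, t: euler(f, y, t, dt, 1)
    F = lambda y, t: euler(f, y, t, dt / fine_steps, fine_steps)
    y = [y0]                                  # initial coarse sweep
    for n in range(slices):
        y.append(G(y[n], t0 + n * dt))
    for _ in range(iters):
        Fy = [F(y[n], t0 + n * dt) for n in range(slices)]  # parallel part
        y_new = [y0]                          # serial coarse correction
        for n in range(slices):
            y_new.append(G(y_new[n], t0 + n * dt) + Fy[n] - G(y[n], t0 + n * dt))
        y = y_new
    return y[-1]
```

    After at most `slices` iterations the result coincides with the serial fine integration, so the useful regime is the one the abstract describes: few iterations, many slices, and fine propagators (here, whole quantum chemistry time steps) that dominate the cost.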

  3. Extending molecular simulation time scales: Parallel in time integrations for high-level quantum chemistry and complex force representations

    International Nuclear Information System (INIS)

    Bylaska, Eric J.; Weare, Jonathan Q.; Weare, John H.

    2013-01-01

    distributed computing environment using very slow transmission control protocol/Internet protocol networks. Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl + 4H₂O at the MP2/6-31G* level. Implemented in this way these algorithms can be used for long time high-level AIMD simulations at a modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. Using these algorithms, we are able to reduce the cost of a MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 s/time step to 6.9 s/time step

  4. Extending molecular simulation time scales: Parallel in time integrations for high-level quantum chemistry and complex force representations.

    Science.gov (United States)

    Bylaska, Eric J; Weare, Jonathan Q; Weare, John H

    2013-08-21

    distributed computing environment using very slow transmission control protocol/Internet protocol networks. Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl + 4H2O at the MP2/6-31G* level. Implemented in this way these algorithms can be used for long time high-level AIMD simulations at a modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. Using these algorithms, we are able to reduce the cost of a MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 s/time step to 6.9 s/time step.

  5. Parallel phase model: a programming model for high-end parallel machines with manycores

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.
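    PPM's constructs themselves are proposed as C extensions over MPI and are not reproduced in this abstract, but the flavor of a "phase" (fine-grained tasks running freely inside it, with an implicit synchronization at its end) can be imitated in Python. This is an analogy only, not PPM's API:

```python
from concurrent.futures import ThreadPoolExecutor

def phase(tasks, workers=4):
    """Run one phase: all tasks execute concurrently, and the call
    returns only when every task has finished (an implicit barrier),
    so successive phases are cleanly ordered."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(t) for t in tasks]
        return [f.result() for f in futures]
```

    `phase([lambda i=i: i * i for i in range(4)])` returns `[0, 1, 4, 9]`; irregular, fine-grained work inside a phase can interleave freely, while the phase boundary provides the coarse, cluster-level ordering.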

  6. Hanford high level waste: Sample Exchange/Evaluation (SEE) Program

    International Nuclear Information System (INIS)

    King, A.G.

    1994-08-01

    The Pacific Northwest Laboratory (PNL)/Analytical Chemistry Laboratory (ACL) and the Westinghouse Hanford Company (WHC)/Process Analytical Laboratory (PAL) provide analytical support services to various environmental restoration and waste management projects/programs at Hanford. In response to a US Department of Energy -- Richland Field Office (DOE-RL) audit, which questioned the comparability of analytical methods employed at each laboratory, the Sample Exchange/Evaluation (SEE) program was initiated. The SEE Program is a self-assessment program designed to compare analytical methods of the PAL and ACL laboratories using site-specific waste material. The SEE program is managed by a collaborative, the Quality Assurance Triad (Triad). Triad membership is made up of representatives from the WHC/PAL, PNL/ACL, and WHC Hanford Analytical Services Management (HASM) organizations. The Triad works together to design/evaluate/implement each phase of the SEE Program

  7. French high level nuclear waste program: key research areas

    International Nuclear Information System (INIS)

    Sombret, G.

    1985-09-01

    The most important aspects of this research program concern disposal safety: the long-term behavior and sensitivity of the materials to the variability inherent in industrial processes, and the characterization of the final product. This research requires different investigations involving various scientific fields, and makes use of radioactive and non-radioactive glass samples as well as industrial-scale glass blocks. Certain studies have now been completed; others are still in progress

  8. The FORCE: A highly portable parallel programming language

    Science.gov (United States)

    Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger

    1989-01-01

    Here, it is explained why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.

  9. The FORCE - A highly portable parallel programming language

    Science.gov (United States)

    Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger

    1989-01-01

    This paper explains why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.

  10. Future directions of defense programs high-level waste technology programs

    International Nuclear Information System (INIS)

    Chee, T.C.; Shupe, M.W.; Turner, D.A.; Campbell, M.H.

    1987-01-01

    The Department of Energy has been managing high-level waste from the production of nuclear materials for defense activities over the last forty years. An objective for the Defense Waste and Transportation Management program is to develop technology which ensures the safe, permanent disposal of all defense radioactive wastes. Technology programs are underway to address the long-term strategy for permanent disposal of high-level waste generated at each Department of Energy site. Technology is being developed for assessing the hazards, environmental impacts, and costs of each long-term disposal alternative for selection and implementation. This paper addresses key technology development areas, and consideration of recent regulatory requirements associated with the long-term management of defense radioactive high-level waste

  11. Refinement of Parallel and Reactive Programs

    OpenAIRE

    Back, R. J. R.

    1992-01-01

    We show how to apply the refinement calculus to stepwise refinement of parallel and reactive programs. We use action systems as our basic program model. Action systems are sequential programs which can be implemented in a parallel fashion. Hence refinement calculus methods, originally developed for sequential programs, carry over to the derivation of parallel programs. Refinement of reactive programs is handled by data refinement techniques originally developed for the sequential refinement c...

  12. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

Full Text Available In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with those efforts. This paper aims to make a small contribution to them. We propose an overview of parallel programming, parallel execution, and collaborative systems.

  13. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  14. Parallel programming with Easy Java Simulations

    Science.gov (United States)

    Esquembre, F.; Christian, W.; Belloni, M.

    2018-01-01

Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.
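A shared-memory, data-parallel evaluation of a physics function can be sketched in Python (the article itself uses Java with Kaminsky's classes; the wave-packet formula below is only an illustrative stand-in):

```python
from concurrent.futures import ThreadPoolExecutor
import cmath

def psi(x, t=0.5, k0=2.0, sigma=1.0):
    # Free Gaussian wave packet (illustrative stand-in for the article's
    # Schrödinger-equation examples), evaluated pointwise.
    s = sigma * (1 + 1j * t / sigma**2)
    return cmath.exp(-(x - k0 * t)**2 / (2 * s)) * cmath.exp(1j * k0 * x)

xs = [i * 0.01 for i in range(-500, 500)]

# Shared-memory worker team: threads all read xs and compute independently.
with ThreadPoolExecutor(max_workers=4) as pool:
    values = list(pool.map(psi, xs))

serial = [psi(x) for x in xs]
print(values == serial)  # parallel result matches the serial loop
```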

  15. The Canadian program for management of spent fuel and high level wastes

    International Nuclear Information System (INIS)

    Barnes, R.W.; Mayman, S.A.

    A brief history and description of the nuclear power program in Canada is given. Schedules and programs are described for storing spent fuel in station fuel bays, centralized water pool storage facilities, concrete canisters, convection vaults, and rock or salt formations. High-level wastes will be retrievable initially, therefore the focus is on storage in mined cavities. The methods developed for high-level waste storage/disposal will ideally be flexible enough to accommodate spent fuel. (E.C.B.)

  16. Compiling Scientific Programs for Scalable Parallel Systems

    National Research Council Canada - National Science Library

    Kennedy, Ken

    2001-01-01

    ...). The research performed in this project included new techniques for recognizing implicit parallelism in sequential programs, a powerful and precise set-based framework for analysis and transformation...

  17. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements High Performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
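The owner-computes semantics this record describes can be mimicked in a few lines of Python (PDDP itself is a Fortran preprocessor; `block_range` and the explicit processor loop are illustrative, not PDDP syntax):

```python
NPROCS = 4
N = 16
a = [0.0] * N                      # distributed object in a global name space
b = [float(i) for i in range(N)]   # data replicated on each processor

def block_range(p, n, nprocs):
    # Contiguous block of the global index space owned by processor p.
    size = -(-n // nprocs)  # ceiling division
    return range(p * size, min((p + 1) * size, n))

# FORALL (i = 1:N) a(i) = 2*b(i), guarded by a WHERE-style mask b(i) > 5:
for p in range(NPROCS):                    # each "processor" updates
    for i in block_range(p, N, NPROCS):    # only the elements it owns
        if b[i] > 5:                       # WHERE construct
            a[i] = 2 * b[i]

print(a[:8])  # [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.0, 14.0]
```

The user writes against the global index space; the preprocessor's job is to generate the per-processor loops and communication automatically.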

  18. US program for the immobilization of high-level nuclear wastes

    International Nuclear Information System (INIS)

    Crandall, J.L.

    1979-01-01

    A program has been developed for long-term management of high-level nuclear waste. The Savannah River Operations Office of the US Department of Energy is acting as the lead office for this program with technical advice from the E.I. du Pont de Nemours and Company. The purpose of the long-term program is to immobilize the DOE high-level waste in forms that act as highly efficient barriers against radionuclide release to the disposal site and to provide technology for similar treatment of commercial high-level waste in case reprocessing of commercial nuclear fuels is ever resumed. Descriptions of existing DOE and commercial wastes, program strategy, program expenditures, development of waste forms, evaluation and selection of waste forms, regulatory aspects of waste form selection, project schedules, and cost estimates for immobilization facilities are discussed

  19. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today\\'s large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  20. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  1. Language constructs for modular parallel programs

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.

    1996-03-01

We describe programming language constructs that facilitate the application of modular design techniques in parallel programming. These constructs allow us to isolate resource management and processor scheduling decisions from the specification of individual modules, which can themselves encapsulate design decisions concerned with concurrency, communication, process mapping, and data distribution. This approach permits development of libraries of reusable parallel program components and the reuse of these components in different contexts. In particular, alternative mapping strategies can be explored without modifying other aspects of program logic. We describe how these constructs are incorporated in two practical parallel programming languages, PCN and Fortran M. Compilers have been developed for both languages, allowing experimentation in substantial applications.
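The separation this abstract argues for, module logic independent of mapping decisions, can be mimicked with executors in Python (PCN/Fortran M syntax is not shown; `transform` and its executor parameter are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def transform(xs, executor_map=map):
    # Program module: specifies *what* to compute; the mapping strategy
    # (serial, threaded, ...) is injected from outside the module.
    return list(executor_map(lambda x: x * x, xs))

serial = transform(range(6))                  # serial mapping
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = transform(range(6), pool.map)  # threaded mapping, same module

print(serial == threaded == [0, 1, 4, 9, 16, 25])  # True
```

Swapping the mapping strategy changes no line of the module, which is the reuse property the constructs are designed to deliver.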

  2. Status of the high-level nuclear waste disposal program in Japan

    International Nuclear Information System (INIS)

    Uematsu, K.

    1985-01-01

The Japan Atomic Energy Commission (JAEC) initiated a high-level radioactive waste disposal program in 1976. Since then, the Advisory Committee on Radioactive Waste Management of JAEC has revised the program twice. The latest revision was issued in 1984. The committee recommended a four-phase program, and the last phase calls for the beginning of emplacement of the high-level nuclear waste into a selected repository in the year 2000. The first phase is already completed, and the second phase of this decade calls for the selection of a candidate disposal site and the conducting of the R&D of waste disposal in an underground research laboratory and in a hot test facility. This paper covers the current status of the high-level nuclear waste disposal program in Japan

  3. Generation of Efficient High-Level Hardware Code from Dataflow Programs

    OpenAIRE

    Siret , Nicolas; Wipliez , Matthieu; Nezan , Jean François; Palumbo , Francesca

    2012-01-01

High-level synthesis (HLS) aims at reducing the time-to-market by providing an automated design process that interprets and compiles high-level abstraction programs into hardware. However, HLS tools still face limitations regarding the performance of the generated code, due to the difficulties of compiling input imperative languages into efficient hardware code. Moreover, the hardware code generated by the HLS tools is usually target-dependent and at a low level of abstraction (i.e. gate-level...

  4. IAEA coordinated research program on the evaluation of solidified high-level radioactive waste products

    International Nuclear Information System (INIS)

    Grover, J.R.; Schneider, K.J.

    1979-01-01

    A coordinated research program on the evaluation of solidified high-level radioactive waste products has been active with the IAEA since 1976. The program's objectives are to integrate research and to provide a data bank on an international basis in this subject area. Results and considerations to date are presented

  5. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    Full Text Available To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.
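The structural property at the heart of this note, that a data-parallel compiler needs independent loop iterations, can be shown in a toy Python example (illustrative only, unrelated to the GROMOS code itself):

```python
n = 8
x = list(range(n))

# Parallelizable: each y[i] depends only on inputs, never on other y's,
# so a data-parallel compiler can distribute the iterations freely.
y = [0] * n
for i in range(n):
    y[i] = 2 * x[i] + 1

# Not parallelizable as written: a loop-carried dependence on z[i-1]
# (a prefix sum); it needs restructuring, e.g. into a parallel scan,
# before the iterations can be distributed.
z = [0] * n
for i in range(1, n):
    z[i] = z[i - 1] + x[i]

print(y)  # [1, 3, 5, 7, 9, 11, 13, 15]
print(z)  # [0, 1, 3, 6, 10, 15, 21, 28]
```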

  6. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

    Full Text Available We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  7. Radioactive Waste Management Research Program Plan for high-level waste: 1987

    International Nuclear Information System (INIS)

    1987-05-01

    This plan will identify and resolve technical and scientific issues involved in the NRC's licensing and regulation of disposal systems intended to isolate high level hazardous radioactive wastes (HLW) from the human environment. The plan describes the program goals, discusses the research approach to be used, lays out peer review procedures, discusses the history and development of the high level radioactive waste problem and the research effort to date and describes study objectives and research programs in the areas of materials and engineering, hydrology and geochemistry, and compliance assessment and modeling. The plan also details the cooperative interactions with international waste management research programs. Proposed Earth Science Seismotectonic Research Program plan for radioactive waste facilities is appended

  8. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  9. The kpx, a program analyzer for parallelization

    International Nuclear Information System (INIS)

    Matsuyama, Yuji; Orii, Shigeo; Ota, Toshiro; Kume, Etsuo; Aikawa, Hiroshi.

    1997-03-01

The kpx is a program analyzer, developed as a common technological basis for promoting parallel processing. The kpx consists of three tools. The first is ktool, which shows how much execution time is spent in program segments. The second is ptool, which shows parallelization overhead on the Paragon system. The last is xtool, which shows parallelization overhead on the VPP system. The kpx, designed to work for any FORTRAN code on any UNIX computer, is confirmed to work well after testing on Paragon, SP2, SR2201, VPP500, VPP300, Monte-4, SX-4 and T90. (author)

  10. Speedup predictions on large scientific parallel programs

    International Nuclear Information System (INIS)

    Williams, E.; Bobrowicz, F.

    1985-01-01

How much speedup can we expect for large scientific parallel programs running on supercomputers? For insight into this problem we extend the parallel processing environment currently existing on the Cray X-MP (a shared memory multiprocessor with at most four processors) to a simulated N-processor environment, where N ≥ 1. Several large scientific parallel programs from Los Alamos National Laboratory were run in this simulated environment, and speedups were predicted. A speedup of 14.4 on 16 processors was measured for one of the three most used codes at the Laboratory
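As a rough cross-check of the reported figure, Amdahl's law (which the paper's simulation approach refines; it is used here only for intuition) relates a 14.4× speedup on 16 processors to a serial fraction below 1%:

```python
def amdahl_speedup(f_serial, n):
    # Predicted speedup on n processors for a serial fraction f_serial.
    return 1.0 / (f_serial + (1.0 - f_serial) / n)

def serial_fraction(speedup, n):
    # Invert Amdahl's law to recover the serial fraction.
    return (n / speedup - 1.0) / (n - 1.0)

f = serial_fraction(14.4, 16)
print(round(f, 4))                       # 0.0074
print(round(amdahl_speedup(f, 16), 1))   # 14.4
```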

  11. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

Full Text Available Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphical processing units, have broadly enhanced parallelism. A number of compilers have been updated to address emerging challenges in synchronization and threading. Appropriate program and algorithm classification can greatly benefit software engineers in identifying opportunities for effective parallelization. In the present work we investigated current species for the classification of algorithms; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms was chosen whose structures match different issues and perform a given task. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants, and mathematical functions. With this, we can retain significant data that is not captured by the original species of algorithms. We implemented these new capabilities in the tool, enabling automatic characterization of program code.

  12. Towards Implementation of a Generalized Architecture for High-Level Quantum Programming Language

    Science.gov (United States)

    Ameen, El-Mahdy M.; Ali, Hesham A.; Salem, Mofreh M.; Badawy, Mahmoud

    2017-08-01

This paper investigates a novel architecture for the problem of quantum computer programming. A generalized architecture for a high-level quantum programming language is proposed, enabling the evolution from complicated quantum-specific programming to high-level, quantum-independent programming. The proposed architecture receives high-level source code and automatically transforms it into the equivalent quantum representation. The architecture comprises two layers, the programmer layer and the compilation layer, implemented in three main stages: pre-classification, classification, and post-classification. The basic building block of each stage is divided into subsequent phases, each implemented to perform the required transformation from one representation to another. A verification process using a case study investigated the ability of the compiler to perform all transformation processes. Experimental results showed that the proposed compiler achieves a correspondence correlation coefficient of about R ≈ 1 between outputs and targets. A clear improvement was also obtained in the time consumed by the optimization process compared to other techniques: in online optimization, the consumed time increases exponentially with the accuracy required, whereas in the proposed offline optimization it increases only gradually.

  13. Portable parallel programming in a Fortran environment

    International Nuclear Information System (INIS)

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network based MIMD processors. The parallelism is implemented using standard UNIX (tm) tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at from 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a ''nearly realistic'' lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs
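The coordination style the PARMACs macros provide, message passing among cooperating processes, can be sketched with Python threads and queues (the actual PARMACs macro names are not reproduced here):

```python
import threading
import queue

work = queue.Queue()     # message channel: master -> workers
results = queue.Queue()  # message channel: workers -> master

def worker():
    while True:
        item = work.get()          # receive a message
        if item is None:           # termination message
            break
        results.put(item * item)   # send a result back

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()
for i in range(10):
    work.put(i)                    # master distributes work
for _ in workers:
    work.put(None)                 # one stop message per worker
for w in workers:
    w.join()

total = sum(results.get() for _ in range(10))
print(total)  # 285 = 0**2 + 1**2 + ... + 9**2
```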

  14. Environmental program planning for the proposed high-level nuclear waste repository at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    1987-08-01

This report was prepared to illustrate the policy and actions that the State of Nevada believes are required to assure that the quality of the environment is adequately considered during the course of the DOE work at the proposed high-level nuclear waste repository at Yucca Mountain. The report describes the DOE environmental program and the studies planned by NWPO to reflect the State's position toward environmental protection. 41 refs., 2 figs., 11 tabs

  15. Hanford long-term high-level waste management program overview

    International Nuclear Information System (INIS)

    Reep, I.E.

    1978-05-01

The objective is the long-term disposition of the defense high-level radioactive waste which will remain upon completion of the interim waste management program in the mid-1980s, plus any additional high-level defense waste resulting from the future operation of N Reactor and the Purex Plant. The high-level radioactive waste which will exist in the mid-1980s and is addressed by this plan consists of approximately 3,300,000 ft³ of damp salt cake stored in single-shell and double-shell waste tanks, 1,500,000 ft³ of damp sludge stored in single-shell and double-shell waste tanks, 11,000,000 gallons of residual liquor stored in double-shell waste tanks, 3,000,000 gallons of liquid wastes stored in double-shell waste tanks awaiting solidification, and 2,900 capsules of ⁹⁰Sr and ¹³⁷Cs compounds stored in water basins. Final quantities of waste may be 5 to 10% greater, depending on the future operation of N Reactor and the Purex Plant and the application of waste treatment techniques currently under study to reduce the inventory of residual liquor. In this report, the high-level radioactive waste addressed by this plan is briefly described, the major alternatives and strategies for long-term waste management are discussed, and a description of the long-term high-level waste management program is presented. Separate plans are being prepared for the long-term management of radioactive wastes which exist in other forms. 14 figures

  16. Development of knowledge building program concerning about high-level radioactive waste disposal

    International Nuclear Information System (INIS)

    Kimura, Hiroshi; Yamada, Kazuhiro; Takase, Hiroyasu

    2005-01-01

Acquirement of knowledge about high-level radioactive waste (HLW) disposal is one of the important factors for the public in determining the social acceptance of HLW disposal. In Japan, however, the public does not have sufficient knowledge about HLW and its disposal. In this work, we developed a knowledge-building program concerning HLW disposal based on Nonaka and Takeuchi's SECI spiral model of knowledge management, and carried out an experiment on this program. We found that the participants' knowledge about HLW disposal increased and changed from 'misunderstanding' or 'assuming' to 'facts' or 'consideration' through this experimental program. These results indicate that the experimental program leads participants to a higher quality of knowledge about HLW disposal. In consequence, this knowledge-building program may be effective for the acquirement of high-quality knowledge. (author)

  17. Michigan high-level radioactive waste program. Technical progress report for 1985

    International Nuclear Information System (INIS)

    1986-01-01

In 1985, five crystalline rock formations located in Michigan's Upper Peninsula were under consideration in the regional phase of the Department of Energy's (DOE) search for the site of the nation's second high-level radioactive waste repository. The Michigan Department of Public Health has been designated by the Governor as lead state agency in matters related to high-level radioactive waste (HLRW). Mr. Lee E. Jager, Chief of the Department's Bureau of Environmental and Occupational Health, has been designated as the state contact person in this matter, and the Bureau's Division of Radiological Health, Office of Radioactive Waste Management (ORWM), has been designated to provide staff support. Recognizing that adequate state involvement in the various aspects of the Federal high-level radioactive waste (HLRW) programs would require a range of expertise beyond the scope of any single state agency, Governor Blanchard established the High-Level Radioactive Waste Task Force in 1983. In support of the Task Force efforts concerning the implementation of its charge, the Department negotiated and concluded an agreement with the DOE, under which federal funds are provided to support state HLRW activities. This report outlines state activities for the calendar year 1985, funded under that agreement

  18. An integrated approach to strategic planning in the civilian high-level radioactive waste management program

    International Nuclear Information System (INIS)

    Sprecher, W.M.; Katz, J.; Redmond, R.J.

    1992-01-01

    This paper describes the approach that the Office of Civilian Radioactive Waste Management (OCRWM) of the Department of Energy (DOE) is taking to the task of strategic planning for the civilian high-level radioactive waste management program. It highlights selected planning products and activities that have emerged over the past year. It demonstrates that this approach is an integrated one, both in the sense of being systematic on the program level but also as a component of DOE strategic planning efforts. Lastly, it indicates that OCRWM strategic planning takes place in a dynamic environment and consequently is a process that is still evolving in response to the demands placed upon it

  19. From sequential to parallel programming with patterns

    CERN Document Server

    CERN. Geneva

    2018-01-01

To increase both performance and efficiency, our programming models need to adapt to better exploit modern processors. The classic idioms and patterns for programming such as loops, branches or recursion are the pillars of almost every code and are well known among all programmers. These patterns all have in common that they are sequential in nature. Embracing parallel programming patterns, which allow us to program for multi- and many-core hardware in a natural way, greatly simplifies the task of designing a program that scales and performs on modern hardware, independently of the used programming language, and in a generic way.
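The shift this talk describes, from sequential idioms to parallel patterns, can be illustrated with the map pattern in Python (a sketch, independent of whatever language the talk uses):

```python
from concurrent.futures import ThreadPoolExecutor

def f(x):
    return 3 * x + 1

data = list(range(8))

# Sequential idiom: an explicit loop, inherently ordered.
out_loop = []
for x in data:
    out_loop.append(f(x))

# Pattern idiom: `map` names the intent; swapping in a parallel executor
# changes where the work runs, not what it means.
out_map = list(map(f, data))
with ThreadPoolExecutor(max_workers=4) as pool:
    out_par = list(pool.map(f, data))

print(out_loop == out_map == out_par)  # True
```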

  20. High-level Programming and Symbolic Reasoning on IoT Resource Constrained Devices

    Directory of Open Access Journals (Sweden)

Salvatore Gaglio

    2015-05-01

Full Text Available While the vision of the Internet of Things (IoT) is rather inspiring, its practical implementation remains challenging. Conventional programming approaches prove unsuitable to provide IoT resource-constrained devices with the distributed processing capabilities required to implement intelligent, autonomic, and self-organizing behaviors. In our previous work, we had already proposed an alternative programming methodology for such systems that is characterized by high-level programming and symbolic expression evaluation, and developed a lightweight middleware to support it. Our approach allows for interactive programming of deployed nodes, and it is based on the simple but effective paradigm of executable code exchange among nodes. In this paper, we show how our methodology can be used to provide IoT resource-constrained devices with reasoning abilities by implementing a Fuzzy Logic symbolic extension on deployed nodes at runtime.
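The executable-code-exchange paradigm this record mentions can be sketched in Python; the paper's middleware and symbolic language are not reproduced here, and `node_eval` and the expression grammar below are invented for illustration:

```python
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def node_eval(expr, env):
    # Evaluate a small arithmetic expression that a node "received",
    # restricted to a safe subset of Python's expression grammar.
    def ev(n):
        if isinstance(n, ast.BinOp):
            return OPS[type(n.op)](ev(n.left), ev(n.right))
        if isinstance(n, ast.Name):
            return env[n.id]        # node-local sensor readings
        if isinstance(n, ast.Constant):
            return n.value
        raise ValueError("expression form not allowed")
    return ev(ast.parse(expr, mode="eval").body)

# A sink node ships this expression to a sensor node, which evaluates it
# against its own local environment:
received = "(temp - 32) * 5 / 9"
print(node_eval(received, {"temp": 212}))  # 100.0
```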

  1. DOE high-level waste tank safety program Final report, Task 002

    International Nuclear Information System (INIS)

    1998-01-01

The overall objective of the work on Task 002 was to provide LANL with support to the DOE High-Level Waste Tank Safety program. The objective of the work was to develop safety documentation in support of the unsafe-tank mitigation activities at Hanford. The work included the development of a safety assessment and an environmental assessment. All tasks assigned under this Task Order were completed. Descriptions of the objectives of each task and the effort performed to complete each objective are provided. The two tasks were: Task 2.1--safety assessment for instrumentation insertion; and Task 2.2--environmental assessment

  2. Parallel Volunteer Learning during Youth Programs

    Science.gov (United States)

    Lesmeister, Marilyn K.; Green, Jeremy; Derby, Amy; Bothum, Candi

    2012-01-01

    Lack of time is a hindrance for volunteers to participate in educational opportunities, yet volunteer success in an organization is tied to the orientation and education they receive. Meeting diverse educational needs of volunteers can be a challenge for program managers. Scheduling a Volunteer Learning Track for chaperones that is parallel to a…

  3. Contributions to computational stereology and parallel programming

    DEFF Research Database (Denmark)

    Rasmusson, Allan

    rotator, even without the need for isotropic sections. To meet the need for computational power to perform image restoration of virtual tissue sections, parallel programming on GPUs has also been part of the project. This has led to a significant change in paradigm for a previously developed surgical...

  4. Program For Parallel Discrete-Event Simulation

    Science.gov (United States)

    Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.

    1991-01-01

    User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.
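    The discrete-event core that TWOS parallelizes can be conveyed with a toy sequential engine (a sketch in Python rather than TWOS's C, with invented names; it is sequential like TWSIM, not a Time Warp implementation): events carry a virtual timestamp, are processed in timestamp order, and handlers may schedule further events.

```python
import heapq

# Minimal sequential discrete-event engine in the spirit of TWSIM:
# events are processed in virtual-time order, and handlers schedule
# further events. Illustrative sketch, not the TWOS/TWSIM code.

class Simulator:
    def __init__(self):
        self.now = 0.0
        self.queue = []            # (virtual_time, seq, handler, payload)
        self.seq = 0               # tie-breaker for equal timestamps

    def schedule(self, delay, handler, payload=None):
        heapq.heappush(self.queue, (self.now + delay, self.seq, handler, payload))
        self.seq += 1

    def run(self):
        while self.queue:
            self.now, _, handler, payload = heapq.heappop(self.queue)
            handler(self, payload)

log = []
def ping(sim, n):
    log.append((sim.now, n))
    if n > 0:
        sim.schedule(1.5, ping, n - 1)   # next event in virtual time

sim = Simulator()
sim.schedule(0.0, ping, 2)
sim.run()
print(log)   # -> [(0.0, 2), (1.5, 1), (3.0, 0)]
```

    Time Warp's contribution is to run many such handlers optimistically in parallel, rolling a process back when an event arrives in its virtual past.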

  5. Automatic compilation from high-level biologically-oriented programming language to genetic regulatory networks.

    Science.gov (United States)

    Beal, Jacob; Lu, Ting; Weiss, Ron

    2011-01-01

    The field of synthetic biology promises to revolutionize our ability to engineer biological systems, providing important benefits for a variety of applications. Recent advances in DNA synthesis and automated DNA assembly technologies suggest that it is now possible to construct synthetic systems of significant complexity. However, while a variety of novel genetic devices and small engineered gene networks have been successfully demonstrated, the regulatory complexity of synthetic systems that have been reported recently has somewhat plateaued due to a variety of factors, including the complexity of biology itself and the lag in our ability to design and optimize sophisticated biological circuitry. To address the gap between DNA synthesis and circuit design capabilities, we present a platform that enables synthetic biologists to express desired behavior using a convenient high-level biologically-oriented programming language, Proto. The high level specification is compiled, using a regulatory motif based mechanism, to a gene network, optimized, and then converted to a computational simulation for numerical verification. Through several example programs we illustrate the automated process of biological system design with our platform, and show that our compiler optimizations can yield significant reductions in the number of genes (~ 50%) and latency of the optimized engineered gene networks. Our platform provides a convenient and accessible tool for the automated design of sophisticated synthetic biological systems, bridging an important gap between DNA synthesis and circuit design capabilities. Our platform is user-friendly and features biologically relevant compiler optimizations, providing an important foundation for the development of sophisticated biological systems.

  6. Achieving behavioral control with millisecond resolution in a high-level programming environment.

    Science.gov (United States)

    Asaad, Wael F; Eskandar, Emad N

    2008-08-30

    The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the 1 ms time-scale that is relevant for the alignment of behavioral and neural events.
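    The kind of measurement the authors argue for can be sketched as follows (illustrative function name, not the authors' code): time a nominally fixed sleep interval in a high-level environment and record the worst-case overshoot, one of the sources of temporal error on a non-real-time operating system.

```python
import time

# Illustrative jitter measurement: how far a high-level environment's
# sleep deviates from the requested interval. On a non-real-time OS
# this overshoot is one of the "sources of temporal error" that must
# be handled to reach 1 ms accuracy.

def sleep_jitter(interval_s, trials=20):
    errors = []
    for _ in range(trials):
        start = time.perf_counter()
        time.sleep(interval_s)
        errors.append(time.perf_counter() - start - interval_s)
    return max(errors)

worst = sleep_jitter(0.005)   # request 5 ms sleeps
print(f"worst overshoot: {worst * 1000:.3f} ms")
```

    Once the error distribution is known, a task can compensate, for example by busy-waiting over the final fraction of each interval instead of sleeping through it.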

  7. United States Program on Spent Nuclear Fuel and High-Level Radioactive Waste Management

    International Nuclear Information System (INIS)

    Stewart, L.

    2004-01-01

    The President signed the Congressional Joint Resolution on July 23, 2002, that designated the Yucca Mountain site for a proposed geologic repository to dispose of the nation's spent nuclear fuel (SNF) and high-level radioactive waste (HLW). The United States (U.S.) Department of Energy's (DOE) Office of Civilian Radioactive Waste Management (OCRWM) is currently focusing its efforts on submitting a license application to the U.S. Nuclear Regulatory Commission (NRC) in December 2004 for construction of the proposed repository. The legislative framework underpinning the U.S. repository program is the basis for its continuity and success. The repository development program has significantly benefited from international collaborations with other nations in the Americas

  8. Experiences with High-Level Programming Directives for Porting Applications to GPUs

    International Nuclear Information System (INIS)

    Ding, Wei; Chapman, Barbara; Sankaran, Ramanan; Graham, Richard L.

    2012-01-01

    HPC systems now exploit GPUs within their compute nodes to accelerate program performance. As a result, high-end application development has become extremely complex at the node level. In addition to restructuring the node code to exploit the cores and specialized devices, the programmer may need to choose a programming model such as OpenMP or CPU threads in conjunction with an accelerator programming model to share and manage the different node resources. This comes at a time when programmer productivity and the ability to produce portable code have been recognized as major concerns. In order to offset the high development cost of creating CUDA or OpenCL kernels, directives have been proposed for programming accelerator devices, but their implications are not well known. In this paper, we evaluate state-of-the-art accelerator directives to program several application kernels, explore transformations to achieve good performance, and examine the expressiveness and performance penalty of using high-level directives versus CUDA. We also compare our results to OpenMP implementations to understand the benefits of running the kernels in the accelerator versus CPU cores.

  9. International program to study subseabed disposal of high-level radioactive wastes

    International Nuclear Information System (INIS)

    Carlin, E.M.; Hinga, K.R.; Knauss, J.A.

    1984-01-01

    This report provides an overview of the international program to study seabed disposal of nuclear wastes. Its purpose is to inform legislators, other policy makers, and the general public as to the history of the program, technological requirements necessary for feasibility assessment, legal questions involved, international coordination of research, national policies, and research and development activities. Each of these major aspects of the program is presented in a separate section. The objective of seabed burial, similar to its continental counterparts, is to contain and to isolate the wastes. The subseabed option should not be confused with past practices of ocean dumping which have introduced wastes into ocean waters. Seabed disposal refers to the emplacement of solidified high-level radioactive waste (with or without reprocessing) in certain geologically stable sediments of the deep ocean floor. Specially designed surface ships would transport waste canisters from a port facility to the disposal site. Canisters would be buried from a few tens to a few hundreds of meters below the surface of ocean bottom sediments, and hence would not be in contact with the overlying ocean water. The concept is a multi-barrier approach for disposal. Barriers, including waste form, canister, and deep ocean sediments, will separate wastes from the ocean environment. High-level wastes (HLW) would be stabilized by conversion into a leach-resistant solid form such as glass. This solid would be placed inside a metallic canister or other type of package which represents a second barrier. The deep ocean sediments, a third barrier, are discussed in the Feasibility Assessment section. The waste form and canister would provide a barrier for several hundred years, and the sediments would be relied upon as a barrier for thousands of years. 62 references, 3 figures, 2 tables

  10. Four themes that underlie the high-level nuclear waste management program

    International Nuclear Information System (INIS)

    Sprecher, W.M.

    1989-01-01

    In 1982, after years of deliberation and in response to mounting pressures from environmental, industrial, and other groups, the US Congress enacted the Nuclear Waste Policy Act (NWPA) of 1982, which was signed into law by the President in January 1983. That legislation signified a major milestone in the nation's management of high-level nuclear waste, since it represented a consensus among the nation's lawmakers to tackle a problem that had evaded solution for decades. Implementation of the NWPA has proven to be exceedingly difficult, as attested by the discord generated by the US Department of Energy's (DOE's) geologic repository and monitored retrievable storage (MRS) facility siting activities. The vision that motivated the crafters of the 1982 act became blurred as opposition to the law increased. After many hearings that underscored the public's concern with the waste management program, the Congress enacted the Nuclear Waste Policy Amendments Act of 1987 (Amendments Act), which streamlined and focused the program, while establishing three independent bodies: the MRS Review Commission, the Nuclear Waste Technical Review Board, and the Office of the Nuclear Waste Negotiator. Yet, even as the program evolves, several themes characterizing the nation's effort to solve the waste management problem continue to prevail. The first of these themes has to do with social consciousness, and the others that follow deal with technical leadership, public involvement and risk perceptions, and program conservatism

  11. Programming massively parallel processors a hands-on approach

    CERN Document Server

    Kirk, David B

    2010-01-01

    Programming Massively Parallel Processors discusses basic concepts about parallel programming and GPU architecture. ""Massively parallel"" refers to the use of a large number of processors to perform a set of computations in a coordinated parallel way. The book details various techniques for constructing parallel programs. It also discusses the development process, performance level, floating-point format, parallel patterns, and dynamic parallelism. The book serves as a teaching guide where parallel programming is the main topic of the course. It builds on the basics of C programming for CUDA, a parallel programming environment that is supported on NVIDIA GPUs. Composed of 12 chapters, the book begins with basic information about the GPU as a parallel computer source. It also explains the main concepts of CUDA, data parallelism, and the importance of memory access efficiency using CUDA. The target audience of the book is graduate and undergraduate students from all science and engineering disciplines who ...
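    The book's canonical first example is a data-parallel vector addition kernel, where each CUDA thread computes one output element from its block and thread indices. The idiom can be mimicked in plain Python (a sketch only; the grid loop below runs sequentially, whereas on a GPU those iterations execute concurrently):

```python
# CUDA-style data parallelism, mimicked in plain Python: each "thread"
# computes one output element, indexed by block and thread ids, with a
# guard for threads past the array end. In CUDA C this would be
# C[i] = A[i] + B[i] with i = blockIdx.x * blockDim.x + threadIdx.x.

def vec_add_kernel(A, B, C, block_idx, block_dim, thread_idx):
    i = block_idx * block_dim + thread_idx
    if i < len(C):                     # boundary guard
        C[i] = A[i] + B[i]

def launch(kernel, grid_dim, block_dim, *args):
    # Emulated grid: on a GPU these nested iterations run concurrently.
    for block in range(grid_dim):
        for thread in range(block_dim):
            kernel(*args, block, block_dim, thread)

n = 10
A = list(range(n))
B = [10] * n
C = [0] * n
launch(vec_add_kernel, 3, 4, A, B, C)   # 3 blocks x 4 threads covers n=10
print(C)   # -> [10, 11, 12, ..., 19]
```

    The boundary guard and the block/thread index arithmetic are exactly the patterns the book drills in its early CUDA chapters.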

  12. Indian program on management of high level radioactive waste - emphasis on value recovery for societal applications

    International Nuclear Information System (INIS)

    Kaushik, C.P.; Tomar, Neelima Singh; Kumar, Amar; Wadhwa, S.; Diwan, Jyoti

    2017-01-01

    Nuclear Power Programme in India is based on a 'closed fuel cycle'. The closed fuel cycle involves reprocessing and recycling of Spent Nuclear Fuel (SNF) coming out of nuclear reactors. During reprocessing, uranium and plutonium, constituting the bulk of the SNF, are separated and subsequently recycled. The remaining small portion constitutes high level radioactive waste containing most of the fission products and minor actinides. A three-step strategy involving immobilization, interim storage followed by ultimate disposal has been adopted in India for management of High Level Waste (HLW). Borosilicate glass matrix has been identified for immobilization of HLW owing to optimal waste loading, adequate leach resistance and long term stability of the product. An interim storage facility is in operation for storage and surveillance of VWP. A comprehensive program based on screening of different materials like granite, argillite, clay with respect to sorption of different radionuclides is being pursued to identify suitable areas for disposal of the conditioned waste products. Separation of useful radionuclides like 137Cs, 90Sr, 90Y, 106Ru etc. and their utilization for societal applications is being practiced in India. (author)

  13. PSHED: a simplified approach to developing parallel programs

    International Nuclear Information System (INIS)

    Mahajan, S.M.; Ramesh, K.; Rajesh, K.; Somani, A.; Goel, M.

    1992-01-01

    This paper presents a simplified approach in the form of a tree-structured computational model for parallel application programs. An attempt is made to provide a standard user interface to execute programs on BARC Parallel Processing System (BPPS), a scalable distributed memory multiprocessor. The interface package called PSHED provides a basic framework for representing and executing parallel programs on different parallel architectures. The PSHED package incorporates concepts from a broad range of previous research in programming environments and parallel computations. (author). 6 refs
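    A tree-structured computational model of this kind can be pictured with a short sketch (hypothetical names; PSHED itself targets the distributed-memory BPPS machine, not a thread pool): internal nodes fan work out to child subtrees in parallel and combine their results.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative tree-structured computation: leaves are units of work,
# internal nodes submit their children to a pool and combine results.
# A toy stand-in for the idea, not PSHED's actual interface.

def run_tree(node, pool):
    kind, payload = node
    if kind == "leaf":
        return payload()                       # unit of computation
    futures = [pool.submit(run_tree, child, pool) for child in payload]
    return sum(f.result() for f in futures)    # combine child results

tree = ("node", [
    ("leaf", lambda: 1),
    ("node", [("leaf", lambda: 2), ("leaf", lambda: 3)]),
])

with ThreadPoolExecutor(max_workers=4) as pool:
    total = run_tree(tree, pool)
print(total)   # -> 6
```

    Mapping such a tree onto different parallel architectures, while keeping the user-facing representation fixed, is the portability idea behind a standard interface like PSHED's.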

  14. Nuclear waste. DOE's program to prepare high-level radioactive waste for final disposal

    International Nuclear Information System (INIS)

    Bannerman, Carl J.; Owens, Ronald M.; Dowd, Leonard L.; Herndobler, Christopher S.; Purvine, Nancy R.; Stenersen, Stanley G.

    1989-11-01

    years later than the schedule established in early 1984, and the cost could be about $1.1 billion, more than double the 1984 cost estimate. DOE has plans for immobilization facilities at the other two Sites, but unresolved issues could affect the reliability of current cost and schedule estimates; the Hanford facility, currently in the design phase, has an estimated immobilization completion date of 2008, but this date assumes that Hanford's defense mission nuclear processing activities will end in the mid 1990s and only the waste stored in Hanford's double-shell tanks will be immobilized; the INEL facility is currently in such an early planning phase that DOE has not yet selected the waste immobilization technology that it will use. The waste may be transformed into a glass-ceramic or other material instead of being vitrified. DOE expects to make this decision in 1993. Section 1 contains an overview of DOE's high-level waste immobilization program. Sections 2 through 5 contain more detailed information about each of the four projects

  15. High-level waste management research and development program at Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Blomeke, J.O.; Bond, W.D.

    1976-01-01

    Projections of wastes to be generated through the year 2000 portend a problem of impressive size and complexity but one which can be handled within the framework of current and planned investigative programs. Investigations of the technical feasibility of removing actinide elements from wastes to render the residuals more manageable in terms of hazards and storage requirements indicate that they can be removed from wastes by the minimally desired factors of 10^2 to 10^4; however, demonstrations and engineering assessments of chemical flowsheets have yet to be made. Natural salt formations are believed to offer the best prospects for disposal of high-level wastes; other promising geological formations are also being evaluated for their suitability for use in the disposal of wastes

  16. FPGA Implementation of Blue Whale Calls Classifier Using High-Level Programming Tool

    Directory of Open Access Journals (Sweden)

    Mohammed Bahoura

    2016-02-01

    Full Text Available In this paper, we propose a hardware-based architecture for automatic blue whale calls classification based on short-time Fourier transform and multilayer perceptron neural network. The proposed architecture is implemented on a field programmable gate array (FPGA) using Xilinx System Generator (XSG) and the Nexys-4 Artix-7 FPGA board. This high-level programming tool allows us to design, simulate and execute the compiled design in the Matlab/Simulink environment quickly and easily. Intermediate signals obtained at various steps of the proposed system are presented for typical blue whale calls. Classification performances based on the fixed-point XSG/FPGA implementation are compared to those obtained by the floating-point Matlab simulation, using a representative database of the blue whale calls.
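    The fixed-point versus floating-point comparison at the heart of that evaluation can be illustrated on a single neuron (a sketch with invented values, not the paper's network or tools): quantize inputs and weights to a fixed-point format and measure how far the sigmoid activation drifts from the float result.

```python
import math

# Illustrative fixed-point vs floating-point check for one perceptron
# neuron, mirroring the comparison between an FPGA fixed-point design
# and a floating-point reference model. Values are invented.

def quantize(x, frac_bits=8):
    """Round to a signed fixed-point value with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def neuron(inputs, weights, bias):
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))          # sigmoid activation

x = [0.37, -0.81, 0.55]
w = [0.42, 0.13, -0.77]
b = 0.05

y_float = neuron(x, w, b)
y_fixed = neuron([quantize(v) for v in x], [quantize(v) for v in w], quantize(b))
print(abs(y_float - y_fixed))   # small quantization error
```

    Choosing the word length is the usual trade: more fractional bits shrink this error but cost FPGA resources, which is why such designs are validated against a floating-point simulation.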

  17. Status of the United States' high-level nuclear waste disposal program

    International Nuclear Information System (INIS)

    Rusche, B.

    1985-01-01

    The Nuclear Waste Policy Act of 1982 is a remarkable piece of legislation in that there is general agreement on its key provisions. Nevertheless, this is a program intended to span more than a century, with some choices by Congress, states, Indian tribes and the nuclear power industry yet to be made. The crafters of the Act clearly recognized this. And further, the crafters recognized '... that ... state, Indian tribe and public participation in the planning and development of repositories is essential in order to promote public confidence in the safety of disposal of such waste and spent fuel ...' High-level radioactive waste and spent nuclear fuel have become major subjects of public concern, and appropriate precautions must be taken to ensure that such waste and spent fuel do not adversely affect the public health and safety and the environment for this or future generations

  18. Strategic program for deep geological disposal of high level radioactive waste in China

    International Nuclear Information System (INIS)

    Wang Ju

    2004-01-01

    A strategic program for deep geological disposal of high level radioactive waste in China is proposed in this paper. A '3-step technical strategy': site selection and site characterization-site specific underground research laboratory-final repository, is proposed for the development of China's high level radioactive waste repository. The activities related to site selection and site characterization for the repository can be combined with those for the underground research laboratory. The goal of the strategy is to build China's repository around 2040, while the activities can be divided into 4 phases: 1) site selection and site characterization; 2) site confirmation and construction of underground research laboratory, 3) in-situ experiment and disposal demonstration, and 4) construction of repository. The targets and tasks for each phase are proposed. The logistic relationship among the activities is discussed. It is pointed out that the site selection and site characterization provide the basis for the program, the fundamental study and underground research laboratory study are the key support, the performance assessment plays a guiding role, while the construction of a qualified repository is the final goal. The site selection can be divided into 3 stages: comparison among pre-selected areas, comparison among pre-selected sites and confirmation of the final site. According to this strategy, the final site for China's underground research laboratory and repository will be confirmed in 2015, where the construction of an underground laboratory will be started. In 2025 the underground laboratory will have been constructed, while in around 2040, the construction of a final repository is to be completed

  19. High-level waste program management: A ratepayers' and regulatory perspective

    International Nuclear Information System (INIS)

    Anderson, E.G.

    1986-01-01

    The nation's electric utility regulators have joined the effort to enhance the federal project to dispose of high-level nuclear waste. Because all financial support comes from ratepayers, the National Association of Regulatory Utility Commissioners (NARUC), through the mechanism of a subcommittee, seeks to investigate and monitor the federal program to provide to the Congress and the U.S. Department of Energy (DOE) the NARUC's unique expertise. Its views to enhance program management and improve cost control are its central contribution. While conveying no lack of confidence in the federal management, the NARUC is imparting its relevant experience derived from review of nuclear power plant construction and cost control. Recommendations are made for more cost-effective program direction and views on its management are given. Financial control, public input and cost responsibilities for disposal of defense and commercial wastes are separately identified. Needs for the DOE's heightened insight into and development of the monitored retrievable storage proposal to the Congress are described. Finally, with a warning that there exists a limit to ratepayer funding of this effort, the request is made for Congressional cost-control hearings and for expanded dialogue between the Department of Energy and financially responsible parties

  20. Parallel Evolution of High-Level Aminoglycoside Resistance in Escherichia coli Under Low and High Mutation Supply Rates

    Directory of Open Access Journals (Sweden)

    Claudia Ibacache-Quiroga

    2018-03-01

    Full Text Available Antibiotic resistance is a major concern in public health worldwide, thus there is much interest in characterizing the mutational pathways through which susceptible bacteria evolve resistance. Here we use experimental evolution to explore the mutational pathways toward aminoglycoside resistance, using gentamicin as a model, under low and high mutation supply rates. Our results show that both normo- and hypermutable strains of Escherichia coli are able to develop resistance to drug dosages > 1,000-fold higher than the minimal inhibitory concentration for their ancestors. Interestingly, such level of resistance was often associated with changes in susceptibility to other antibiotics, most prominently with increased resistance to fosfomycin. Whole-genome sequencing revealed that all resistant derivatives presented diverse mutations in five common genetic elements: fhuA, fusA and the atpIBEFHAGDC, cyoABCDE, and potABCD operons. Despite the large number of mutations acquired, hypermutable strains did not, apparently, pay a fitness cost. In contrast to recent studies, we found that the mutation supply rate mainly affected the speed (tempo) but not the pattern (mode) of evolution: both backgrounds acquired the mutations in the same order, although the hypermutator strain did it faster. This observation is compatible with the adaptive landscape for high-level gentamicin resistance being relatively smooth, with few local maxima, which might be a common feature among antibiotics for which resistance involves multiple loci.

  1. Environmental program overview for a high-level radioactive waste repository at Yucca Mountain

    International Nuclear Information System (INIS)

    1988-12-01

    The United States plans to begin operating the first repository for the permanent disposal of high-level nuclear waste early in the next century. In February 1983, the US Department of Energy (DOE) identified Yucca Mountain, in Nevada, as one of nine potentially acceptable sites for a repository. To determine its suitability, the DOE evaluated the Yucca Mountain site, along with eight other potentially acceptable sites, in accordance with the DOE's General Guidelines for the Recommendation of Sites for the Nuclear Waste Repositories. The purpose of the Environmental Program Overview (EPO) for the Yucca Mountain site is to provide an overview of the overall, comprehensive approach being used to satisfy the environmental requirements applicable to siting a repository at Yucca Mountain. The EPO states how the DOE will address the following environmental areas: aesthetics, air quality, cultural resources (archaeological and Native American components), noise, radiological studies, soils, terrestrial ecosystems, and water resources. This EPO describes the environmental program being developed for the siting of a repository at Yucca Mountain. 1 fig., 3 tabs

  2. Evolution of a minimal parallel programming model

    International Nuclear Information System (INIS)

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    2017-01-01

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
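    The self-scheduled model behind ADLB can be sketched with a shared work queue (a toy threaded stand-in, not ADLB's MPI-based interface): idle workers pull the next task whenever they finish one, so the load balances dynamically without any per-worker schedule.

```python
import queue
import threading

# Minimal self-scheduled task pool in the spirit of ADLB: workers take
# the next available task as soon as they are free. Illustrative only;
# ADLB itself distributes tasks across MPI ranks on supercomputers.

tasks = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        n = tasks.get()
        if n is None:              # poison pill: no more work
            break
        results.put(n * n)         # the "task" here just squares a number
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for n in range(10):
    tasks.put(n)
for _ in threads:                  # one poison pill per worker
    tasks.put(None)
for t in threads:
    t.join()

print(sorted(results.queue))       # squares of 0..9, completion order varies
```

    The minimal interface (put a task, get a task) is exactly the surrendered generality the authors describe: with so small an API, the runtime is free to scale the scheduling to very large machines.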

  3. Program of Hanford high-level waste retrieval task: a narrative description

    International Nuclear Information System (INIS)

    Wallskog, H.A.

    1976-12-01

    The objective of this task is to develop and demonstrate the equipment and methods for the retrieval of high-level radioactive wastes from underground storage tanks at Hanford. The approach will be to continue with engineering studies and the conceptual design in progress and follow on with the engineering design, construction, testing and demonstration of a Prototype Retrieval System. This system will consist of a large, mobile platform providing the support and control of an articulated arm used to remotely position waste recovery/removal tools. Other major components include the equipment needed to bring the material up to the platform for packaging and subsequent transport to a processing facility, and the television viewing and lighting subsystem. This prototype system will be functionally complete and will contain items such as a control center, tool change and maintenance/repair capability, etc. The program includes a complete non-radioactive demonstration of the system in a mock waste tank as well as a radioactive demonstration involving one or more waste tanks

  4. Design Automation Using Script Languages. High-Level CAD Templates in Non-Parametric Programs

    Science.gov (United States)

    Moreno, R.; Bazán, A. M.

    2017-10-01

    The main purpose of this work is to study the advantages offered by the application of traditional techniques of technical drawing in processes for automation of the design, with non-parametric CAD programs, provided with scripting languages. Given that an example drawing can be solved with traditional step-by-step detailed procedures, it is possible to do the same with CAD applications and to generalize it later, incorporating references. In today’s modern CAD applications, there are striking absences of solutions for building engineering: oblique projections (military and cavalier), 3D modelling of complex stairs, roofs, furniture, and so on. The use of geometric references (using variables in script languages) and their incorporation into high-level CAD templates allows the automation of processes. Instead of repeatedly creating similar designs or modifying their data, users should be able to use these templates to generate future variations of the same design. This paper presents the automation process of several complex drawing examples based on CAD script files aided with parametric geometry calculation tools. The proposed method allows us to solve complex geometry designs not currently incorporated in the current CAD applications and to subsequently create other new derivatives without user intervention. Automation in the generation of complex designs not only saves time but also increases the quality of the presentations and reduces the possibility of human errors.
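    The idea of a high-level template driven by geometric references can be sketched as a parametric script (a hypothetical function, not the authors' CAD code): one of the examples the paper names, a straight stair, reduces to a few parameters, so a design variant becomes a parameter change rather than a redraw.

```python
# Hypothetical script-language template for a straight stair section:
# the geometric references (rise, run, step count) are variables, so
# regenerating a variant needs no redrawing. Not the authors' code.

def stair_profile(total_rise, total_run, steps):
    """Return the 2D polyline (x, y) of a straight stair section."""
    riser = total_rise / steps
    tread = total_run / steps
    points = [(0.0, 0.0)]
    for _ in range(steps):
        x, y = points[-1]
        points.append((x, y + riser))          # vertical riser
        points.append((x + tread, y + riser))  # horizontal tread
    return points

# A new design is just a new parameter set:
profile = stair_profile(total_rise=2.7, total_run=3.6, steps=15)
print(len(profile), profile[-1])   # 31 points, ending near (3.6, 2.7)
```

    In a CAD scripting environment, the polyline would then be emitted as drawing entities, turning the step-by-step drafting procedure into a reusable template.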

  5. Development of parallel/serial program analyzing tool

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Nagao, Saichi; Takigawa, Yoshio; Kumakura, Toshimasa

    1999-03-01

    Japan Atomic Energy Research Institute has been developing 'KMtool', a parallel/serial program analyzing tool, in order to promote the parallelization of the science and engineering computation program. KMtool analyzes the performance of program written by FORTRAN77 and MPI, and it reduces the effort for parallelization. This paper describes development purpose, design, utilization and evaluation of KMtool. (author)

  6. United States high-level radioactive waste management program: Current status and plans

    International Nuclear Information System (INIS)

    Williams, J.

    1992-01-01

    The inventory of spent fuel in storage at reactor sites in the United States is approximately 20,000 metric tons heavy metal (MTHM). It is increasing at a rate of 1700 to 2100 MTHM per year. According to current projections, by the time the last license for the current generation of nuclear reactors expires, there will be an estimated total of 84,000 MTHM. No commercial reprocessing capacity exists or is planned in the US. Therefore, the continued storage of spent fuel is required. The majority of spent fuel remains in the spent fuel pools of the utilities that generated it. Three utilities are presently supplementing pool capacity with on-site dry storage technologies, and four others are planning dry storage. Commercial utilities are responsible for managing their spent fuel until the Federal waste management system, now under development, accepts spent fuel for storage and disposal. Federal legislation charges the Office of Civilian Radioactive Waste Management (OCRWM) within the US Department of Energy (DOE) with responsibility for developing a system to permanently dispose of spent fuel and high level radioactive waste in a manner that protects the health and safety of the public and the quality of the environment. We are developing a waste management system consisting of three components: a mined geologic repository, with a projected start date of 2010; a monitored retrievable storage facility (MRS), scheduled to begin waste acceptance in 1998; and a transportation system to support MRS and repository operations. This paper discusses the background and framework for the program, as well as the current status and plans for management of spent nuclear fuel at commercial utilities; the OCRWM's development of a permanent geologic repository, an MRS, and a transportation system; the OCRWM's safety approach; the OCRWM's program management initiatives; and the OCRWM's external relations activities

  7. A Tutorial on Parallel and Concurrent Programming in Haskell

    Science.gov (United States)

    Peyton Jones, Simon; Singh, Satnam

    This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs which allows programmers to use rich data types in data parallel programs which are automatically transformed into flat data parallel versions for efficient execution on multi-core processors.

  8. High-level programming for the control of a teleoperated mobile robot with line following

    International Nuclear Information System (INIS)

    Bernal U, E.

    2006-01-01

    The TRASMAR automated vehicle was built to transport radioactive materials. It has a kinematic structure similar to that of a tricycle, in which the front wheel provides traction and steering while both rear wheels rotate freely on a common axle. The electronic design is based on an MC68HC811 microcontroller from Motorola. Among the robot's features, it has an obstacle-perception system of three ultrasonic sensors located at the front of the vehicle to avoid collisions. The robot has two operating modes: the main mode is manual, commanded through an infrared remote control, but it can also move autonomously by means of a line-following technique using two reflective infrared sensors. Like any other electronic system, the mobile robot required improvements and upgrades. The modifications carried out were focused on the control stage: the upgrade incorporated the MC68HC912B32 microcontroller and replaced the assembly language typical of such systems with a high-level language for microcontrollers of this type, in this case FORTH. The robot's autonomous displacement by line following was likewise implemented in the program, using a fuzzy-logic controller. The work is organized as follows: Chapter 1 describes the robot's characteristics, the objectives set at the start of the project, and the justifications that motivated this upgrade. Chapters 2 to 5 present the theoretical background for the upgrade: the microcontroller modules used, the main characteristics of the FORTH language, the theory of fuzzy logic, and the design of the power stage that

  9. International program: Feasibility of the evacuation of high-level radioactive wastes under the ocean depths. (Seabed Program)

    International Nuclear Information System (INIS)

    Barbreau, A.

    1990-01-01

    The Seabed feasibility program is an international scientific research program on the feasibility of disposing of high-level radioactive wastes in the geological formations making up the floor of the great abyssal plains of the oceans. Decided in 1977, the program is aimed at answering the three following questions: 1) are there potentially favourable sites? 2) is the disposal of wastes possible? 3) does the operation present safety guarantees? First initiated by four countries (USA, UK, Japan and France), the program, sponsored by the OECD Nuclear Energy Agency, had grown by 1988 to ten countries plus the Commission of the European Communities. The techniques of waste disposal by means of drilling in consolidated sediments and penetrators in loose sediments have been studied. The penetrator technique has been the most thoroughly studied, especially through in situ experiments in the Atlantic Ocean. The various factors affecting safety have been studied, and the radiological consequences of a burial operation have been assessed through models. It has been concluded that such an operation could be carried out technically under quite satisfactory conditions [fr

  10. F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming

    Science.gov (United States)

    DiNucci, David C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Parallel programming is still being based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).

  11. Parallelization for first principles electronic state calculation program

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Oguchi, Tamio.

    1997-03-01

    In this report we study the parallelization of a first-principles electronic state calculation program. The target machines are the NEC SX-4 for shared-memory parallelization and the FUJITSU VPP300 for distributed-memory parallelization. The features of each parallel machine are surveyed, and parallelization methods suitable for each are proposed. It is shown that 1.60 times acceleration is achieved with 2-CPU parallelization on the SX-4 and 4.97 times acceleration is achieved with 12-PE parallelization on the VPP300. (author)
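    The reported speedups can be read through Amdahl's law. The report itself does not perform this analysis, so the serial fractions inferred below are only an illustration of what the numbers imply:

```python
def amdahl_serial_fraction(speedup, procs):
    # Amdahl's law: speedup = 1 / (f + (1 - f) / procs),
    # solved here for f, the serial (non-parallelizable) fraction.
    return (procs / speedup - 1) / (procs - 1)

# 1.60x on 2 CPUs implies f = 0.25; 4.97x on 12 PEs implies f of
# roughly 0.13 -- consistent with the VPP300 run scaling further.
f_sx4 = amdahl_serial_fraction(1.60, 2)
f_vpp = amdahl_serial_fraction(4.97, 12)
```

The two inferred fractions differ because the machines and codes differ; Amdahl's law is only a first-order model here.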

  12. The Michigan high-level radioactive waste program: Final technical progress report

    International Nuclear Information System (INIS)

    1987-01-01

    This report comprises the state of Michigan's final technical report on the location of a proposed high-level radioactive waste disposal site. Included are a list of Michigan's efforts to review the DOE proposal and a detailed report on the application of geographic information systems analysis techniques to the review process

  13. China's deep geological disposal program for high level radioactive waste, background and status 1998

    International Nuclear Information System (INIS)

    Ju Wang; Xu Guoqing; Guo Yonghai

    2001-01-01

    This paper presents the background of and progress made in China's program for the deep geological disposal of high-level radioactive waste, including site screening, site evaluation, studies on radionuclide migration, bentonite, and natural analogues, and performance assessment. The study of the Beishan area, a potential area for China's geological repository, is also presented in this paper. (author)

  14. Step by step parallel programming method for molecular dynamics code

    International Nuclear Information System (INIS)

    Orii, Shigeo; Ohta, Toshio

    1996-07-01

    Parallel programming of a numerical simulation program for molecular dynamics is carried out with a step-by-step programming technique using the two-phase method. As a result, within a certain range of computing parameters, parallel performance is obtained by using do-loop-level parallel programming, which decomposes the calculation according to the indices of do-loops across processors, on the vector parallel computer VPP500 and the scalar parallel computer Paragon. It is also found that VPP500 shows parallel performance over a wider range of computing parameters. The reason is that the time cost of the program parts that cannot be reduced by do-loop-level parallel programming can be reduced to a negligible level by vectorization; the time-consuming parts of the program are then concentrated in fewer parts that can be accelerated by do-loop-level parallel programming. This report shows the step-by-step parallel programming method and the parallel performance of the molecular dynamics code on VPP500 and Paragon. (author)
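    The do-loop-level decomposition described above can be sketched as follows (a hypothetical illustration, not the report's actual code): iterations of an outer loop are block-distributed across workers, and per-worker partial results are combined afterwards.

```python
def block_range(n, nproc, rank):
    # Indices [lo, hi) of the do-loop iterations owned by worker `rank`,
    # distributing any remainder one extra iteration to the lowest ranks.
    base, extra = divmod(n, nproc)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

def decomposed_sum(values, nproc):
    # Emulates the decomposition serially: each "processor" reduces its
    # own block, then the partial results are combined.
    partials = []
    for rank in range(nproc):
        lo, hi = block_range(len(values), nproc, rank)
        partials.append(sum(values[lo:hi]))
    return sum(partials)
```

On a real machine each `rank` would run on its own processor; the point of the sketch is only the index arithmetic of the loop decomposition.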

  15. Human factors programs for high-level radioactive waste handling systems

    International Nuclear Information System (INIS)

    Pond, D.J.

    1992-01-01

    Human Factors is the discipline concerned with the acquisition of knowledge about human capabilities and limitations, and the application of such knowledge to the design of systems. This paper discusses the range of human factors issues relevant to high-level radioactive waste (HLRW) management systems and, based on examples from other organizations, presents mechanisms through which to assure application of such expertise in the safe, efficient, and effective management and disposal of high-level waste. Additionally, specific attention is directed toward consideration of who might be classified as a human factors specialist, why human factors expertise is critical to the success of the HLRW management system, and determining when human factors specialists should become involved in the design and development process

  16. Human factors programs for high-level radioactive waste handling systems

    International Nuclear Information System (INIS)

    Pond, D.J.

    1992-04-01

    Human Factors is the discipline concerned with the acquisition of knowledge about human capabilities and limitations, and the application of such knowledge to the design of systems. This paper discusses the range of human factors issues relevant to high-level radioactive waste (HLRW) management systems and, based on examples from other organizations, presents mechanisms through which to assure application of such expertise in the safe, efficient, and effective management and disposal of high-level waste. Additionally, specific attention is directed toward consideration of who might be classified as a human factors specialist, why human factors expertise is critical to the success of the HLRW management system, and determining when human factors specialists should become involved in the design and development process

  17. Hanford High-Level Waste Vitrification Program at the Pacific Northwest National Laboratory: technology development - annotated bibliography

    International Nuclear Information System (INIS)

    Larson, D.E.

    1996-09-01

    This report provides a collection of annotated bibliographies for documents prepared under the Hanford High-Level Waste Vitrification (Plant) Program. The bibliographies are for documents from Fiscal Year 1983 through Fiscal Year 1995, and include work conducted at or under the direction of the Pacific Northwest National Laboratory. The bibliographies included focus on the technology developed over the specified time period for vitrifying Hanford pretreated high-level waste. The following subject areas are included: General Documentation; Program Documentation; High-Level Waste Characterization; Glass Formulation and Characterization; Feed Preparation; Radioactive Feed Preparation and Glass Properties Testing; Full-Scale Feed Preparation Testing; Equipment Materials Testing; Melter Performance Assessment and Evaluations; Liquid-Fed Ceramic Melter; Cold Crucible Melter; Stirred Melter; High-Temperature Melter; Melter Off-Gas Treatment; Vitrification Waste Treatment; Process, Product Control and Modeling; Analytical; and Canister Closure, Decontamination, and Handling

  18. Integrating the commercial and defense high level waste programs - A utility perspective

    International Nuclear Information System (INIS)

    Tomonto, J.R.

    1986-01-01

    The Nuclear Waste Policy Act of 1982 provided that disposal of high-level wastes resulting from defense activities be included in the authorized repository unless the President determined that separate facilities are required. President Reagan approved commingling of defense and civilian wastes on April 30, 1985. The impacts of this decision on the repository schedule, civilian spent fuel acceptance rates, and cost sharing are reviewed and recommendations for resolving these issues are presented

  19. High-Level Management of Communication Schedules in HPF-like Languages

    National Research Council Canada - National Science Library

    Benkner, Siegfried

    1997-01-01

    ..., providing the users with a high-level language interface for programming scalable parallel architectures and delegating to the compiler the task of producing an explicitly parallel message-passing program...

  20. Foreign programs for the storage of spent nuclear power plant fuels, high-level waste canisters and transuranic wastes

    International Nuclear Information System (INIS)

    Harmon, K.M.; Johnson, A.B. Jr.

    1984-04-01

    The various national programs for developing and applying technology for the interim storage of spent fuel, high-level radioactive waste, and TRU wastes are summarized. Primary emphasis of the report is on dry storage techniques for uranium dioxide fuels, but data are also provided concerning pool storage

  1. On the Performance of the Python Programming Language for Serial and Parallel Scientific Computations

    Directory of Open Access Journals (Sweden)

    Xing Cai

    2005-01-01

    Full Text Available. This article addresses the performance of scientific applications that use the Python programming language. First, we investigate several techniques for improving the computational efficiency of serial Python codes. Then, we discuss the basic programming techniques in Python for parallelizing serial scientific applications. It is shown that an efficient implementation of the array-related operations is essential for achieving good parallel performance, as for the serial case. Once the array-related operations are efficiently implemented, probably using a mixed-language implementation, good serial and parallel performance become achievable. This is confirmed by a set of numerical experiments. Python is also shown to be well suited for writing high-level parallel programs.
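    The article's central point, that array-related operations must run in compiled code, can be seen in miniature with only the standard library: Python's built-in `sum` runs its loop in C, while an interpreted element-by-element loop pays per-iteration overhead. NumPy generalizes the same idea to whole-array arithmetic. (This timing sketch is our illustration, not the article's benchmark.)

```python
import timeit

def sum_loop(xs):
    # Interpreted element-by-element loop: per-iteration bytecode overhead.
    total = 0
    for x in xs:
        total += x
    return total

xs = list(range(100_000))
t_loop = timeit.timeit(lambda: sum_loop(xs), number=20)
t_compiled = timeit.timeit(lambda: sum(xs), number=20)  # C-implemented loop
```

On a typical CPython build the compiled loop is several times faster, which is why mixed-language implementations of array operations pay off.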

  2. Three dimensional Burn-up program parallelization using socket programming

    International Nuclear Information System (INIS)

    Haliyati R, Evi; Su'ud, Zaki

    2002-01-01

    A computer parallelization scheme was built with the purpose of decreasing the execution time of a physics program. In this case, a multi-computer system was built and used to analyze the burn-up process of a nuclear reactor. The system was designed using a communication protocol among sockets, i.e. TCP/IP, and consists of one computer acting as a server and the rest as clients. The server has main control over all its clients. The server also divides the reactor core geometrically into n parts in accordance with the number of clients; each computer, including the server, has the task of conducting the burn-up analysis of 1/n of the total reactor core. This burn-up analysis was conducted simultaneously and in parallel by all computers, so a program execution time close to 1/n times that of one computer was achieved. An analysis was then carried out, which states that in order to calculate the density of atoms in a reactor of 91 cm x 91 cm x 116 cm, a parallel system of 2 computers has the highest efficiency

  3. Experimental and analytical study for demonstration program on shielding of casks for high-level wastes

    International Nuclear Information System (INIS)

    Ueki, K.; Nakazawa, M.; Hattori, S.; Ozaki, S.; Tamaki, H.; Kadotani, H.; Ishizuka, T.; Ishikawa, S.

    1993-01-01

    The following remarks were obtained from the experiment and from the DOT 3.5 and MCNP analyses of the gamma-ray and neutron dose equivalent rates in the cask of interest. 1. The cask has thinner neutron shielding around the trunnions. Significant neutron streaming around the trunnion parts was observed, which was also confirmed by the MCNP analysis for the 252 Cf source experiment. Accordingly, detailed neutron streaming calculations are required to evaluate the dose levels around the trunnions when loading the vitrified high-level wastes. 2. The room-scattered obstructive neutrons at the top and the bottom of the cask, mainly originating from the neutrons penetrating around the trunnions, are reduced significantly by placing a water tank at the top and a water layer at the bottom. Therefore, a more accurate experiment, especially for neutrons, is to be carried out in a future shielding experiment. However, because the water tank and the layer do not exist in the actual high-level waste transport cask, an experiment without the water tank and layer is also indispensable to demonstrate the transport conditions of the actual cask. 3. The gamma-ray and neutron dose equivalent rate distributions obtained from the DOT 3.5 and MCNP calculations, respectively, agreed closely with the measured values in the cask areas of interest. Accordingly, the DOT 3.5 code and the MCNP code with the NESX estimator can be employed not only for the shielding analysis of future experiments, but also for preparing a safety analysis report for high-level waste transport casks. (J.P.N.)

  4. Overall review strategy for the Nuclear Regulatory Commission's High-Level Waste Repository Program

    International Nuclear Information System (INIS)

    Johnson, R.L.

    1994-11-01

    The Overall Review Strategy gives general guidance to the Nuclear Regulatory Commission staff for conducting its license application and pre-license application reviews. These reviews are in support of the Commission's construction authorization decision for a geologic repository for the disposal of high-level radioactive waste. Objectives and strategies are defined that focus the staff's reviews on determining compliance with the requirements of 10 CFR Part 60. These strategies define how the staff prioritizes its reviews on those key technical uncertainties considered to be most important to repository performance. Strategies also give guidance for developing, in an integrated way, the License Application Review Plan together with supporting performance assessments, analyses, and research

  5. Automatic Generation of Optimized and Synthesizable Hardware Implementation from High-Level Dataflow Programs

    Directory of Open Access Journals (Sweden)

    Khaled Jerbi

    2012-01-01

    Full Text Available. In this paper, we introduce the Reconfigurable Video Coding (RVC) standard, based on the idea that video processing algorithms can be defined as a library of components that can be updated and standardized separately. The MPEG RVC framework aims at providing a unified high-level specification of current MPEG coding technologies using a dataflow language called CAL Actor Language (CAL). CAL is associated with a set of tools to design dataflow applications and to generate hardware and software implementations. Before this work, the existing CAL hardware compilers did not support high-level features of CAL. After presenting the main notions of the RVC standard, this paper introduces an automatic transformation process that analyses the non-compliant features and makes the required changes in the intermediate representation of the compiler while keeping the same behavior. Finally, the implementation results of the transformation on video and still-image decoders are summarized. We show that the obtained results can largely satisfy real-time constraints for an embedded design on FPGA, as we obtain a throughput of 73 FPS for the MPEG-4 decoder and 34 FPS for the coding and decoding process of the LAR coder using video of CIF image size. This work resolves the main limitation of hardware generation from CAL designs.

  6. NRC assessment of the high-level waste repository quality assurance program

    International Nuclear Information System (INIS)

    Kennedy, J.E.

    1987-01-01

    As part of its licensing responsibilities, the NRC is independently reviewing the DOE quality assurance program applied to the site characterization phase activities. Data collected and other information generated during this phase of the program will ultimately be used in a license application to demonstrate the suitability of one site for long-term isolation of waste. They must therefore fall under the quality assurance program to provide confidence in their adequacy. This NRC review consists of three main activities: development of staff guidance on quality assurance measures appropriate for site characterization activities; review of DOE QA plans and procedures; and audits and other reviews of the implementation of the program

  7. Programming parallel architectures - The BLAZE family of languages

    Science.gov (United States)

    Mehrotra, Piyush

    1989-01-01

    This paper gives an overview of the various approaches to programming multiprocessor architectures that are currently being explored. It is argued that two of these approaches, interactive programming environments and functional parallel languages, are particularly attractive, since they remove much of the burden of exploiting parallel architectures from the user. This paper also describes recent work in the design of parallel languages. Research on languages for both shared and nonshared memory multiprocessors is described.

  8. Introducing heterogeneity in Monte Carlo models for risk assessments of high-level nuclear waste. A parallel implementation of the MLCRYSTAL code

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, M.

    1996-09-01

    We have introduced heterogeneity to an existing model as a special feature and simultaneously extended the model from 1D to 3D. Briefly, the code generates stochastic fractures in a given geosphere. These fractures are connected in series to form one pathway for radionuclide transport from the repository to the biosphere. Rock heterogeneity is realized by simulating physical and chemical properties for each fracture, i.e. these properties vary along the transport pathway (which is an ensemble of all fractures serially connected). In this case, each Monte Carlo simulation involves a set of many thousands of realizations, one for each pathway. Each pathway can be formed by approx. 100 fractures. This means that for a Monte Carlo simulation of 1000 realizations, we need to perform a total of 100,000 simulations. Therefore the introduction of heterogeneity has increased the CPU demands by two orders of magnitude. To overcome the demand for CPU, the program, MLCRYSTAL, has been implemented in a parallel workstation environment using the MPI, Message Passing Interface, and later on ported to an IBM-SP2 parallel supercomputer. The program is presented here and a preliminary set of results is given with the conclusions that can be drawn. 3 refs, 12 figs.

  9. Introducing heterogeneity in Monte Carlo models for risk assessments of high-level nuclear waste. A parallel implementation of the MLCRYSTAL code

    International Nuclear Information System (INIS)

    Andersson, M.

    1996-09-01

    We have introduced heterogeneity to an existing model as a special feature and simultaneously extended the model from 1D to 3D. Briefly, the code generates stochastic fractures in a given geosphere. These fractures are connected in series to form one pathway for radionuclide transport from the repository to the biosphere. Rock heterogeneity is realized by simulating physical and chemical properties for each fracture, i.e. these properties vary along the transport pathway (which is an ensemble of all fractures serially connected). In this case, each Monte Carlo simulation involves a set of many thousands of realizations, one for each pathway. Each pathway can be formed by approx. 100 fractures. This means that for a Monte Carlo simulation of 1000 realizations, we need to perform a total of 100,000 simulations. Therefore the introduction of heterogeneity has increased the CPU demands by two orders of magnitude. To overcome the demand for CPU, the program, MLCRYSTAL, has been implemented in a parallel workstation environment using the MPI, Message Passing Interface, and later on ported to an IBM-SP2 parallel supercomputer. The program is presented here and a preliminary set of results is given with the conclusions that can be drawn. 3 refs, 12 figs
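    The scaling problem described above — on the order of 1000 realizations of roughly 100 fractures each — maps naturally onto a message-passing layout. Below is a schematic of the usual MPI work division (hypothetical; MLCRYSTAL's actual decomposition and fracture physics are not reproduced, and `pathway_travel_time` is a placeholder):

```python
import random

def pathway_travel_time(seed, n_fractures=100):
    # Placeholder for one realization: a pathway of ~100 serially connected
    # fractures, each with randomly sampled properties.
    rng = random.Random(seed)
    return sum(1.0 + rng.random() for _ in range(n_fractures))

def realizations_for_rank(n_total, size, rank):
    # Cyclic distribution, as an MPI code would typically assign work:
    # rank r computes realizations r, r+size, r+2*size, ...
    return range(rank, n_total, size)

def monte_carlo(n_total=1000, size=8):
    # Emulate the ranks serially; under MPI, each rank's loop body would
    # run concurrently on its own processor and results would be gathered.
    results = {}
    for rank in range(size):
        for i in realizations_for_rank(n_total, size, rank):
            results[i] = pathway_travel_time(seed=i)
    return results
```

Because realizations are independent, this workload is embarrassingly parallel, which is why the two-orders-of-magnitude CPU increase could be absorbed by a workstation cluster and later an IBM-SP2.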

  10. Parallel programming practical aspects, models and current limitations

    CERN Document Server

    Tarkov, Mikhail S

    2014-01-01

    Parallel programming is designed for the use of parallel computer systems for solving time-consuming problems that cannot be solved on a sequential computer in a reasonable time. These problems can be divided into two classes: 1. Processing large data arrays (including processing images and signals in real time)2. Simulation of complex physical processes and chemical reactions For each of these classes, prospective methods are designed for solving problems. For data processing, one of the most promising technologies is the use of artificial neural networks. Particles-in-cell method and cellular automata are very useful for simulation. Problems of scalability of parallel algorithms and the transfer of existing parallel programs to future parallel computers are very acute now. An important task is to optimize the use of the equipment (including the CPU cache) of parallel computers. Along with parallelizing information processing, it is essential to ensure the processing reliability by the relevant organization ...

  11. On the Automatic Parallelization of Sparse and Irregular Fortran Programs

    Directory of Open Access Journals (Sweden)

    Yuan Lin

    1999-01-01

    Full Text Available. Automatic parallelization is usually believed to be less effective at exploiting implicit parallelism in sparse/irregular programs than in their dense/regular counterparts. However, not much is really known because there have been few research reports on this topic. In this work, we have studied the possibility of using an automatic parallelizing compiler to detect the parallelism in sparse/irregular programs. The study with a collection of sparse/irregular programs led us to some common loop patterns. Based on these patterns new techniques were derived that produced good speedups when manually applied to our benchmark codes. More importantly, these parallelization methods can be implemented in a parallelizing compiler and can be applied automatically.
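    A typical loop pattern from this problem class is the irregular (subscripted-subscript) reduction, and a standard compiler transformation for it is privatization. The article's actual patterns are not reproduced here; the sketch below only illustrates the general technique:

```python
def irregular_reduction(values, index, nbins):
    # Serial form: histogram-style updates scattered through an index
    # array, i.e. bins[index[j]] += values[j]. The compiler cannot prove
    # iterations independent because index[] is unknown at compile time.
    bins = [0.0] * nbins
    for v, i in zip(values, index):
        bins[i] += v
    return bins

def privatized_reduction(values, index, nbins, nthreads=4):
    # Privatization transformation: each "thread" accumulates into its own
    # private copy of bins, then the copies are combined in a final
    # reduction step, eliminating the cross-iteration dependence.
    private = [irregular_reduction(values[t::nthreads], index[t::nthreads], nbins)
               for t in range(nthreads)]
    return [sum(col) for col in zip(*private)]
```

The transformed version trades O(nthreads * nbins) extra memory for independent loop bodies, which is exactly the kind of pattern-driven rewrite such compilers automate.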

  12. Survey on present status and trend of parallel programming environments

    International Nuclear Information System (INIS)

    Takemiya, Hiroshi; Higuchi, Kenji; Honma, Ichiro; Ohta, Hirofumi; Kawasaki, Takuji; Imamura, Toshiyuki; Koide, Hiroshi; Akimoto, Masayuki.

    1997-03-01

    This report intends to provide useful information on software tools for parallel programming through a survey of the parallel programming environments of the following six parallel computers installed at the Japan Atomic Energy Research Institute (JAERI): Fujitsu VPP300/500, NEC SX-4, Hitachi SR2201, Cray T94, IBM SP, and Intel Paragon. In addition, the present status of R&D on parallel software (parallel languages, compilers, debuggers, performance evaluation tools, and integrated tools) is reported. This survey has been made as a part of our project of developing basic software for a parallel programming environment, which is designed on the concept of STA (Seamless Thinking Aid to programmers). (author)

  13. The BLAZE language - A parallel language for scientific programming

    Science.gov (United States)

    Mehrotra, Piyush; Van Rosendale, John

    1987-01-01

    A Pascal-like scientific programming language, BLAZE, is described. BLAZE contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus BLAZE should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of BLAZE is portability across a broad range of parallel architectures. The multiple levels of parallelism present in BLAZE code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of BLAZE are described and it is shown how this language would be used in typical scientific programming.

  14. The BLAZE language: A parallel language for scientific programming

    Science.gov (United States)

    Mehrotra, P.; Vanrosendale, J.

    1985-01-01

    A Pascal-like scientific programming language, Blaze, is described. Blaze contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus Blaze should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of Blaze is portability across a broad range of parallel architectures. The multiple levels of parallelism present in Blaze code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of Blaze are described, and it is shown how this language would be used in typical scientific programming.
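    Blaze's forall loops and accumulation operators can be loosely rendered in Python (illustrative only; this is not Blaze syntax): a forall expresses independent per-element work a compiler may parallelize, and an accumulation operator folds an array with an associative operation, which parallelizes as a tree reduction.

```python
from functools import reduce
import operator

a = [1, 2, 3, 4]

# forall i: each element is computed independently of the others,
# the fine-grained parallelism Blaze's forall loop exposes.
b = [x * 2 for x in a]

# APL-style plus-accumulation (+/ a): associative, so a compiler is
# free to evaluate it as a balanced reduction tree.
total = reduce(operator.add, a, 0)
```

The point of both constructs is that the programmer states *what* is computed while the compiler chooses *how much* parallelism to extract for the target machine.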

  15. Professional Parallel Programming with C# Master Parallel Extensions with NET 4

    CERN Document Server

    Hillar, Gastón

    2010-01-01

    Expert guidance for those programming today's dual-core processor PCs. As PCs move from one or two processor cores to eight and more, there is an urgent need for programmers to master concurrent programming. This book dives deep into the latest technologies available to programmers for creating professional parallel applications using C#, .NET 4, and Visual Studio 2010. The book covers task-based programming, coordination data structures, PLINQ, thread pools, the asynchronous programming model, and more. It also teaches other parallel programming techniques, such as SIMD and vectorization.

  16. The ramifications of a delay in the national high-level waste repository program

    International Nuclear Information System (INIS)

    Vance, S.A.

    1988-05-01

    This thesis examines the ramifications to the nuclear power industry if a national high-level waste repository is not operational by 1998 as mandated in the Nuclear Waste Policy Act. The principal effect of a delay examined here is the potential shortage of spent fuel storage. In order to assess this impact, a computer model of a nuclear utility was developed. Data for 107 US reactors was then entered into the model to assess the impact for individual facilities. This model estimates that a delay to the year 2003 will cost industry between $21.4 million and $35.8 million in 1988 dollars. Similarly, a delay to the year 2010 is estimated to have an impact of between $85.4 million and $142.4 million. Four other potential effects of a delay on industry are also examined: the potential inadequacy of the Nuclear Waste Fund; an increased difficulty in obtaining licenses from the Nuclear Regulatory Commission; increased friction between industry and the Department of Energy; and a decline in public acceptance of nuclear power. This thesis also presents a framework for developing a policy to deal with the potential effects of a delay. An argument is made for a policy which includes anticipation, participation, and education. 15 refs., 6 figs., 3 tabs

  17. Abs: a high-level modeling language for cloud-aware programming

    NARCIS (Netherlands)

    N. Bezirgiannis (Nikolaos); F.S. de Boer (Frank)

    2016-01-01

    Cloud technology has become an invaluable tool to the IT business, because of its attractive economic model. Yet, from the programmers’ perspective, the development of cloud applications remains a major challenge. In this paper we introduce a programming language that allows Cloud

  18. Hanford Waste Vitrification Plant Quality Assurance Program description for defense high-level waste form development and qualification

    International Nuclear Information System (INIS)

    Hand, R.L.

    1992-01-01

    This document describes the quality assurance (QA) program of the Hanford Waste Vitrification Plant (HWVP) Project. The purpose of the QA program is to control project activities in such a manner as to achieve the mission of the HWVP Project in a safe and reliable manner. A major aspect of the HWVP Project QA program is the control of activities that relate to high-level waste (HLW) form development and qualification. This document describes the program and planned actions the Westinghouse Hanford Company (Westinghouse Hanford) will implement to demonstrate and ensure that the HWVP Project meets the US Department of Energy (DOE) and ASME regulations. The actions for meeting the requirements of the Waste Acceptance Preliminary Specifications (WAPS) will be implemented under the HWVP product qualification program with the objective of ensuring that the HWVP and its processes comply with the WAPS established by the federal repository

  19. A Model for Speedup of Parallel Programs

    Science.gov (United States)

    1997-01-01

    Sanjeev K. Setia. The interaction between memory allocation and adaptive partitioning in message-passing multicomputers. In IPPS '95 Workshop on Job Scheduling Strategies for Parallel Processing, pages 89–99, 1995. [15] Sanjeev K. Setia and Satish K. Tripathi. A comparative analysis of static
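The record's abstract survives only as citation fragments, but its topic, modeling the speedup of parallel programs, is classically captured by Amdahl's law; a minimal sketch (the 5% serial fraction is an assumed parameter, not from the source):

```python
def amdahl_speedup(serial_fraction, processors):
    """Amdahl's law: speedup is bounded by the inherently serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# With 5% serial work, 16 processors deliver far less than a 16x speedup.
print(round(amdahl_speedup(0.05, 16), 2))  # 9.14
```

Models such as Setia's refine this bound with memory-allocation and scheduling effects, but the serial-fraction ceiling remains the starting point.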

  20. RPython high-level synthesis

    Science.gov (United States)

    Cieszewski, Radoslaw; Linczuk, Maciej

    2016-09-01

    The development of FPGA technology and the increasing complexity of applications in recent decades have forced compilers to move to higher abstraction levels. A High-Level Synthesis (HLS) compiler interprets an algorithmic description of a desired behavior written in a High-Level Language (HLL) and translates it to a Hardware Description Language (HDL). This paper presents an RPython-based HLS compiler. The compiler takes configuration parameters and maps an RPython program to VHDL; the VHDL code can then be used to program FPGA chips. Compared with other technologies, FPGAs have the potential to achieve far greater performance than software, because they omit the fetch-decode-execute cycle of general-purpose processors and allow more parallel computation, exploited by utilizing many resources at the same time. Creating parallel algorithms for FPGAs in pure HDL is difficult and time consuming; implementation time can be greatly reduced with a High-Level Synthesis compiler. This article describes design methodologies and tools, the implementation, and first results of the VHDL backend created for the RPython compiler.

  1. An environment for parallel structuring of Fortran programs

    International Nuclear Information System (INIS)

    Sridharan, K.; McShea, M.; Denton, C.; Eventoff, B.; Browne, J.C.; Newton, P.; Ellis, M.; Grossbard, D.; Wise, T.; Clemmer, D.

    1990-01-01

    The paper describes and illustrates an environment for interactive support of the detection and implementation of macro-level parallelism in Fortran programs. The approach couples algorithms for dependence analysis with both innovative techniques for complexity management and capabilities for the measurement and analysis of the parallel computation structures generated through use of the environment. The resulting environment is complementary to the more common approach of seeking local parallelism by loop unrolling, either by an automatic compiler or manually. (orig.)

  2. Programming parallel architectures: The BLAZE family of languages

    Science.gov (United States)

    Mehrotra, Piyush

    1988-01-01

    Programming multiprocessor architectures is a critical research issue. An overview is given of the various approaches to programming these architectures that are currently being explored. It is argued that two of these approaches, interactive programming environments and functional parallel languages, are particularly attractive since they remove much of the burden of exploiting parallel architectures from the user. Also described is recent work by the author in the design of parallel languages. Research on languages for both shared and nonshared memory multiprocessors is described, as well as the relations of this work to other current language research projects.

  3. NRC high-level radioactive waste program. Annual progress report: Fiscal Year 1996

    International Nuclear Information System (INIS)

    Sagar, B.

    1997-01-01

    This annual status report for fiscal year 1996 documents technical work performed on ten key technical issues (KTI) that are most important to performance of the proposed geologic repository at Yucca Mountain. This report has been prepared jointly by the staff of the Nuclear Regulatory Commission (NRC) Division of Waste Management and the Center for Nuclear Waste Regulatory Analyses. The programmatic aspects of restructuring the NRC repository program in terms of KTIs is discussed and a brief summary of work accomplished is provided. The other ten chapters provide a comprehensive summary of the work in each KTI. Discussions on probability of future volcanic activity and its consequences, impacts of structural deformation and seismicity, the nature of the near-field environment and its effects on container life and source term, flow and transport including effects of thermal loading, aspects of repository design, estimates of system performance, and activities related to the U.S. Environmental Protection Agency standard are provided

  4. Nuclear-waste-package program for high-level isolation in Nevada tuff

    International Nuclear Information System (INIS)

    Rothman, A.J.

    1982-01-01

    The objective of the waste package program is to insure that a package is designed suitable for a repository in tuff that meets performance requirements of the NRC. In brief, the current (draft) regulation requires that the radionuclides be contained in the engineered system for 1000 years, and that, thereafter, no more than one part in 10⁵ of the nuclides per year leave the boundary of the system. Studies completed as of this writing are thermal modeling of waste packages in a tuff repository and analysis of sodium bentonite as a potential backfill material. Both studies will be presented. Thermal calculations coupled with analysis of the geochemical literature on bentonite indicate that extensive chemical and physical alteration of bentonite would result at the high power densities proposed (ca. 2 kW/package and an areal density of 25 W/m²), in part due to compacted bentonite's relatively low thermal conductivity when dehydrated (approx. 0.6 ± 0.2 W/m·°C). Because our groundwater contains K⁺, an upper hydrothermal temperature limit appears to be 120 to 150 °C. At much lower power densities (less than 1 kW per package and an areal density of 12 W/m²), bentonite may be suitable

  5. Hanford Waste Vitrification Plant Quality Assurance Program description for high-level waste form development and qualification

    International Nuclear Information System (INIS)

    1993-08-01

    The Hanford Waste Vitrification Plant Project has been established to convert the high-level radioactive waste associated with nuclear defense production at the Hanford Site into a waste form suitable for disposal in a deep geologic repository. The Hanford Waste Vitrification Plant will mix processed radioactive waste with borosilicate material, then heat the mixture to its melting point (vitrification) to form a glass-like substance that traps the radionuclides in the glass matrix upon cooling. The Hanford Waste Vitrification Plant Quality Assurance Program has been established to support the mission of the Hanford Waste Vitrification Plant. This Quality Assurance Program Description has been written to document the Hanford Waste Vitrification Plant Quality Assurance Program

  6. Integrated Task And Data Parallel Programming: Language Design

    Science.gov (United States)

    Grimshaw, Andrew S.; West, Emily A.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object-oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single- and multi-paradigm parallel applications. 1995 Research Accomplishments: In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda: Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program.
Additional 1995 Activities: During the fall I collaborated
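The combination the abstract describes can be illustrated outside Legion; a minimal Python sketch (names and workloads are invented) in which task-parallel workers each execute an internally data-parallel operation:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical data-parallel object: applies one operation across a whole collection.
def data_parallel_sum_of_squares(values):
    return sum(v * v for v in values)

# Task parallelism: independent datasets are processed as separate tasks,
# each of which is internally data parallel.
datasets = [[1, 2, 3], [4, 5], [6]]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(data_parallel_sum_of_squares, datasets))

print(results)  # [14, 41, 36]
```

The nesting mirrors the paper's goal: task-parallel objects acting as managers of data-parallel ones.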

  7. A Programming Environment for Parallel Vision Algorithms

    Science.gov (United States)

    1990-04-11

    industrial arm on the market, while the unique head was designed by Rochester's Computer Science and Mechanical Engineering Departments. 4.1 Introduction...R. Constraining-Unification and the Programming Language Unicorn. In Logic Programming, Functions, Relations, and Equations, DeGroot and Lindstrom

  8. Characterizing and Mitigating Work Time Inflation in Task Parallel Programs

    Directory of Open Access Journals (Sweden)

    Stephen L. Olivier

    2013-01-01

    Full Text Available Task parallelism raises the level of abstraction in shared memory parallel programming to simplify the development of complex applications. However, task parallel applications can exhibit poor performance due to thread idleness, scheduling overheads, and work time inflation – additional time spent by threads in a multithreaded computation beyond the time required to perform the same work in a sequential computation. We identify the contributions of each factor to lost efficiency in various task parallel OpenMP applications and diagnose the causes of work time inflation in those applications. Increased data access latency can cause significant work time inflation in NUMA systems. Our locality framework for task parallel OpenMP programs mitigates this cause of work time inflation. Our extensions to the Qthreads library demonstrate that locality-aware scheduling can improve performance up to 3X compared to the Intel OpenMP task scheduler.
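The inflation metric the abstract defines can be computed directly from per-thread timings; a minimal sketch (the timing figures are invented, not from the paper):

```python
def work_time_inflation(thread_work_times, sequential_time):
    """Extra time spent across all threads beyond the time the same
    work takes in a sequential computation."""
    return sum(thread_work_times) - sequential_time

# Four threads each spend 3.0s of work time on a job that takes 10.0s
# sequentially: 12.0s of total thread work means 2.0s of inflation.
print(work_time_inflation([3.0, 3.0, 3.0, 3.0], 10.0))  # 2.0
```

Note this measures only inflated work time; thread idleness and scheduling overhead are accounted separately in the paper's decomposition.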

  9. Declarative Parallel Programming in Spreadsheet End-User Development

    DEFF Research Database (Denmark)

    Biermann, Florian

    2016-01-01

    Spreadsheets are first-order functional languages and are widely used in research and industry as a tool to conveniently perform all kinds of computations. Because cells on a spreadsheet are immutable, there are possibilities for implicit parallelization of spreadsheet computations. In this literature study, we provide an overview of the publications on spreadsheet end-user programming and declarative array programming to inform further research on parallel programming in spreadsheets. Our results show that there is a clear overlap between spreadsheet programming and array programming, and that results from functional array programming can be applied directly to a spreadsheet model of computations.
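Immutable cells make a spreadsheet a pure dataflow graph, so cells with no mutual dependencies can be evaluated concurrently; a minimal sketch (the cell layout and formulas are invented):

```python
from concurrent.futures import ThreadPoolExecutor

# A toy spreadsheet: each cell is a function of already-computed cell values.
formulas = {
    "A1": lambda cells: 2,
    "A2": lambda cells: 3,
    "B1": lambda cells: cells["A1"] + cells["A2"],  # depends on A1, A2
    "B2": lambda cells: cells["A1"] * cells["A2"],  # depends on A1, A2
}
levels = [["A1", "A2"], ["B1", "B2"]]  # cells within one level are independent

cells = {}
with ThreadPoolExecutor() as pool:
    for level in levels:
        # All cells in a level may be evaluated in parallel; levels run in order.
        for name, value in zip(level, pool.map(lambda n: formulas[n](cells), level)):
            cells[name] = value

print(cells["B1"], cells["B2"])  # 5 6
```

Immutability is what makes this safe: no formula can observe a cell mid-update.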

  10. Parallel adaptation of a vectorised quantumchemical program system

    International Nuclear Information System (INIS)

    Van Corler, L.C.H.; Van Lenthe, J.H.

    1987-01-01

    Supercomputers, like the CRAY 1 or the Cyber 205, have had, and still have, a marked influence on Quantum Chemistry. Vectorization has led to a considerable increase in the performance of Quantum Chemistry programs. However, clock-cycle times more than a factor of 10 smaller than those of the present supercomputers are not to be expected. Therefore future supercomputers will have to depend on parallel structures. Recently, the first examples of such supercomputers have been installed. To be prepared for this new generation of (parallel) supercomputers, one should consider the concepts one wants to use and the kind of problems one will encounter during implementation of existing vectorized programs on those parallel systems. The authors implemented four important parts of a large quantum-chemical program system (ATMOL), i.e. integrals, SCF, 4-index and Direct-CI, in the parallel environment at ECSEC (Rome, Italy). This system offers simulated parallelism on the host computer (IBM 4381) and real parallelism on at most 10 attached processors (FPS-164). Quantum-chemical programs usually handle large amounts of data and very large, often sparse matrices. The transfer of that many data can cause problems concerning communication and overhead, in view of which shared memory and shared disks must be considered. The strategy and the tools that were used to parallelize the programs are shown. Also, some examples are presented to illustrate effectiveness and performance of the system in Rome for these types of calculations

  11. How to Shape a Successful Repository Program: Staged Development of Geologic Repositories for High-Level Waste

    International Nuclear Information System (INIS)

    Isaacs, T.

    2004-01-01

    Programs to manage and ultimately dispose of high-level radioactive wastes are unique from scientific and technological as well as socio-political aspects. From a scientific and technological perspective, high-level radioactive wastes remain potentially hazardous for geological time periods--many millennia--and scientific and technological programs must be put in place that result in a system that provides high confidence that the wastes will be isolated from the accessible environment for these many thousands of years. Of course, ''proof'' in the classical sense is not possible at the outset, since the performance of the system can only be known with assurance, if ever, after the waste has been emplaced for those geological time periods. Adding to this challenge, many uncertainties exist in both the natural and engineered systems that are intended to isolate the wastes, and some of the uncertainties will remain regardless of the time and expense in attempting to characterize the system and assess its performance

  12. How to Shape a Successful Repository Program: Staged Development of Geologic Repositories for High-Level Waste

    Energy Technology Data Exchange (ETDEWEB)

    Isaacs, T.

    2004-10-03

    Programs to manage and ultimately dispose of high-level radioactive wastes are unique from scientific and technological as well as socio-political aspects. From a scientific and technological perspective, high-level radioactive wastes remain potentially hazardous for geological time periods--many millennia--and scientific and technological programs must be put in place that result in a system that provides high confidence that the wastes will be isolated from the accessible environment for these many thousands of years. Of course, ''proof'' in the classical sense is not possible at the outset, since the performance of the system can only be known with assurance, if ever, after the waste has been emplaced for those geological time periods. Adding to this challenge, many uncertainties exist in both the natural and engineered systems that are intended to isolate the wastes, and some of the uncertainties will remain regardless of the time and expense in attempting to characterize the system and assess its performance.

  13. Owlready: Ontology-oriented programming in Python with automatic classification and high level constructs for biomedical ontologies.

    Science.gov (United States)

    Lamy, Jean-Baptiste

    2017-07-01

    Ontologies are widely used in the biomedical domain. While many tools exist for the edition, alignment, or evaluation of ontologies, few solutions have been proposed for ontology programming interfaces, i.e. for accessing and modifying an ontology within a programming language. Existing query languages (such as SPARQL) and APIs (such as OWLAPI) are not as easy to use as object programming languages. Moreover, they provide few solutions to the difficulties encountered with biomedical ontologies. Our objective was to design a tool for easily accessing the entities of an OWL ontology, with high-level constructs that help with biomedical ontologies. From our experience with medical ontologies, we identified two difficulties: (1) many entities are represented by classes (rather than individuals), but existing tools do not permit manipulating classes as easily as individuals; (2) ontologies rely on the open-world assumption, whereas medical reasoning must consider only evidence-based medical knowledge as true. We designed a Python module for ontology-oriented programming. It allows access to the entities of an OWL ontology as if they were objects in the programming language. We propose a simple high-level syntax for managing classes and the associated "role-filler" constraints. We also propose an algorithm for performing local closed-world reasoning in simple situations. We developed Owlready, a Python module for high-level access to OWL ontologies. The paper describes the architecture and the syntax of version 2 of the module. It details how we integrated the OWL ontology model with the Python object model. The paper provides examples based on the Gene Ontology (GO). We also demonstrate the interest of Owlready in a use case focused on the automatic comparison of the contraindications of several drugs. This use case illustrates the use of the specific syntax proposed for manipulating classes and for performing local closed-world reasoning. Owlready has been successfully
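Owlready itself lives in the `owlready2` package, but the underlying idea, treating classes as manipulable runtime objects, can be illustrated with plain Python (a generic sketch of the style with invented class names, not Owlready's actual API):

```python
# In Python, classes are themselves objects, which is what makes
# ontology-oriented programming natural: classes can be created,
# inspected, and specialized at runtime, much like individuals.
Drug = type("Drug", (object,), {})
Antibiotic = type("Antibiotic", (Drug,), {"contraindication": "penicillin allergy"})

# The subclass relation plays the role of an ontology is-a link,
# and class attributes stand in for "role-filler" constraints.
print(issubclass(Antibiotic, Drug))  # True
print(Antibiotic.contraindication)   # penicillin allergy
```

Owlready maps OWL classes onto exactly this kind of Python class object, which is why ontology entities can be navigated with ordinary attribute access.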

  14. Six Sigma Evaluation of the High Level Waste Tank Farm Corrosion Control Program at the Savannah River Site

    International Nuclear Information System (INIS)

    Hill, P. J.

    2003-01-01

    Six Sigma is a disciplined approach to process improvement based on customer requirements and data. The goal is to develop or improve processes with defects that are measured at only a few parts per million. The process includes five phases: Identify, Measure, Analyze, Improve, and Control. This report describes the application of the Six Sigma process to improving the High Level Waste (HLW) Tank Farm Corrosion Control Program. The report documents the work performed and the tools utilized while applying the Six Sigma process from September 28, 2001 to April 1, 2002. During Fiscal Year 2001, the High Level Waste Division spent $5.9 million to analyze samples from the F and H Tank Farms. The largest portion of these analytical costs was $2.45 million that was spent to analyze samples taken to support the Corrosion Control Program. The objective of the Process Improvement Project (PIP) team was to reduce the number of analytical tasks required to support the Corrosion Control Program by 50 percent. Based on the data collected, the corrosion control decision process flowchart, and the use of the X-Y Matrix tool, the team determined that analyses in excess of the requirements of the corrosion control program were being performed. Only two of the seven analytical tasks currently performed are required for the 40 waste tanks governed by the Corrosion Control Program. Two additional analytical tasks are required for a small subset of the waste tanks, resulting in an average of 2.7 tasks per sample compared to the current 7 tasks per sample. Forty HLW tanks are sampled periodically as part of the Corrosion Control Program. For each of these tanks, an analysis was performed to evaluate the stability of the chemistry in the tank and then to determine the statistical capability of the tank to meet minimum corrosion inhibitor limits. The analyses proved that most of the tanks were being sampled too frequently. Based on the results of these analyses and the use of additional

  15. Program Transformation to Identify List-Based Parallel Skeletons

    Directory of Open Access Journals (Sweden)

    Venkatesh Kannan

    2016-07-01

    Full Text Available Algorithmic skeletons are used as building-blocks to ease the task of parallel programming by abstracting the details of parallel implementation from the developer. Most existing libraries provide implementations of skeletons that are defined over flat data types such as lists or arrays. However, skeleton-based parallel programming is still very challenging as it requires intricate analysis of the underlying algorithm and often uses inefficient intermediate data structures. Further, the algorithmic structure of a given program may not match those of list-based skeletons. In this paper, we present a method to automatically transform any given program to one that is defined over a list and is more likely to contain instances of list-based skeletons. This facilitates the parallel execution of a transformed program using existing implementations of list-based parallel skeletons. Further, by using an existing transformation called distillation in conjunction with our method, we produce transformed programs that contain fewer inefficient intermediate data structures.
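List-based skeletons of the kind the paper targets can be expressed compactly; a minimal Python sketch of the classic `map` and `reduce` skeletons (the sum-of-squares example is invented):

```python
from functools import reduce
from concurrent.futures import ThreadPoolExecutor

def map_reduce_skeleton(f, combine, xs):
    """map skeleton: apply f to every list element (potentially in parallel);
    reduce skeleton: fold the mapped results with a combining operator."""
    with ThreadPoolExecutor() as pool:
        mapped = list(pool.map(f, xs))
    return reduce(combine, mapped)

# Sum of squares expressed purely with skeletons: no explicit loops or threads.
print(map_reduce_skeleton(lambda x: x * x, lambda a, b: a + b, [1, 2, 3, 4]))  # 30
```

The developer supplies only the element function and the combining operator; the skeleton hides the parallel implementation, which is precisely the abstraction the transformation in the paper tries to expose automatically.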

  16. Development of massively parallel quantum chemistry program SMASH

    Energy Technology Data Exchange (ETDEWEB)

    Ishimura, Kazuya [Department of Theoretical and Computational Molecular Science, Institute for Molecular Science 38 Nishigo-Naka, Myodaiji, Okazaki, Aichi 444-8585 (Japan)

    2015-12-31

    A massively parallel program for quantum chemistry calculations SMASH was released under the Apache License 2.0 in September 2014. The SMASH program is written in the Fortran90/95 language with MPI and OpenMP standards for parallelization. Frequently used routines, such as one- and two-electron integral calculations, are modularized to make program developments simple. The speed-up of the B3LYP energy calculation for (C₁₅₀H₃₀)₂ with the cc-pVDZ basis set (4500 basis functions) was 50,499 on 98,304 cores of the K computer.

  17. Development of massively parallel quantum chemistry program SMASH

    International Nuclear Information System (INIS)

    Ishimura, Kazuya

    2015-01-01

    A massively parallel program for quantum chemistry calculations SMASH was released under the Apache License 2.0 in September 2014. The SMASH program is written in the Fortran90/95 language with MPI and OpenMP standards for parallelization. Frequently used routines, such as one- and two-electron integral calculations, are modularized to make program developments simple. The speed-up of the B3LYP energy calculation for (C₁₅₀H₃₀)₂ with the cc-pVDZ basis set (4500 basis functions) was 50,499 on 98,304 cores of the K computer
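A speed-up of 50,499 on 98,304 cores corresponds to a parallel efficiency of roughly 51%; the arithmetic is a one-liner:

```python
def parallel_efficiency(speedup, cores):
    """Fraction of ideal linear scaling actually achieved."""
    return speedup / cores

# SMASH's reported B3LYP figures on the K computer.
print(round(parallel_efficiency(50499, 98304), 3))  # 0.514
```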

  18. Parallelization for X-ray crystal structural analysis program

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Hiroshi [Japan Atomic Energy Research Inst., Tokyo (Japan); Minami, Masayuki; Yamamoto, Akiji

    1997-10-01

    In this report we study vectorization and parallelization for an X-ray crystal structural analysis program. The target machine is the NEC SX-4, a distributed/shared-memory vector-parallel supercomputer. X-ray crystal structural analysis is surveyed, and a new multi-dimensional discrete Fourier transform method is proposed. The new method is designed to have a very long vector length, enabling a 12.0-times performance improvement over the original code. Beyond this vectorization, parallelization by micro-task functions on the SX-4 achieves a 13.7-times acceleration in the multi-dimensional discrete Fourier transform with 14 CPUs, and a 3.0-times acceleration in the whole program. In total, a 35.9-times acceleration over the original single-CPU scalar version is achieved with vectorization and parallelization on the SX-4. (author)

  19. Hanford Waste Vitrification Plant quality assurance program description for defense high-level waste form development and qualification

    International Nuclear Information System (INIS)

    Hand, R.L.

    1990-12-01

    The US Department of Energy-Office of Civilian Radioactive Waste Management has been designated the national high-level waste repository licensee and the recipient for the canistered waste forms. The Office of Waste Operations executes overall responsibility for producing the canistered waste form. The Hanford Waste Vitrification Plant Project, as part of the waste form producer organization, is organized in a vertical relationship. Overall control is provided by the US Department of Energy-Environmental Restoration and Waste Management Headquarters, with the US Department of Energy-Office of Waste Operations; the US Department of Energy-Headquarters/Vitrification Project Branch; the US Department of Energy-Richland Operations Office/Vitrification Project Office; and the Westinghouse Hanford Company, the operations and engineering contractor. This document has been prepared in response to direction from the US Department of Energy-Office of Civilian Radioactive Waste Management through the US Department of Energy-Richland Operations Office for a quality assurance program that meets the requirements of the US Department of Energy. This document provides guidance and direction for implementing a quality assurance program that applies to the Hanford Waste Vitrification Plant Project. The Hanford Waste Vitrification Plant Project management commits to implementing the quality assurance program activities; reviewing the program periodically, and revising it as necessary to keep it current and effective. 12 refs., 6 figs., 1 tab

  20. On program restructuring, scheduling, and communication for parallel processor systems

    Energy Technology Data Exchange (ETDEWEB)

    Polychronopoulos, Constantine D. [Univ. of Illinois, Urbana, IL (United States)

    1986-08-01

    This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed with a single goal: to maximize program speedup, or equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler, was used to transform programs into a parallel form and conduct experiments. Two new program restructuring techniques are presented: loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, these algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms for dynamic and static scheduling are presented. Simulation results are given for a new dynamic scheduling algorithm. The performance of this algorithm is compared to that of self-scheduling. Techniques for program partitioning and minimization of interprocessor communication for idealized program models and for real Fortran programs are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup and experimental results are presented.
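Loop coalescing, one of the two restructuring techniques named above, flattens a nested loop into a single loop so its iterations can be scheduled as one pool of work; a minimal sketch of the index mapping (the 2×3 bounds are invented):

```python
def coalesced_indices(n, m):
    """Map a flat iteration index k onto the (i, j) indices of a
    doubly nested loop with bounds n and m."""
    for k in range(n * m):
        yield k // m, k % m

# The single coalesced loop visits exactly the iterations of
# `for i in range(2): for j in range(3): ...`, in the same order.
print(list(coalesced_indices(2, 3)))
```

With the nest collapsed, a scheduler can hand out contiguous chunks of `k` to processors without reasoning about the original loop structure.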

  1. Report on the joint USA-Germany drop test program for a vitrified high level waste cask

    International Nuclear Information System (INIS)

    Golliher, K.G.; Witt, C.R.; Wieser, K.E.

    1993-01-01

    A series of full-scale drop tests was performed on a ductile iron transport cask in a cooperative program between the US Department of Energy (DOE) and the Bundesanstalt für Materialprüfung (BAM) in Germany. The tests, performed at BAM's test facility near Lehre, Germany, used a prototype cask designed for transport of Vitrified High Level Waste (VHLW) canisters. The VHLW cask is a right circular cylinder with a diameter of 1156 mm and a height of 3454 cm, and weighs approximately 24.6 kg including its payload of a single VHLW canister. The drop tests were performed with a non-radioactive, prototype VHLW canister in the cavity. (J.P.N.)

  2. First update to the US Nuclear Regulatory Commission's regulatory strategy for the high-level waste repository program

    International Nuclear Information System (INIS)

    Johnson, R.L.; Linehan, J.J.

    1991-01-01

    The US Nuclear Regulatory Commission (NRC) staff has updated its initial regulatory strategy for the High-Level Waste Repository Licensing Program. The update describes changes to the initial strategy and summarizes progress and future activities. This paper summarizes the first update of the regulatory strategy. In general the overall strategy of identifying and reducing uncertainties is unchanged. Identifying regulatory and institutional uncertainties is essentially complete, and therefore, the current and future emphasis is on reducing those regulatory and institutional uncertainties identified to date. The NRC staff has improved the methods of reducing regulatory uncertainties by (1) enhancing the technical basis preparation process for potential rulemakings and guidance and (2) designing a new guidance document, called a staff position, for clarifying regulatory uncertainties. For guiding the US DOE's reduction of technical uncertainties, the NRC staff will give more emphasis to prelicense application reviews and less emphasis on preparing staff technical positions

  3. Basic design of parallel computational program for probabilistic structural analysis

    International Nuclear Information System (INIS)

    Kaji, Yoshiyuki; Arai, Taketoshi; Gu, Wenwei; Nakamura, Hitoshi

    1999-06-01

    In our laboratory, as part of 'development of damage evaluation method of structural brittle materials by microscopic fracture mechanics and probabilistic theory' (nuclear computational science cross-over research), we examine computational methods for a super-parallel computation system coupled with a material strength theory, based on microscopic fracture mechanics for latent cracks, and a continuum structural model, in order to develop new structural reliability evaluation methods for ceramic structures. This technical report reviews probabilistic structural mechanics theory, the basic terms and formulas, and the parallel computation programming methods that relate to the principal elements in the basic design of the computational mechanics program. (author)

  4. Basic design of parallel computational program for probabilistic structural analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kaji, Yoshiyuki; Arai, Taketoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Gu, Wenwei; Nakamura, Hitoshi

    1999-06-01

    In our laboratory, as part of 'Development of a damage evaluation method for brittle structural materials by microscopic fracture mechanics and probabilistic theory' (nuclear computational science cross-over research), we are examining computational methods for a massively parallel computation system that couples a material strength theory, based on microscopic fracture mechanics for latent cracks, with a continuum structural model, in order to develop new structural reliability evaluation methods for ceramic structures. This technical report reviews probabilistic structural mechanics theory, the basic formulations, and the parallel programming methods that bear on the principal elements in the basic design of the computational mechanics program. (author)

  5. Parallel implementation of the PHOENIX generalized stellar atmosphere program. II. Wavelength parallelization

    International Nuclear Information System (INIS)

    Baron, E.; Hauschildt, Peter H.

    1998-01-01

    We describe an important addition to the parallel implementation of our generalized nonlocal thermodynamic equilibrium (NLTE) stellar atmosphere and radiative transfer computer program PHOENIX. In a previous paper in this series we described data and task parallel algorithms we have developed for radiative transfer, spectral line opacity, and NLTE opacity and rate calculations. These algorithms divided the work spatially or by spectral lines, that is, distributing the radial zones, individual spectral lines, or characteristic rays among different processors, and employed, in addition, task parallelism for logically independent functions (such as atomic and molecular line opacities). For finite, monotonic velocity fields, the radiative transfer equation is an initial value problem in wavelength, and hence each wavelength point depends upon the previous one. However, for the sophisticated NLTE models of both static and moving atmospheres needed to accurately describe, e.g., novae and supernovae, the number of wavelength points is very large (200,000 - 300,000), and hence parallelization over wavelength can lead both to considerable speedup in calculation time and to the ability to make use of the aggregate memory available on massively parallel supercomputers. Here, we describe an implementation of a pipelined design for the wavelength parallelization of PHOENIX, in which the necessary data from the processor working on a previous wavelength point are sent to the processor working on the succeeding wavelength point as soon as they are known. Our implementation uses a MIMD design based on a relatively small number of standard Message Passing Interface (MPI) library calls and is fully portable between serial and parallel computers. copyright 1998 The American Astronomical Society
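
    The hand-off pattern described above, in which each processor forwards the wavelength-dependent state to the processor assigned to the next wavelength point as soon as it is computed, can be sketched in a few lines. This is a toy stand-in using Python threads and queues rather than MPI ranks; `step` is a hypothetical per-wavelength solve, not PHOENIX's actual radiative transfer kernel.

```python
import threading
import queue

def run_pipeline(n_workers, n_points, step):
    """Toy wavelength pipeline: worker k handles points k, k+n_workers, ...
    and forwards the updated state to the next point's worker."""
    queues = [queue.Queue() for _ in range(n_workers)]
    results = [None] * n_points

    def worker(k):
        while True:
            i, state = queues[k].get()
            if i is None:               # sentinel: no more wavelength points
                return
            state = step(i, state)      # solve the transfer problem at point i
            results[i] = state
            nxt = i + 1
            if nxt < n_points:
                queues[nxt % n_workers].put((nxt, state))  # hand off as soon as known
            else:
                for q in queues:        # last point done: shut every worker down
                    q.put((None, None))

    threads = [threading.Thread(target=worker, args=(k,)) for k in range(n_workers)]
    for t in threads:
        t.start()
    queues[0].put((0, 0.0))             # initial condition at the first point
    for t in threads:
        t.join()
    return results
```

    In the real code each worker also performs substantial wavelength-local work (opacities, rates), so successive pipeline stages overlap in time instead of executing strictly one after another as they do in this toy.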

  6. A Programming Model for Massive Data Parallelism with Data Dependencies

    International Nuclear Information System (INIS)

    Cui, Xiaohui; Mueller, Frank; Potok, Thomas E.; Zhang, Yongpeng

    2009-01-01

    Accelerating processors can often be more cost- and energy-effective for a wide range of data-parallel computing problems than general-purpose processors. For graphics processing units (GPUs), this is particularly the case when program development is aided by environments such as NVIDIA's Compute Unified Device Architecture (CUDA), which dramatically reduces the gap between domain-specific architectures and general-purpose programming. Nonetheless, general-purpose GPU (GPGPU) programming remains subject to several restrictions. Most significantly, the separation of host (CPU) and accelerator (GPU) address spaces requires explicit management of GPU memory resources, especially for massive data parallelism that well exceeds the memory capacity of GPUs. One solution to this problem is to transfer data between the GPU and host memories frequently. In this work, we investigate another approach: we run massively data-parallel applications on GPU clusters. We further propose a programming model for massive data parallelism with data dependencies for this scenario. Experience from microbenchmarks and real-world applications shows that our model provides not only ease of programming but also significant performance gains.
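
    The memory-capacity problem mentioned above (data sets that exceed device memory, with dependencies between chunks) can be illustrated with a minimal host-side sketch. Pure Python with a thread pool stands in for the GPU here; `DEVICE_CAPACITY`, the doubling `kernel`, and the running-offset dependency are all illustrative assumptions, not part of the authors' model.

```python
from concurrent.futures import ThreadPoolExecutor

DEVICE_CAPACITY = 4  # pretend the accelerator holds only 4 elements at a time

def process_large_array(data, kernel, workers=2):
    """Stream fixed-size tiles through the limited 'device' buffer.
    The kernel is applied data-parallel within each tile; a running offset
    (the inter-tile data dependency) is carried from tile to tile."""
    out, carry = [], 0
    with ThreadPoolExecutor(max_workers=workers) as ex:
        for start in range(0, len(data), DEVICE_CAPACITY):
            tile = data[start:start + DEVICE_CAPACITY]   # host -> device copy
            mapped = list(ex.map(kernel, tile))          # data-parallel kernel
            tile_out = [m + carry for m in mapped]       # apply the dependency
            carry = tile_out[-1]                         # pass it to the next tile
            out.extend(tile_out)                         # device -> host copy
    return out
```

    The point of the sketch is the structure: within a tile the work is independent, while the scalar `carry` is the only value that must flow between tiles, which is what makes streaming a data set larger than device memory feasible.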

  7. Technical position on items and activities in the high-level waste geologic repository program subject to quality assurance requirements

    International Nuclear Information System (INIS)

    Duncan, A.B.; Bilhorn, S.G.; Kennedy, J.E.

    1988-04-01

    This document provides guidance on how to identify items and activities subject to quality assurance in the high-level nuclear waste repository program for the pre-closure and post-closure phases of the repository. In the pre-closure phase, structures, systems, and components essential to the prevention or mitigation of an accident that could result in an off-site radiation dose of 0.5 rem or greater are termed "important to safety". In the post-closure phase, the barriers relied on to meet the containment and isolation requirements are defined as "important to waste isolation". These structures, systems, components, and barriers, and the activities related to their characterization, design, construction, and operation, are required to meet quality assurance (QA) criteria to provide confidence in the performance of the geologic repository. The list of structures, systems, and components important to safety and engineered barriers important to waste isolation is referred to as the "Q-List" and lies within the scope of the QA program. 10 refs

  8. Python based high-level synthesis compiler

    Science.gov (United States)

    Cieszewski, Radosław; Pozniak, Krzysztof; Romaniuk, Ryszard

    2014-11-01

    This paper presents a Python-based high-level synthesis (HLS) compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and maps it to VHDL. An FPGA combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This article describes the design, implementation, and first results of the created Python-based compiler.
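
    A high-level synthesis front end of the kind described can be sketched with Python's own `ast` module: parse a restricted Python assignment and emit a VHDL-style signal assignment. This is a hypothetical miniature, not the compiler from the paper; it handles only names, integer constants, and a few binary operators.

```python
import ast

# map Python operator AST nodes to VHDL operator tokens
OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}

def to_vhdl_expr(node):
    """Translate a tiny subset of Python expressions into VHDL syntax."""
    if isinstance(node, ast.BinOp):
        return f"({to_vhdl_expr(node.left)} {OPS[type(node.op)]} {to_vhdl_expr(node.right)})"
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Constant):
        return str(node.value)
    raise NotImplementedError(type(node).__name__)

def compile_assignment(src):
    """Turn 'y = a + b * 2' into the VHDL signal assignment 'y <= (a + (b * 2));'."""
    stmt = ast.parse(src).body[0]          # a single ast.Assign statement
    target = stmt.targets[0].id
    return f"{target} <= {to_vhdl_expr(stmt.value)};"
```

    A real HLS compiler must of course also schedule operations into clock cycles and infer hardware resources; the sketch only shows the syntax-directed translation step.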

  9. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.

    2017-03-13

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, the RBM is a deep learning algorithm used in this project; its time performance is improved through an efficient parallel implementation with the OpenACC tool, applying the best possible optimizations to the RBM to harness the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. The RBM is a useful neural-network basis for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of the RBM are available in other models such as OpenMP and CUDA, but this project is the first attempt to apply the OpenACC model to the RBM.
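
    The Contrastive Divergence training step mentioned above can be sketched for a tiny binary RBM in pure Python. Biases are omitted, the sizes and learning rate are illustrative, and the doubly nested weight-update loop is kept explicit because loops of that shape are what a directive-based model such as OpenACC would annotate for offload.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bernoulli(p, rng):
    return 1 if rng.random() < p else 0

def cd1_update(W, v0, lr, rng):
    """One CD-1 step for a tiny binary RBM (biases omitted for brevity).
    W[i][j] couples visible unit i to hidden unit j; W is updated in place."""
    nv, nh = len(W), len(W[0])
    # positive phase: hidden probabilities and samples given the data vector
    ph0 = [sigmoid(sum(v0[i] * W[i][j] for i in range(nv))) for j in range(nh)]
    h0 = [bernoulli(p, rng) for p in ph0]
    # negative phase: reconstruct the visibles, then recompute hidden probabilities
    pv1 = [sigmoid(sum(h0[j] * W[i][j] for j in range(nh))) for i in range(nv)]
    v1 = [bernoulli(p, rng) for p in pv1]
    ph1 = [sigmoid(sum(v1[i] * W[i][j] for i in range(nv))) for j in range(nh)]
    # contrastive divergence update: lr * (<v0 h0> - <v1 h1>)
    # (this doubly nested loop is the kind of kernel a 'parallel loop' directive targets)
    for i in range(nv):
        for j in range(nh):
            W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
    return W
```

    Because every (i, j) update is independent, the final loop nest is embarrassingly parallel, which is why RBM training maps well onto GPUs regardless of whether the directives come from OpenMP, OpenACC, or hand-written CUDA.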

  10. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.; Khan, Ayaz H.

    2017-01-01

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, the RBM is a deep learning algorithm used in this project; its time performance is improved through an efficient parallel implementation with the OpenACC tool, applying the best possible optimizations to the RBM to harness the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. The RBM is a useful neural-network basis for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of the RBM are available in other models such as OpenMP and CUDA, but this project is the first attempt to apply the OpenACC model to the RBM.

  11. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Francois Bodin

    2009-01-01

    Full Text Available Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware displays a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving the legacy code.

  12. Testing New Programming Paradigms with NAS Parallel Benchmarks

    Science.gov (United States)

    Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.

    2000-01-01

    Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also in the increasing complexity of real applications. Technologies have been developed that aim to scale up to thousands of processors on both distributed and shared memory systems. Developing parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g. MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new efforts have been made to define new parallel programming paradigms. The best examples are HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology, its performance is still questionable. Although the use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development is the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." To test these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3.
Optimization of memory and cache usage
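
    The shared-memory, directive-style decomposition that OpenMP applies to loops such as those in the NPB kernels can be imitated in miniature: split a reduction's iteration space into contiguous chunks, one per worker, and combine the partial results. A Python thread pool stands in for the OpenMP runtime here, and the quadratic-sum kernel is purely illustrative, not one of the NPB kernels.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """Work done by one 'thread': sum of squares over its chunk [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_reduce(n, workers=4):
    """Mimics an OpenMP 'parallel for reduction(+)': the iteration space
    [0, n) is split into contiguous chunks, one per worker, and the
    per-chunk partial sums are combined at the end."""
    bounds = [(k * n // workers, (k + 1) * n // workers) for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(partial_sum, bounds))
```

    The equivalent OpenMP source would carry a single `parallel for` directive with a `reduction(+)` clause on the loop, which is exactly the "simple and clear extension to a sequential program" the abstract describes.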

  13. Scientific programming on massively parallel processor CP-PACS

    International Nuclear Information System (INIS)

    Boku, Taisuke

    1998-01-01

    The massively parallel processor CP-PACS targets a wide range of problems in computational physics, and its architecture has been devised to handle various kinds of numerical processing. This report outlines the CP-PACS and gives a programming example, the Kernel CG benchmark from NAS Parallel Benchmarks version 1; it then describes the pseudo-vector processing mechanism and the parallel-processing tuning of scientific and technical computations using the three-dimensional hyper-crossbar network, the two main features of the CP-PACS architecture. The CP-PACS uses PUs based on a RISC processor augmented with a pseudo-vector processor. Pseudo-vector processing is realized as loop processing by scalar instructions. The features of the network connecting the PUs are explained. The algorithm of the NPB version 1 Kernel CG is shown. The most time-consuming part of the main loop is the matrix-vector product (matvec), and its parallelization is explained. The computation time on the CPU is determined. As performance evaluation, the execution time, the short-vector processing of the pseudo-vector processor based on a slide window, and comparisons with other parallel computers are reported. (K.I.)
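
    The matvec parallelization discussed for the Kernel CG benchmark distributes rows of the matrix among processing units, each computing its block of the result vector. A minimal sketch of that row decomposition, with a Python thread pool standing in for the CP-PACS PUs:

```python
from concurrent.futures import ThreadPoolExecutor

def matvec_rows(A, x, rows):
    """Compute the entries of y = A @ x for the given row indices only."""
    return [sum(a * b for a, b in zip(A[r], x)) for r in rows]

def parallel_matvec(A, x, workers=2):
    """Distribute contiguous row blocks of A among workers; each worker
    computes its block of y, and the blocks are concatenated in order."""
    n = len(A)
    blocks = [range(k * n // workers, (k + 1) * n // workers) for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(lambda rs: matvec_rows(A, x, rs), blocks)
    y = []
    for p in parts:
        y.extend(p)
    return y
```

    On a real distributed-memory machine like the CP-PACS each PU would also need the relevant pieces of x communicated over the interconnect; that step is omitted here since Python threads share memory.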

  14. U.S. Programs in the development of spent fuel and high-level waste disposal technology

    International Nuclear Information System (INIS)

    Rusche, B.C.

    1987-01-01

    U.S. progress in the development of a national high-level radioactive waste disposal system is reported. The mutual benefits of international cooperation in developing the technology for radioactive waste management and disposal are also described. (Huang)

  15. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  16. Feedback Driven Annotation and Refactoring of Parallel Programs

    DEFF Research Database (Denmark)

    Larsen, Per

    and communication in embedded programs. Runtime checks are developed to ensure that annotations correctly describe observable program behavior. The performance impact of runtime checking is evaluated on several benchmark kernels and is negligible in all cases. The second aspect is compilation feedback. Annotations...... are not effective unless programmers are told how and when they are beneficial. A prototype compilation feedback system was developed in collaboration with IBM Haifa Research Labs. It reports issues that prevent further analysis to the programmer. Performance evaluation shows that three programs performed significantly......This thesis combines programmer knowledge and feedback to improve modeling and optimization of software. The research is motivated by two observations. First, there is a great need for automatic analysis of software for embedded systems - to expose and model parallelism inherent in programs. Second...

  17. Regulatory perspectives on model validation in high-level radioactive waste management programs: A joint NRC/SKI white paper

    Energy Technology Data Exchange (ETDEWEB)

    Wingefors, S.; Andersson, J.; Norrby, S. [Swedish Nuclear Power lnspectorate, Stockholm (Sweden). Office of Nuclear Waste Safety; Eisenberg, N.A.; Lee, M.P.; Federline, M.V. [U.S. Nuclear Regulatory Commission, Washington, DC (United States). Office of Nuclear Material Safety and Safeguards; Sagar, B.; Wittmeyer, G.W. [Center for Nuclear Waste Regulatory Analyses, San Antonio, TX (United States)

    1999-03-01

    Validation (or confidence building) should be an important aspect of the regulatory uses of mathematical models in the safety assessments of geologic repositories for the disposal of spent nuclear fuel and other high-level radioactive wastes (HLW). A substantial body of literature exists indicating the manner in which scientific validation of models is usually pursued. Because models for a geologic repository performance assessment cannot be tested over the spatial scales of interest and the long time periods for which the models will make estimates of performance, the usual avenue for model validation, that is, comparison of model estimates with actual data at the space-time scales of interest, is precluded. Further complicating the model validation process in HLW programs are the uncertainties inherent in describing the geologic complexities of potential disposal sites, and their interactions with the engineered system, with a limited set of generally imprecise data, making it difficult to discriminate between model discrepancy and inadequacy of input data. A successful strategy for model validation, therefore, should attempt to recognize these difficulties, address their resolution, and document the resolution in a careful manner. The end result of validation efforts should be a documented enhancement of confidence in the model to an extent that the model's results can aid in regulatory decision-making. The level of validation needed should be determined by the intended uses of these models, rather than by the ideal of validation of a scientific theory. This white paper presents a model validation strategy that can be implemented in a regulatory environment. It was prepared jointly by staff members of the U.S. Nuclear Regulatory Commission and the Swedish Nuclear Power Inspectorate (SKI). This document should not be viewed as, and is not intended to be, formal guidance or a staff position on this matter. Rather, based on a review of the literature and previous

  18. Regulatory perspectives on model validation in high-level radioactive waste management programs: A joint NRC/SKI white paper

    International Nuclear Information System (INIS)

    Wingefors, S.; Andersson, J.; Norrby, S.

    1999-03-01

    Validation (or confidence building) should be an important aspect of the regulatory uses of mathematical models in the safety assessments of geologic repositories for the disposal of spent nuclear fuel and other high-level radioactive wastes (HLW). A substantial body of literature exists indicating the manner in which scientific validation of models is usually pursued. Because models for a geologic repository performance assessment cannot be tested over the spatial scales of interest and the long time periods for which the models will make estimates of performance, the usual avenue for model validation, that is, comparison of model estimates with actual data at the space-time scales of interest, is precluded. Further complicating the model validation process in HLW programs are the uncertainties inherent in describing the geologic complexities of potential disposal sites, and their interactions with the engineered system, with a limited set of generally imprecise data, making it difficult to discriminate between model discrepancy and inadequacy of input data. A successful strategy for model validation, therefore, should attempt to recognize these difficulties, address their resolution, and document the resolution in a careful manner. The end result of validation efforts should be a documented enhancement of confidence in the model to an extent that the model's results can aid in regulatory decision-making. The level of validation needed should be determined by the intended uses of these models, rather than by the ideal of validation of a scientific theory. This white paper presents a model validation strategy that can be implemented in a regulatory environment. It was prepared jointly by staff members of the U.S. Nuclear Regulatory Commission and the Swedish Nuclear Power Inspectorate (SKI). This document should not be viewed as, and is not intended to be, formal guidance or a staff position on this matter. Rather, based on a review of the literature and previous

  19. A scalable parallel algorithm for multiple objective linear programs

    Science.gov (United States)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLPs). Job balance, speedup, and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLPs, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLPs are also included.
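
    Generating the efficient (nondominated) points is the parallelizable core of such algorithms. As an illustration only, and not the ADBASE algorithm itself, here is a brute-force Pareto filter that checks each candidate point independently in parallel, assuming all objectives are to be maximized:

```python
from concurrent.futures import ThreadPoolExecutor

def dominates(p, q):
    """p dominates q if p is at least as good in every objective
    and strictly better in at least one (maximization assumed)."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def efficient_points(points, workers=2):
    """Keep the points not dominated by any other point. The per-point
    dominance checks are independent, so they are farmed out to a pool."""
    def is_efficient(p):
        return not any(dominates(q, p) for q in points if q != p)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        keep = list(ex.map(is_efficient, points))
    return [p for p, k in zip(points, keep) if k]
```

    Real MOLP solvers instead pivot between efficient extreme points of the feasible polyhedron, but the same property, independence of the per-candidate work, is what makes the job-balanced parallel decomposition in the paper possible.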

  20. MPI_XSTAR: MPI-based parallelization of XSTAR program

    Science.gov (United States)

    Danehkar, A.

    2017-12-01

    MPI_XSTAR parallelizes execution of multiple XSTAR runs using Message Passing Interface (MPI). XSTAR (ascl:9910.008), part of the HEASARC's HEAsoft (ascl:1408.004) package, calculates the physical conditions and emission spectra of ionized gases. MPI_XSTAR invokes XSTINITABLE from HEASoft to generate a job list of XSTAR commands for given physical parameters. The job list is used to make directories in ascending order, where each individual XSTAR is spawned on each processor and outputs are saved. HEASoft's XSTAR2TABLE program is invoked upon the contents of each directory in order to produce table model FITS files for spectroscopy analysis tools.
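
    The dispatch pattern described, building a job list, creating one numbered directory per run, and spawning the runs across processors, can be sketched with a thread pool standing in for MPI ranks. `run_one` is a placeholder callable standing in for one XSTAR invocation, and the directory layout is illustrative:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def run_grid(params_list, run_one, workers=3):
    """Dispatch one run per parameter set, each writing its output into
    its own numbered directory (run000, run001, ...), and return the
    directories in job-list order."""
    root = tempfile.mkdtemp(prefix="grid_")

    def job(item):
        idx, params = item
        outdir = os.path.join(root, f"run{idx:03d}")
        os.makedirs(outdir)
        result = run_one(params)               # stands in for one XSTAR run
        with open(os.path.join(outdir, "out.txt"), "w") as f:
            f.write(repr(result))
        return outdir

    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(job, enumerate(params_list)))
```

    Because each run is completely independent, this kind of parameter-grid workload scales almost linearly with the number of processors, which is the point of wrapping XSTAR in an MPI harness.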

  1. Parallelization and checkpointing of GPU applications through program transformation

    Energy Technology Data Exchange (ETDEWEB)

    Solano-Quinde, Lizandro Damian [Iowa State Univ., Ames, IA (United States)

    2012-01-01

    GPUs have emerged as a powerful tool for accelerating general-purpose applications. The availability of programming languages that make writing general-purpose applications for GPUs tractable has consolidated GPUs as an alternative for accelerating general-purpose applications. Among the areas that have benefited from GPU acceleration are signal and image processing, computational fluid dynamics, quantum chemistry, and, in general, the High Performance Computing (HPC) industry. In order to continue to exploit higher levels of parallelism with GPUs, multi-GPU systems are gaining popularity. In this context, single-GPU applications are parallelized for running on multi-GPU systems. Furthermore, multi-GPU systems help overcome the GPU memory limitation for applications with a large memory footprint. Parallelizing single-GPU applications has been approached with libraries that distribute the workload at runtime; however, they impose execution overhead and are not portable. On the other hand, on traditional CPU systems, parallelization has been approached through application transformation at pre-compile time, which enhances the application to distribute the workload at the application level and does not have the issues of library-based approaches. Hence, a parallelization scheme for GPU systems based on application transformation is needed. Like any computing engine of today, reliability is also a concern in GPUs. GPUs are vulnerable to transient and permanent failures. Current checkpoint/restart techniques are not suitable for systems with GPUs. Checkpointing for GPU systems presents new and interesting challenges, primarily due to the natural differences imposed by the hardware design, the memory subsystem architecture, the massive number of threads, and the limited amount of synchronization among threads. Therefore, a checkpoint/restart technique suitable for GPU systems is needed.
The goal of this work is to exploit higher levels of parallelism and

  2. The Glasgow Parallel Reduction Machine: Programming Shared-memory Many-core Systems using Parallel Task Composition

    Directory of Open Access Journals (Sweden)

    Ashkan Tousimojarad

    2013-12-01

    Full Text Available We present the Glasgow Parallel Reduction Machine (GPRM, a novel, flexible framework for parallel task-composition based many-core programming. We allow the programmer to structure programs into task code, written as C++ classes, and communication code, written in a restricted subset of C++ with functional semantics and parallel evaluation. In this paper we discuss the GPRM, the virtual machine framework that enables the parallel task composition approach. We focus the discussion on GPIR, the functional language used as the intermediate representation of the bytecode running on the GPRM. Using examples in this language we show the flexibility and power of our task composition framework. We demonstrate the potential using an implementation of a merge sort algorithm on a 64-core Tilera processor, as well as on a conventional Intel quad-core processor and an AMD 48-core processor system. We also compare our framework with OpenMP tasks in a parallel pointer chasing algorithm running on the Tilera processor. Our results show that the GPRM programs outperform the corresponding OpenMP codes on all test platforms, and can greatly facilitate writing of parallel programs, in particular non-data parallel algorithms such as reductions.
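
    The compose-independent-tasks style that GPRM applies to merge sort can be imitated with a thread pool: sort chunks as independent tasks, then merge the sorted runs pairwise. This sketch uses Python threads rather than the GPRM virtual machine and makes no claim about its task-scheduling behavior:

```python
from concurrent.futures import ThreadPoolExecutor

def merge(a, b):
    """Standard two-way merge of sorted lists a and b."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def parallel_merge_sort(xs, workers=4):
    """Sort chunks as independent tasks, then merge the sorted runs
    pairwise in rounds until one run remains."""
    chunk = max(1, len(xs) // workers)
    runs = [sorted(xs[i:i + chunk]) for i in range(0, len(xs), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        while len(runs) > 1:
            pairs = [(runs[i], runs[i + 1]) for i in range(0, len(runs) - 1, 2)]
            merged = list(ex.map(lambda ab: merge(*ab), pairs))
            if len(runs) % 2:            # odd run out: carry it to the next round
                merged.append(runs[-1])
            runs = merged
    return runs[0] if runs else []
```

    The merging rounds form a reduction tree, which is exactly the kind of non-data-parallel, task-composition workload the abstract reports GPRM handling better than OpenMP tasks.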

  3. An object-oriented programming paradigm for parallelization of computational fluid dynamics

    International Nuclear Information System (INIS)

    Ohta, Takashi.

    1997-03-01

    We propose an object-oriented programming paradigm for the parallelization of scientific computing programs and show that the approach can be a very useful strategy. Generally, parallelization of scientific programs tends to be complicated and unportable due to the specific requirements of each parallel computer or compiler. In this paper, we show that an object-oriented program design, which separates the parallel-processing parts from the solver of the application, can achieve a large improvement in code maintainability as well as high portability. We design a program for the two-dimensional Euler equations according to this paradigm and evaluate its parallel performance on an IBM SP2. (author)
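
    The separation the paper advocates, a solver class that delegates all data exchange to a communication object, can be sketched as follows. `SerialComm` and the 1-D averaging step are hypothetical illustrations; a parallel build would subclass the communication object (for example with MPI halo exchanges) without touching the solver:

```python
class SerialComm:
    """Communication layer for a single-domain run: the 'ghost' values at
    each end are simply the local boundary values. A parallel variant
    would fetch them from neighboring subdomains instead."""
    def exchange_halo(self, u):
        return u[0], u[-1]

class Solver:
    """1-D neighbor-averaging step; knows nothing about how ghost
    values are obtained, so it is unchanged between serial and
    parallel builds."""
    def __init__(self, comm):
        self.comm = comm

    def step(self, u):
        left, right = self.comm.exchange_halo(u)   # communication part
        padded = [left] + u + [right]
        # numerical part: each cell becomes the average of its neighbors
        return [(padded[i - 1] + padded[i + 1]) / 2.0 for i in range(1, len(u) + 1)]
```

    Porting to a new machine then means writing one new communication subclass, which is where the claimed gains in maintainability and portability come from.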

  4. Overview of the US program for developing a waste disposal system for spent nuclear fuel and high-level waste

    International Nuclear Information System (INIS)

    Kay, C.E.

    1988-01-01

    Safe disposal of spent nuclear fuel and radioactive high-level waste (HLW) has been a matter of national concern ever since the first US civilian nuclear reactor began generating electricity in 1957. Based on current projections of commercial generating capacity, by the turn of the century there will be >40,000 tonnes of spent fuel in the United States. In addition to commercial spent fuel, defense HLW is generated in the United States and currently stored at three US Department of Energy (DOE) sites. The Nuclear Waste Policy Amendments Act of 1987 provided for financial incentives to host a repository or a monitored retrievable storage (MRS) facility; mandated the areas in which DOE's siting efforts should concentrate (Yucca Mountain, Nevada); required termination of site-specific activities at other sites; required a re-siting process for an MRS facility, which DOE had proposed as an integral part of the waste disposal system; terminated all activities for identifying candidates for a second repository; established an 11-member Nuclear Waste Technical Review Board; established a three-member MRS commission to be appointed by the heads of the US Senate and House; directed the President to appoint a negotiator to seek a state or Indian tribe willing to host a repository or MRS facility at a suitable site and to negotiate terms and conditions under which the state or tribe would be willing to host such a facility; and amended, adjusted, or established other requirements contained in the 1982 law

  5. Overview of the U.S. program for the management of spent nuclear fuel and high-level waste

    International Nuclear Information System (INIS)

    Kay, C.E.

    1991-01-01

    An important development in the waste-management program conducted by the U.S. Department of Energy (DOE) was the enactment of the Nuclear Waste Policy Amendments Act of 1987 (Amendments Act). The Amendments Act directs DOE to characterize only one site for the first repository; to develop only one repository at present; and to site, construct, and operate a facility for monitored retrievable storage (MRS), subject to certain conditions. Thus, the system authorized by the Congress consists of a geologic repository, an MRS facility, and a transportation system. Because Congress has streamlined the program by reducing options for the major elements of the system, the DOE will be able to concentrate on the technical activities needed for licensing and on developing an integrated system that is optimized for efficiency and manageability. Therefore, DOE is increasing emphasis on systems integration and on quality assurance. The focus of the program remains permanent disposal in a repository. (author) 4 refs

  6. Branch technical position on the use of expert elicitation in the high-level radioactive waste program

    International Nuclear Information System (INIS)

    Kotra, J.P.; Lee, M.P.; Eisenberg, N.A.; DeWispelare, A.R.

    1996-11-01

    Should the site be found suitable, DOE will apply to the US Nuclear Regulatory Commission for permission to construct and then operate a proposed geologic repository for the disposal of spent nuclear fuel and other high-level radioactive waste at Yucca Mountain. In deciding whether to grant or deny DOE's license application for a geologic repository, NRC will closely examine the facts and expert judgment set forth in any potential DOE license application. NRC expects that subjective judgments of individual experts and, in some cases, groups of experts, will be used by DOE to interpret data obtained during site characterization and to address the many technical issues and inherent uncertainties associated with predicting the performance of a repository system for thousands of years. NRC has traditionally accepted, for review, expert judgment to evaluate and interpret the factual bases of license applications and is expected to give appropriate consideration to the judgments of DOE's experts regarding the geologic repository. Such consideration, however, envisions DOE using expert judgments to complement and supplement other sources of scientific and technical information, such as data collection, analyses, and experimentation. In this document, the NRC staff has set forth technical positions that: (1) provide general guidelines on those circumstances that may warrant the use of a formal process for obtaining the judgments of more than one expert (i.e., expert elicitation); and (2) describe acceptable procedures for conducting expert elicitation when formally elicited judgments are used to support a demonstration of compliance with NRC's geologic disposal regulation, currently set forth in 10 CFR Part 60. 76 refs

  7. CAMAC and high-level-languages

    International Nuclear Information System (INIS)

    Degenhardt, K.H.

    1976-05-01

    A proposal for easy programming of CAMAC systems with high-level-languages (FORTRAN, RTL/2, etc.) and interpreters (BASIC, MUMTI, etc.) using a few subroutines and a LAM driver is presented. The subroutines and the LAM driver are implemented for PDP11/RSX-11M and for the CAMAC controllers DEC CA11A (branch controller), BORER type 1533A (single crate controller) and DEC CA11F (single crate controller). Mixed parallel/serial CAMAC systems employing KINETIC SYSTEMS serial driver mod. 3992 and serial crate controllers mod. 3950 are implemented for all mentioned parallel controllers, too. DMA transfers from or to CAMAC modules using non-processor-request controllers (BORER type 1542, DEC CA11FN) are available. (orig.) [de

  8. Product traceability and quality as applied to the United States transuranic and high-level waste repository programs

    International Nuclear Information System (INIS)

    Pickering, S.Y.; Orrell, S.A.

    2000-01-01

    As with any repository program, predictions of the performance of a site over very long time frames may often meet with skepticism from the public and decision-makers, such as regulatory and governmental agencies. Experience at the WIPP and the YMP indicates that demonstrating the defensibility of data, conceptual models, computer codes, and numerical analyses is critical. Five overarching principles have been found to be the basis of a technically and publicly acceptable repository: traceability, transparency, reproducibility, retrievability, and reviews. - Traceability allows one to understand the source and justification of the data and other input that generate conclusions. - Transparency allows one to follow the logic, calculations, and other operations that produce results. - Reproducibility allows one to reconstruct the results without recourse to the originator of the information. - Retrievability allows one to retrieve the documentation that demonstrates these overarching principles. - Reviews ensure that the work is technically acceptable, complete, and accurate. This paper discusses how these principles are applied at the WIPP and the YMP. The principles are implemented by setting up quality assurance and management controls (e.g., procedures, audits, peer reviews). Without successfully applying them, the WIPP would not have progressed from research to industrial maturity; the YMP is ensuring that they are implemented for activities that support licensing. Any repository program concerned with demonstrating defensibility to the public and regulators would do well to incorporate traceability, transparency, reproducibility, retrievability, and reviews into its program. (authors)

  9. Massive parallel electromagnetic field simulation program JEMS-FDTD design and implementation on jasmin

    International Nuclear Information System (INIS)

    Li Hanyu; Zhou Haijing; Dong Zhiwei; Liao Cheng; Chang Lei; Cao Xiaolin; Xiao Li

    2010-01-01

    A large-scale parallel electromagnetic field simulation program, JEMS-FDTD (J Electromagnetic Solver-Finite Difference Time Domain), is designed and implemented on JASMIN (J parallel Adaptive Structured Mesh applications INfrastructure). This program can simulate the propagation, radiation, and coupling of electromagnetic fields by solving Maxwell's equations explicitly on a structured mesh with the FDTD method. JEMS-FDTD is able to simulate billion-mesh-scale problems on thousands of processors. In this article, the program is verified by simulating the radiation of an electric dipole. A beam waveguide is simulated to demonstrate the capability of large-scale parallel computation. A parallel performance test indicates that a high parallel efficiency is obtained. (authors)

  10. NWTS program criteria for mined geologic disposal of nuclear waste: functional requirements and performance criteria for waste packages for solidified high-level waste and spent fuel

    International Nuclear Information System (INIS)

    1982-07-01

    The Department of Energy (DOE) has primary federal responsibility for the development and implementation of safe and environmentally acceptable nuclear waste disposal methods. Currently, the principal emphasis in the program is on emplacement of nuclear wastes in mined geologic repositories well beneath the earth's surface. A brief description of the mined geologic disposal system is provided. The National Waste Terminal Storage (NWTS) program was established under DOE's predecessor, the Energy Research and Development Administration, to provide facilities for the mined geologic disposal of radioactive wastes. The NWTS program includes both the development and the implementation of the technology necessary for designing, constructing, licensing, and operating repositories. The program does not include the management of processing radioactive wastes or of transporting the wastes to repositories. The NWTS-33 series, of which this document is a part, provides guidance for the NWTS program in the development and implementation of licensed mined geologic disposal systems for solidified high-level and transuranic (TRU) wastes. This document presents the functional requirements and performance criteria for waste packages for solidified high-level waste and spent fuel. A separate document to be developed, NWTS-33(4b), will present the requirements and criteria for waste packages for TRU wastes. The hierarchy and application of these requirements and criteria are discussed in Section 2.2

  11. Environmental program planning for the proposed high-level nuclear waste repository at Yucca Mountain, Nevada: Volume 1

    International Nuclear Information System (INIS)

    1987-08-01

    Environmental protection during the course of siting and constructing a repository is mandated by NWPA in conjunction with various phases of repository siting and development. However, DOE has issued no comprehensive, integrated plan for environmental protection. Consequently, it is unclear how DOE will accomplish environmental assessment, monitoring, impact mitigation, and site reclamation. DOE should, therefore, defer further implementation of its current characterization program until a comprehensive environmental protection plan is available. To fulfill its oversight responsibilities the State of Nevada has proposed a comprehensive environmental program for the Yucca Mountain site that includes immediately undertaking studies to establish a 12-month baseline of environmental information at the site; adopting the DOE Site Characterization Plan (SCP) and the engineering design plans it will contain as the basis for defining the impact potential of site characterization activities; using the environmental baseline and the SCP to evaluate the efficacy of the preliminary impact analyses reported by DOE in the EA; using the SCP as the basis for discussions with federal, state, and local regulatory authorities to decide which environmental requirements apply and how they can be complied with; using the SCP, the EA impact review, and the compliance requirements to determine the scope of reclamation measures needed; and developing environmental monitoring and impact mitigation plans based on the EA impact review, compliance requirements, and anticipated reclamation needs

  12. Methods for estimating costs of transporting spent fuel and defense high-level radioactive waste for the civilian radioactive waste management program

    International Nuclear Information System (INIS)

    Darrough, M.E.; Lilly, M.J.

    1989-01-01

    The US Department of Energy (DOE), through the Office of Civilian Radioactive Waste Management, is planning and developing a transportation program for the shipment of spent fuel and defense high-level waste from current storage locations to the site of the mined geologic repository. In addition to its responsibility for providing a safe transportation system, the DOE will assure that the transportation program will function with the other system components to create an integrated waste management system. In meeting these objectives, the DOE will use private industry to the maximum extent practicable and in a manner that is cost effective. This paper discusses various methodologies used for estimating costs for the national radioactive waste transportation system. Estimating these transportation costs is a complex effort, as the high-level radioactive waste transportation system, itself, will be complex. Spent fuel and high-level waste will be transported from more than 100 nuclear power plants and defense sites across the continental US, using multiple transport modes (truck, rail, and barge/rail) and varying sizes and types of casks. Advance notification to corridor states will be given and scheduling will need to be coordinated with utilities, carriers, state and local officials, and the DOE waste acceptance facilities. Additionally, the waste forms will vary in terms of reactor type, size, weight, age, radioactivity, and temperature

  13. A discussion about high-level radioactive waste disposal program. From the results of dialogue with citizens

    International Nuclear Information System (INIS)

    Kimura, Hiroshi; Furukawa, Masashi; Sugiyama, Daisuke; Chida, Taiji

    2008-01-01

    Implementation of HLW disposal is an urgent issue if the use of nuclear power is to continue. However, citizens may not have sufficient information or knowledge about HLW disposal to make their own decisions on this issue. To learn how citizens understand HLW disposal, we held face-to-face dialogues about HLW disposal with 11 citizen groups. Each group consisted of 2-3 persons, and we held three dialogues with each group. In these dialogues, the participants showed a certain amount of knowledge about HLW disposal and expressed their own opinions on the HLW disposal program. These opinions included doubts about the open application system for selecting the siting area, NIMBY-like sentiment, a perceived lack of public relations about HLW disposal, and so on. (author)

  14. Exploiting variability for energy optimization of parallel programs

    Energy Technology Data Exchange (ETDEWEB)

    Lavrijsen, Wim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Iancu, Costin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); de Jong, Wibe [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chen, Xin [Georgia Inst. of Technology, Atlanta, GA (United States); Schwan, Karsten [Georgia Inst. of Technology, Atlanta, GA (United States)

    2016-04-18

    In this paper we present optimizations that use DVFS mechanisms to reduce the total energy usage in scientific applications. Our main insight is that noise is intrinsic to large scale parallel executions and it appears whenever shared resources are contended. The presence of noise allows us to identify and manipulate any program regions amenable to DVFS. When compared to previous energy optimizations that make per core decisions using predictions of the running time, our scheme uses a qualitative approach to recognize the signature of executions amenable to DVFS. By recognizing the "shape of variability" we can optimize codes with highly dynamic behavior, which pose challenges to all existing DVFS techniques. We validate our approach using offline and online analyses for one-sided and two-sided communication paradigms. We have applied our methods to NWChem, and we show best case improvements in energy use of 12% at no loss in performance when using online optimizations running on 720 Haswell cores with one-sided communication. With NWChem on MPI two-sided and offline analysis, capturing the initialization, we find energy savings of up to 20%, with less than 1% performance cost.
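    The "shape of variability" idea can be sketched as follows: regions that repeatedly spend a large fraction of their time waiting on contended shared resources have slack that a lower frequency can absorb without hurting performance. The sketch below illustrates that selection logic only; the function, region names, and threshold are hypothetical, not the authors' implementation.

```python
# Sketch (not the authors' implementation): flag program regions whose
# timing samples show a large waiting ("noise") fraction, i.e. slack that
# a lower DVFS frequency could absorb without hurting performance.
# Region names and the threshold are hypothetical.

def dvfs_candidates(region_samples, slack_threshold=0.2):
    """Return regions whose mean fraction of time spent waiting on
    contended shared resources exceeds slack_threshold."""
    candidates = []
    for region, samples in region_samples.items():
        total = sum(compute + wait for compute, wait in samples)
        waiting = sum(wait for _, wait in samples)
        if total > 0 and waiting / total >= slack_threshold:
            candidates.append(region)
    return sorted(candidates)

timings = {                                  # (compute_time, wait_time) pairs
    "fft_phase": [(9.0, 1.0), (8.5, 1.5)],   # mostly compute: keep frequency
    "halo_exch": [(2.0, 8.0), (3.0, 7.0)],   # mostly waiting: scale down
}
print(dvfs_candidates(timings))  # → ['halo_exch']
```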

  15. MulticoreBSP for C : A high-performance library for shared-memory parallel programming

    NARCIS (Netherlands)

    Yzelman, A. N.; Bisseling, R. H.; Roose, D.; Meerbergen, K.

    2014-01-01

    The bulk synchronous parallel (BSP) model, as well as parallel programming interfaces based on BSP, classically target distributed-memory parallel architectures. In earlier work, Yzelman and Bisseling designed a MulticoreBSP for Java library specifically for shared-memory architectures. In the
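    A BSP superstep, local computation followed by a bulk synchronization before any thread reads data produced by the others, can be illustrated with stdlib threads. This is an illustration of the model only, not the MulticoreBSP for C API.

```python
# Illustrative BSP superstep with stdlib threads (not the MulticoreBSP for C
# API): each thread computes a partial sum locally, then all threads meet at
# a barrier before any of them reads the others' partial results.
import threading

def bsp_sum(values, nthreads=2):
    partial = [0] * nthreads
    total = [0] * nthreads
    barrier = threading.Barrier(nthreads)

    def proc(pid):
        partial[pid] = sum(values[pid::nthreads])  # superstep 1: local work
        barrier.wait()                             # bulk synchronization
        total[pid] = sum(partial)                  # superstep 2: read all

    threads = [threading.Thread(target=proc, args=(i,)) for i in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]

print(bsp_sum(list(range(10))))  # → 45
```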

  16. Recommended safety, reliability, quality assurance and management aerospace techniques with possible application by the DOE to the high-level radioactive waste repository program

    International Nuclear Information System (INIS)

    Bland, W.M. Jr.

    1985-05-01

    Aerospace SRQA and management techniques, principally those developed and used by the NASA Lyndon B. Johnson Space Center on the manned space flight programs, have been assessed for possible application by the DOE and the DOE-contractors to the high level radioactive waste repository program that results from the implementation of the NWPA of 1982. Those techniques believed to have the greatest potential for usefulness to the DOE and the DOE-contractors have been discussed in detail and are recommended to the DOE for adoption; discussion is provided for the manner in which this transfer of technology can be implemented. Six SRQA techniques and two management techniques are recommended for adoption by the DOE; included with the management techniques is a recommendation for the DOE to include a licensing interface with the NRC in the application of the milestone reviews technique. Three other techniques are recommended for study by the DOE for possible adaptation to the DOE program

  17. Hanford Waste Vitrification Plant Quality Assurance Program description for high-level waste form development and qualification. Revision 3, Part 2

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-01

    The Hanford Waste Vitrification Plant Project has been established to convert the high-level radioactive waste associated with nuclear defense production at the Hanford Site into a waste form suitable for disposal in a deep geologic repository. The Hanford Waste Vitrification Plant will mix processed radioactive waste with borosilicate material, then heat the mixture to its melting point (vitrification) to form a glass-like substance that traps the radionuclides in the glass matrix upon cooling. The Hanford Waste Vitrification Plant Quality Assurance Program has been established to support the mission of the Hanford Waste Vitrification Plant. This Quality Assurance Program Description has been written to document the Hanford Waste Vitrification Plant Quality Assurance Program.

  18. Scoring methods and results for qualitative evaluation of public health impacts from the Hanford high-level waste tanks. Integrated Risk Assessment Program

    International Nuclear Information System (INIS)

    Buck, J.W.; Gelston, G.M.; Farris, W.T.

    1995-09-01

    The objective of this analysis is to qualitatively rank the Hanford Site high-level waste (HLW) tanks according to their potential public health impacts through various (groundwater, surface water, and atmospheric) exposure pathways. Data from all 149 single-shell tanks (SSTs) and 23 of the 28 double-shell tanks (DSTs) in the Tank Waste Remediation System (TWRS) Program were analyzed for chemical and radiological carcinogenic as well as chemical noncarcinogenic health impacts. The preliminary aggregate score (PAS) ranking system was used to generate information from various release scenarios. Results based on the PAS ranking values should be considered relative health impacts rather than absolute risk values

  19. Development of grouting technologies for geological disposal of high level waste in Japan (1). Overall program and application of developed technologies

    International Nuclear Information System (INIS)

    Fujita, Tomoo; Sasamoto, Hiroshi; Sugita, Yutaka; Matsui, Hiroya

    2013-01-01

    The Japan Atomic Energy Agency started a grout project for geological disposal of high-level radioactive waste (HLW) in 2007. The aim of the project was to develop new grouting technologies and grout materials and also to develop models for performance assessments, prediction of the long-term radionuclide migration and identify detrimental changes in the host rock by the grout material leachate. This study presents the overall program and the application of key engineering technologies to the construction and operation of an underground facility for the geological disposal of HLW, with particular emphasis on the long-term effects of grout materials. (author)

  20. The JAERI program for development of safety assessment models and acquisition of data needed for assessment of geological disposal of high-level radioactive wastes

    International Nuclear Information System (INIS)

    Matsuzuru, H.

    1991-01-01

    JAERI is conducting an R and D program for the development of safety assessment methodologies and the acquisition of data needed for the assessment of geologic disposal of high-level radioactive wastes, aiming at elucidating the feasibility of geologic disposal in Japan. The paper describes current R and D activities to develop interim versions of both deterministic and probabilistic methodologies based on a normal evolution scenario, to collect data concerning engineered barriers and geologic media through field and laboratory experiments, and to validate the models used in the methodologies. 2 figs., 2 refs

  1. Portable programming on parallel/networked computers using the Application Portable Parallel Library (APPL)

    Science.gov (United States)

    Quealy, Angela; Cole, Gary L.; Blech, Richard A.

    1993-01-01

    The Application Portable Parallel Library (APPL) is a subroutine-based library of communication primitives that is callable from applications written in FORTRAN or C. APPL provides a consistent programmer interface to a variety of distributed and shared-memory multiprocessor MIMD machines. The objective of APPL is to minimize the effort required to move parallel applications from one machine to another, or to a network of homogeneous machines. APPL encompasses many of the message-passing primitives that are currently available on commercial multiprocessor systems. This paper describes APPL (version 2.3.1) and its usage, reports the status of the APPL project, and indicates possible directions for the future. Several applications using APPL are discussed, as well as performance and overhead results.
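    APPL's central idea, a single consistent programmer interface with the machine-specific transport hidden behind it, can be sketched with stdlib queues. This is a hypothetical illustration; APPL's real primitives and names are not reproduced here.

```python
# Hypothetical sketch of APPL's central idea: applications call one
# consistent send/recv interface, and the machine-specific transport is
# hidden behind it. Here the "transport" is simply in-process queues.
import queue

class Comm:
    def __init__(self, nprocs):
        self.mailboxes = [queue.Queue() for _ in range(nprocs)]

    def send(self, dest, msg):
        self.mailboxes[dest].put(msg)      # would be a machine-specific call

    def recv(self, me):
        return self.mailboxes[me].get()    # blocks until a message arrives

comm = Comm(2)
comm.send(1, ("rank0", [1.0, 2.0]))        # "process 0" sends to "process 1"
tag, data = comm.recv(1)
print(tag, data)  # → rank0 [1.0, 2.0]
```

Porting an application then means reimplementing `send`/`recv` over the target machine's primitives while the application code stays unchanged.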

  2. Comparison of national programs and regulations for the management of spent fuel and disposal of high-level waste in seven countries

    International Nuclear Information System (INIS)

    Numark, N.J.; Mattson, R.J.; Gaunt, J.

    1986-01-01

    This paper describes programs and regulatory requirements affecting the management of spent fuel and disposal of high-level radioactive waste in seven nations with large nuclear power programs. The comparison is intended to illustrate that the range of spent fuel management options is influenced by certain technical and political constraints. It begins by providing overall nuclear fuel cycle facts for each country, including nuclear generating capacities, rates of spent fuel discharge, and policies on spent fuel reprocessing. Spent fuel storage techniques and reprocessing activities are compared in light of constraints such as fuel type. Waste disposal investigations are described, including a summary of the status of regulatory developments affecting repository siting and disposal. A timeline is provided to illustrate the principal milestones in spent fuel management and waste disposal in each country. Finally, policies linking nuclear power licensing and development to nuclear waste management milestones and R and D progress are discussed

  3. MPI_XSTAR: MPI-based Parallelization of the XSTAR Photoionization Program

    Science.gov (United States)

    Danehkar, Ashkbiz; Nowak, Michael A.; Lee, Julia C.; Smith, Randall K.

    2018-02-01

    We describe a program for the parallel implementation of multiple runs of XSTAR, a photoionization code that is used to predict the physical properties of an ionized gas from its emission and/or absorption lines. The parallelization program, called MPI_XSTAR, has been developed and implemented in the C++ language by using the Message Passing Interface (MPI) protocol, a conventional standard of parallel computing. We have benchmarked parallel multiprocessing executions of XSTAR, using MPI_XSTAR, against a serial execution of XSTAR, in terms of the parallelization speedup and the computing resource efficiency. Our experience indicates that the parallel execution runs significantly faster than the serial execution, although the efficiency in terms of computing resource usage decreases as the number of processors increases.
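    The manager/worker pattern that MPI_XSTAR applies, dynamically handing independent XSTAR runs to idle processors, can be sketched with stdlib threads. This is a hedged illustration: MPI itself and the real xstar invocation are replaced by placeholders.

```python
# Stdlib sketch of MPI_XSTAR's manager/worker scheme (MPI itself is not
# used here): idle workers pull the next pending run from a shared queue,
# so long runs do not hold up short ones. run_model is a placeholder for
# one XSTAR invocation on one parameter set.
import queue
import threading

def run_model(params):
    return params["xi"] * 2              # stand-in for a real XSTAR run

def parallel_runs(param_grid, nworkers=4):
    tasks = queue.Queue()
    for params in param_grid:
        tasks.put(params)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                params = tasks.get_nowait()  # dynamic assignment
            except queue.Empty:
                return
            out = run_model(params)
            with lock:
                results.append((params["xi"], out))

    threads = [threading.Thread(target=worker) for _ in range(nworkers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results)

print(parallel_runs([{"xi": i} for i in range(5)]))
# → [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8)]
```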

  4. On the effective parallel programming of multi-core processors

    NARCIS (Netherlands)

    Varbanescu, A.L.

    2010-01-01

    Multi-core processors are considered now the only feasible alternative to the large single-core processors which have become limited by technological aspects such as power consumption and heat dissipation. However, due to their inherent parallel structure and their diversity, multi-cores are

  5. HPC parallel programming model for gyrokinetic MHD simulation

    International Nuclear Information System (INIS)

    Naitou, Hiroshi; Yamada, Yusuke; Tokuda, Shinji; Ishii, Yasutomo; Yagi, Masatoshi

    2011-01-01

    The 3-dimensional gyrokinetic PIC (particle-in-cell) code for MHD simulation, Gpic-MHD, was installed on SR16000 (“Plasma Simulator”), which is a scalar cluster system consisting of 8,192 logical cores. The Gpic-MHD code advances particle and field quantities in time. In order to distribute calculations over a large number of logical cores, the total simulation domain in cylindrical geometry was broken up into N_DD-r × N_DD-z (number of radial decompositions times number of axial decompositions) small domains including approximately the same number of particles. The axial direction was uniformly decomposed, while the radial direction was non-uniformly decomposed. N_RP replicas (copies) of each decomposed domain were used (“particle decomposition”). The hybrid parallelization model of multi-threads and multi-processes was employed: threads were parallelized by auto-parallelization, and N_DD-r × N_DD-z × N_RP processes were parallelized by MPI (message-passing interface). The parallelization performance of Gpic-MHD was investigated for a medium-size system of N_r × N_θ × N_z = 1025 × 128 × 128 mesh with 4.196 or 8.192 billion particles. The highest speed for a fixed number of logical cores was obtained for two threads, the maximum number of axial domains N_DD-z, and the optimum combination of N_DD-r and N_RP. The observed optimum speeds demonstrated good scaling up to 8,192 logical cores. (author)
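    The process layout described above, radial times axial spatial domains each replicated for particle decomposition, implies a mapping from a process rank to its (radial, axial, replica) indices. A minimal sketch follows; the rank ordering (replicas outermost) is an assumption for illustration, not Gpic-MHD's documented layout.

```python
# Sketch of a process layout like the one described above: N_DD-r x N_DD-z
# spatial domains, each replicated N_RP times ("particle decomposition").
# The rank ordering (replicas outermost) is an assumption for illustration.

def rank_to_domain(rank, n_dd_r, n_dd_z, n_rp):
    """Map an MPI-style rank to (radial, axial, replica) domain indices."""
    assert 0 <= rank < n_dd_r * n_dd_z * n_rp
    replica, rest = divmod(rank, n_dd_r * n_dd_z)
    i_r, i_z = divmod(rest, n_dd_z)
    return i_r, i_z, replica

# 2 radial x 4 axial domains with 2 replicas each -> 16 processes
layout = [rank_to_domain(r, 2, 4, 2) for r in range(16)]
print(layout[0], layout[8])  # → (0, 0, 0) (0, 0, 1)
```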

  6. Ageing management program for the Spanish low and intermediate level waste disposal and spent fuel and high-level waste centralised storage facilities

    Science.gov (United States)

    Zuloaga, P.; Ordoñez, M.; Andrade, C.; Castellote, M.

    2011-04-01

    The generic design of the centralised spent fuel storage facility was approved by the Spanish Safety Authority in 2006. The planned operational life is 60 years, while the design service life is 100 years. Durability studies and surveillance of the behaviour have been considered from the initial design steps, taking into account the accessibility limitations and temperatures involved. The paper presents an overview of the ageing management program set in support of the Performance Assessment and Safety Review of El Cabril low and intermediate level waste (LILW) disposal facility. Based on the experience gained for LILW, ENRESA has developed a preliminary definition of the Ageing Management Plan for the Centralised Interim Storage Facility of spent Fuel and High Level Waste (HLW), which addresses the behaviour of spent fuel, its retrievability, the confinement system and the reinforced concrete structure. It includes test plans and surveillance design considerations, based on the El Cabril LILW disposal facility.

  7. Ageing management program for the Spanish low and intermediate level waste disposal and spent fuel and high-level waste centralised storage facilities

    Directory of Open Access Journals (Sweden)

    Andrade C.

    2011-04-01

    Full Text Available The generic design of the centralised spent fuel storage facility was approved by the Spanish Safety Authority in 2006. The planned operational life is 60 years, while the design service life is 100 years. Durability studies and surveillance of the behaviour have been considered from the initial design steps, taking into account the accessibility limitations and temperatures involved. The paper presents an overview of the ageing management program set in support of the Performance Assessment and Safety Review of El Cabril low and intermediate level waste (LILW) disposal facility. Based on the experience gained for LILW, ENRESA has developed a preliminary definition of the Ageing Management Plan for the Centralised Interim Storage Facility of spent Fuel and High Level Waste (HLW), which addresses the behaviour of spent fuel, its retrievability, the confinement system and the reinforced concrete structure. It includes test plans and surveillance design considerations, based on the El Cabril LILW disposal facility.

  8. Japan-Australia co-operative program on research and development of technology for the management of high level radioactive wastes. Final report 1985 to 1998

    Energy Technology Data Exchange (ETDEWEB)

    Hart, K.; Vance, E.; Lumpkin, G. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia); Mitamura, H.; Banba, T. [Japan Atomic Energy Research Inst. Tokai, Ibaraki (Japan)

    1998-12-01

    The overall aim of the Co-operative Program has been to promote the exchange of information on technology for the management of High-Level Wastes (HLW) and to encourage research and development relevant to such technology. During the 13 years that the Program has been carried out, HLW management strategies have matured and developed internationally, and Japan has commenced construction of a domestic reprocessing and vitrification facility for HLW. The preferred HLW management strategy is a national decision. Many countries are using vitrification, direct disposal of spent fuel or a combination of both to handle their existing wastes, whereas others have deferred the decision. The work carried out in the Co-operative Program provides strong scientific evidence that the durability of ceramic waste forms is not significantly affected by radiation damage and that high loadings of actinide elements can be incorporated into specially designed ceramic waste forms. Moreover, natural minerals have been shown to remain as closed systems for U and Th for up to 2.5 billion years. All of these results give confidence in the ability of second-generation waste forms, such as Synroc, to handle future waste arisings that may not be suitable for vitrification. 87 refs., 15 tabs., 22 figs.

  9. High-level verification

    CERN Document Server

    Lerner, Sorin; Kundu, Sudipta

    2011-01-01

    Given the growing size and heterogeneity of Systems on Chip (SOC), the design process from initial specification to chip fabrication has become increasingly complex. This growing complexity provides incentive for designers to use high-level languages such as C, SystemC, and SystemVerilog for system-level design. While a major goal of these high-level languages is to enable verification at a higher level of abstraction, allowing early exploration of system-level designs, the focus so far for validation purposes has been on traditional testing techniques such as random testing and scenario-based

  10. Communications oriented programming of parallel iterative solutions of sparse linear systems

    Science.gov (United States)

    Patrick, M. L.; Pratt, T. W.

    1986-01-01

    Parallel algorithms are developed for a class of scientific computational problems by partitioning the problems into smaller problems which may be solved concurrently. The effectiveness of the resulting parallel solutions is determined by the amount and frequency of communication and synchronization and the extent to which communication can be overlapped with computation. Three different parallel algorithms for solving the same class of problems are presented, and their effectiveness is analyzed from this point of view. The algorithms are programmed using a new programming environment. Run-time statistics and experience obtained from the execution of these programs assist in measuring the effectiveness of these algorithms.
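    The partitioning idea can be made concrete with a toy case: a 1D Jacobi iteration split into two blocks that exchange only their boundary (ghost) values each sweep, which is exactly the kind of communication such algorithms seek to minimize and overlap. The sequential sketch below is illustrative; the split update reproduces plain Jacobi exactly.

```python
# Sequential sketch of the partitioning idea: a 1D Jacobi sweep split into
# two blocks that exchange only their boundary (ghost) values each
# iteration. The split update reproduces plain Jacobi exactly.

def jacobi_two_blocks(u, iters):
    half = len(u) // 2
    left, right = u[:half], u[half:]
    for _ in range(iters):
        ghost_l, ghost_r = right[0], left[-1]   # the only values exchanged
        new_left = [left[0]] + [
            0.5 * (left[i - 1] + (left[i + 1] if i + 1 < half else ghost_l))
            for i in range(1, half)]
        new_right = [
            0.5 * ((right[i - 1] if i > 0 else ghost_r) + right[i + 1])
            for i in range(len(right) - 1)] + [right[-1]]
        left, right = new_left, new_right
    return left + right

# Fixed ends u[0]=0, u[7]=1: Jacobi converges to the linear profile i/7.
res = jacobi_two_blocks([0.0] * 7 + [1.0], 500)
print(round(res[3], 3))  # → 0.429
```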

  11. Compiler and Runtime Support for Programming in Adaptive Parallel Environments

    Science.gov (United States)

    1998-10-15

    no other job is waiting for resources, and use a smaller number of processors when other jobs need resources. Setia et al. [15, 20] have shown that such... [15] Vijay K. Naik, Sanjeev Setia, and Mark Squillante. Performance analysis of job scheduling policies in parallel supercomputing environments. In... on networks of heterogeneous workstations. Technical Report CSE-94-012, Oregon Graduate Institute of Science and Technology, 1994. [20] Sanjeev Setia

  12. User's guide of parallel program development environment (PPDE). The 2nd edition

    International Nuclear Information System (INIS)

    Ueno, Hirokazu; Takemiya, Hiroshi; Imamura, Toshiyuki; Koide, Hiroshi; Matsuda, Katsuyuki; Higuchi, Kenji; Hirayama, Toshio; Ohta, Hirofumi

    2000-03-01

    The STA basic system has been enhanced to accelerate support for parallel programming on heterogeneous parallel computers, through a series of R and D on the technology of parallel processing. The enhancement has been made by extending the function of the PPDE, the Parallel Program Development Environment in the STA basic system. The extended PPDE provides functions for: 1) the automatic creation of a 'makefile' and a shell script file for its execution; 2) multi-tool execution, which lets tools on heterogeneous computers carry out a task on a given computer with one operation; and 3) mirror composition, which reflects the editing results of a file on one computer into all related files on other computers. These additional functions enhance the efficiency of program development across computers. More functions have been added to the PPDE to support parallel program development. New functions were also designed to complement an HPF translator and a parallelizing support tool when working together, so that a sequential program is efficiently converted to a parallel program. This report describes the use of the extended PPDE. (author)

  13. Concurrent Collections (CnC): A new approach to parallel programming

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    A common approach in designing parallel languages is to provide some high level handles to manipulate the use of the parallel platform. This exposes some aspects of the target platform, for example, shared vs. distributed memory. It may expose some but not all types of parallelism, for example, data parallelism but not task parallelism. This approach must find a balance between the desire to provide a simple view for the domain expert and provide sufficient power for tuning. This is hard for any given architecture and harder if the language is to apply to a range of architectures. Either simplicity or power is lost. Instead of viewing the language design problem as one of providing the programmer with high level handles, we view the problem as one of designing an interface. On one side of this interface is the programmer (domain expert) who knows the application but needs no knowledge of any aspects of the platform. On the other side of the interface is the performance expert (programmer o...

  14. Protocol-Based Verification of Message-Passing Parallel Programs

    DEFF Research Database (Denmark)

    López-Acosta, Hugo-Andrés; Marques, Eduardo R. B.; Martins, Francisco

    2015-01-01

    We present ParTypes, a type-based methodology for the verification of Message Passing Interface (MPI) programs written in the C programming language. The aim is to statically verify programs against protocol specifications, enforcing properties such as fidelity and absence of deadlocks. We develo...
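    The flavour of protocol conformance can be sketched in a few lines. The toy checker below is hypothetical and dynamic, far simpler than ParTypes' static type-based verification: a protocol is a sequence of expected operations (each with its root rank, all values made up), and a recorded program trace must match it exactly.

```python
# Toy protocol: an ordered list of (operation, root rank) pairs that a
# conforming MPI-style program must perform.
PROTOCOL = [
    ("broadcast", 0),   # rank 0 broadcasts the problem size
    ("scatter", 0),     # rank 0 scatters the input data
    ("reduce", 0),      # all ranks reduce the result to rank 0
]

def conforms(trace, protocol):
    # A trace conforms iff it performs exactly the prescribed operations
    # in the prescribed order.
    return len(trace) == len(protocol) and all(
        op == expected for op, expected in zip(trace, protocol))

good_trace = [("broadcast", 0), ("scatter", 0), ("reduce", 0)]
bad_trace = [("broadcast", 0), ("reduce", 0), ("scatter", 0)]  # reordered
```

A static verifier such as ParTypes establishes the same property for every execution without running the program, which is what rules out deadlocks up front.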

  15. General Algorithm (High level)

    Indian Academy of Sciences (India)

    General Algorithm (High level). Iteratively. Use Tightness Property to remove points of P1,..,Pi. Use random sampling to get a Random Sample (of enough points) from the next largest cluster, Pi+1. Use the Random Sampling Procedure to approximate ci+1 using the ...

  16. Feasibility studies for a high energy physics MC program on massive parallel platforms

    International Nuclear Information System (INIS)

    Bertolotto, L.M.; Peach, K.J.; Apostolakis, J.; Bruschini, C.E.; Calafiura, P.; Gagliardi, F.; Metcalf, M.; Norton, A.; Panzer-Steindel, B.

    1994-01-01

    The parallelization of a Monte Carlo program for the NA48 experiment is presented. As a first step, a task-farming structure was realized. Building on this, a further step making use of a distributed database for showers in the electromagnetic calorimeter was implemented. Further possibilities for using parallel processing for a quasi-real-time calibration of the calorimeter are described

  17. Cell verification of parallel burnup calculation program MCBMPI based on MPI

    International Nuclear Information System (INIS)

    Yang Wankui; Liu Yaoguang; Ma Jimin; Wang Guanbo; Yang Xin; She Ding

    2014-01-01

    The parallel burnup calculation program MCBMPI was developed with a modular design. The parallelized MCNP5 program MCNP5MPI serves as the neutron transport calculation module, and a combination of three methods is used to solve the burnup equations: the matrix exponential technique, the TTA analytical solution, and Gauss-Seidel iteration. An MPI zone-decomposition strategy provides the parallelism. The program system consists only of MCNP5MPI and a burnup subroutine; the latter performs three main functions: zone decomposition, nuclide transmutation and decay, and data exchange with MCNP5MPI. The program was verified against the pressurized water reactor (PWR) cell burnup benchmark. The results show that the program can be applied to burnup calculations with multiple zones, and that computation efficiency improves significantly as computer hardware develops. (authors)
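    The matrix exponential technique mentioned above solves the linear burnup/decay system dN/dt = AN as N(t) = exp(At)N(0). A minimal sketch for a hypothetical two-nuclide decay chain (the decay constants and time step are made-up values), cross-checked against the analytic Bateman solution:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_expm(A, terms=25, squarings=10):
    # exp(A) via a Taylor series on A / 2^s followed by s repeated squarings
    n = len(A)
    s = 2 ** squarings
    As = [[a / s for a in row] for row in A]
    E = [[float(i == j) for j in range(n)] for i in range(n)]      # identity
    term = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, As)]  # As^k / k!
        E = [[E[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    for _ in range(squarings):
        E = mat_mul(E, E)
    return E

# Two-nuclide chain: N1 --l1--> N2 --l2--> (stable), so dN/dt = A N
l1, l2, t = 0.8, 0.3, 2.0       # hypothetical decay constants and time step
A = [[-l1, 0.0],
     [l1, -l2]]
P = mat_expm([[a * t for a in row] for row in A])
N0 = [1.0, 0.0]
N = [sum(P[i][j] * N0[j] for j in range(2)) for i in range(2)]

# Bateman analytic solution for cross-checking the matrix exponential
N1_exact = math.exp(-l1 * t)
N2_exact = l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))
```

Production codes apply the same idea to transmutation matrices with hundreds of nuclides per zone, which is what makes the per-zone MPI decomposition effective.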

  18. 76 FR 62808 - Pilot Program for Parallel Review of Medical Products

    Science.gov (United States)

    2011-10-11

    ... voluntary participation in the pilot program, as well as the guiding principles the Agencies intend to... 57045), parallel review is intended to reduce the time between FDA marketing approval and CMS national...

  19. The technical feasibility of uranium enrichment for nuclear bomb construction at the parallel nuclear program plant

    International Nuclear Information System (INIS)

    Rosa, L.P.

    1990-01-01

    The role of the Parallel Nuclear Program in Brazil and the feasibility of uranium enrichment for nuclear bomb construction are discussed. The program involves two research centers, one belonging to the Brazilian navy and another to the aeronautics. Other Brazilian institutes, such as CTA, IPEN, COPESP and CETEX, are also taking part in the program. (A.C.A.S.)

  20. Development and benchmark verification of a parallelized Monte Carlo burnup calculation program MCBMPI

    International Nuclear Information System (INIS)

    Yang Wankui; Liu Yaoguang; Ma Jimin; Yang Xin; Wang Guanbo

    2014-01-01

    MCBMPI, a parallelized burnup calculation program, was developed with a modular design. The neutron transport calculation module employs the parallelized MCNP5 program MCNP5MPI, and the burnup calculation module employs ORIGEN2, with an MPI zone-decomposition parallel strategy. The program system consists only of MCNP5MPI and an interface subroutine, which performs three main functions: zone decomposition, nuclide transmutation and decay, and data exchange with MCNP5MPI. The program was verified against the Pressurized Water Reactor (PWR) cell burnup benchmark. The results show that the program can be applied to burnup calculations with multiple zones, and that computation efficiency improves significantly as computer hardware develops. (authors)

  1. Vdebug: debugging tool for parallel scientific programs. Design report on vdebug

    International Nuclear Information System (INIS)

    Matsuda, Katsuyuki; Takemiya, Hiroshi

    2000-02-01

    We report on a debugging tool called vdebug which supports the debugging of parallel scientific simulation programs. Such programs are difficult to debug with existing debuggers, because existing debuggers show data values as text, and the volume of data the programs generate is too large for users to check in that form. To alleviate this, we have developed vdebug, which makes it possible to check the validity of large amounts of data by displaying the values visually. Although vdebug was originally restricted to sequential programs, we have made it applicable to parallel programs by adding the ability to merge and visualize data distributed across the program instances on each computer node. vdebug now works on seven kinds of parallel computers. In this report, we describe the design of vdebug. (author)

  2. User's guide of parallel program development environment (PPDE). The 2nd edition

    Energy Technology Data Exchange (ETDEWEB)

    Ueno, Hirokazu; Takemiya, Hiroshi; Imamura, Toshiyuki; Koide, Hiroshi; Matsuda, Katsuyuki; Higuchi, Kenji; Hirayama, Toshio [Center for Promotion of Computational Science and Engineering, Japan Atomic Energy Research Institute, Tokyo (Japan); Ohta, Hirofumi [Hitachi Ltd., Tokyo (Japan)

    2000-03-01

    The STA basic system has been enhanced to support parallel programming on heterogeneous parallel computers, building on a series of R and D efforts on parallel processing technology. The enhancement extends the functions of the PPDE, the Parallel Program Development Environment of the STA basic system. The extended PPDE provides: 1) automatic creation of a 'makefile' and a shell script file for its execution, 2) multi-tool execution, which launches the tools on heterogeneous computers with a single operation, and 3) mirror composition, which propagates edits of a file on one computer to all related files on other computers. These additional functions improve the efficiency of program development spanning several computers. Further functions have been added to the PPDE to support parallel program development, and new functions were designed so that an HPF translator and a parallelizing support tool can work together to convert a sequential program efficiently into a parallel program. This report describes the use of the extended PPDE. (author)

  4. ALICE High Level Trigger

    CERN Multimedia

    Alt, T

    2013-01-01

    The ALICE High Level Trigger (HLT) is a computing farm designed and built for the real-time, online processing of the raw data produced by the ALICE detectors. Events are fully reconstructed from the raw data, analyzed and compressed. The analysis summary, together with the compressed data and a trigger decision, is sent to the DAQ. In addition, the reconstruction of the events allows for online monitoring of physical observables, and this information is provided to the Data Quality Monitor (DQM). The HLT can process event rates of up to 2 kHz for proton-proton collisions and 200 Hz for central Pb-Pb collisions.

  5. It's not too late for the harpy eagle (Harpia harpyja): high levels of genetic diversity and differentiation can fuel conservation programs.

    Directory of Open Access Journals (Sweden)

    Heather R L Lerner

    2009-10-01

    The harpy eagle (Harpia harpyja) is the largest Neotropical bird of prey and is threatened by human persecution and by habitat loss and fragmentation. Current conservation strategies include local education, captive rearing and reintroduction, and protection or creation of trans-national habitat blocks and corridors. Baseline genetic data prior to reintroduction of captive-bred stock are essential for guiding such efforts but had not been gathered previously. We assessed levels of genetic diversity, population structure and demographic history for harpy eagles using samples collected throughout a large portion of their geographic distribution in Central America (n = 32) and South America (n = 31). Based on 417 bp of mitochondrial control region sequence data, relatively high levels of haplotype and nucleotide diversity were estimated for both Central and South America, although haplotype diversity was significantly higher for South America. Historical restriction of gene flow across the Andes (i.e. between our Central and South American subgroups) is supported by coalescent analyses, the haplotype network and significant F(ST) values; however, reciprocally monophyletic lineages do not correspond to geographical locations in maximum likelihood analyses. A sudden population expansion for South America is indicated by a mismatch distribution analysis, and further supported by significant (p<0.05) negative values of Fu and Li's D* and F*, and Fu's F(S). This expansion, estimated at approximately 60 000 years BP (99 000-36 000 years BP, 95% CI), encompasses a transition from a warm and dry time period prior to 50 000 years BP to an interval of maximum precipitation (50 000-36 000 years BP). Notably, this time period precedes the climatic and habitat changes associated with the last glacial maximum. In contrast, a multimodal distribution of haplotypes was observed for Central America, suggesting either population equilibrium or a recent decline. High levels of

  6. Process-Oriented Parallel Programming with an Application to Data-Intensive Computing

    OpenAIRE

    Givelberg, Edward

    2014-01-01

    We introduce process-oriented programming as a natural extension of object-oriented programming for parallel computing. It is based on the observation that every class of an object-oriented language can be instantiated as a process, accessible via a remote pointer. The introduction of process pointers requires no syntax extension, identifies processes with programming objects, and enables processes to exchange information simply by executing remote methods. Process-oriented programming is a h...

  7. Programming a massively parallel, computation universal system: static behavior

    Energy Technology Data Exchange (ETDEWEB)

    Lapedes, A.; Farber, R.

    1986-01-01

    In previous work by the authors, the ''optimum finding'' properties of Hopfield neural nets were applied to the nets themselves to create a ''neural compiler.'' This was done in such a way that the problem of programming the attractors of one neural net (called the Slave net) was expressed as an optimization problem that was in turn solved by a second neural net (the Master net). In this series of papers that approach is extended to programming nets that contain interneurons (sometimes called ''hidden neurons''), and thus deals with nets capable of universal computation. 22 refs.
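    The attractor programming discussed above builds on the standard Hopfield construction, in which Hebbian weights make stored patterns fixed points of the update rule. A minimal sketch of that underlying mechanism (a single stored pattern with synchronous updates; not the Master/Slave compiler scheme itself):

```python
def sign(x):
    return 1 if x >= 0 else -1

def hopfield_recall(patterns, probe, steps=5):
    # Hebbian weights: W[i][j] = sum over patterns of p[i]*p[j], zero diagonal.
    n = len(probe)
    W = [[0 if i == j else sum(p[i] * p[j] for p in patterns)
          for j in range(n)] for i in range(n)]
    s = list(probe)
    for _ in range(steps):
        # Synchronous update: each unit takes the sign of its weighted input.
        s = [sign(sum(W[i][j] * s[j] for j in range(n))) for i in range(n)]
    return s

stored = [1, 1, -1, -1, 1, -1, 1, -1]    # the programmed attractor
noisy = [1, 1, -1, -1, 1, -1, -1, -1]    # probe with one bit flipped
recalled = hopfield_recall([stored], noisy)
```

The cited work's "neural compiler" treats the choice of such weights itself as an optimization problem, solved by a second (Master) network.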

  8. Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism,

    Science.gov (United States)

    1985-10-07


  9. COMPSs-Mobile: parallel programming for mobile-cloud computing

    OpenAIRE

    Lordan Gomis, Francesc-Josep; Badia Sala, Rosa Maria

    2016-01-01

    The advent of Cloud and the popularization of mobile devices have led us to a shift in computing access. Computing users will have an interaction display while the real computation will be performed remotely, in the Cloud. COMPSs-Mobile is a framework that aims to ease the development of energy-efficient and high-performing applications for this environment. The framework provides an infrastructure-unaware programming model that allows developers to code regular Android applications that, ...

  10. Compiling the parallel programming language NestStep to the CELL processor

    OpenAIRE

    Holm, Magnus

    2010-01-01

    The goal of this project is to create a source-to-source compiler which will translate NestStep code to C code. The compiler's job is to replace NestStep constructs with a series of function calls to the NestStep runtime system. NestStep is a parallel programming language extension based on the BSP model. It adds constructs for parallel programming on top of an imperative programming language. For this project, only constructs extending the C language are relevant. The output code will compil...

  11. High level nuclear wastes

    International Nuclear Information System (INIS)

    Lopez Perez, B.

    1987-01-01

    The transformations undergone by nuclear fuel during burn-up in power reactors, for burn-up levels of 33,000 MWd/th, are considered. Graphs and data on the variation of radioactivity with cooling time and on the heat power of the irradiated fuel are presented. Likewise, the fuel cycle in light water reactors is presented and the alternatives for nuclear waste management are discussed. A brief description of the management of spent fuel as a high level nuclear waste is given, explaining reprocessing and giving data on the fission products and their radioactivities, which must be considered in vitrification processes. Both alternatives coincide on the final storage of the nuclear waste in deep geological repositories. The countries supporting reprocessing are indicated, and the Spanish programme defined in the Plan Energetico Nacional (PEN) is briefly reviewed. (author) 8 figs., 4 tabs

  12. 76 FR 66309 - Pilot Program for Parallel Review of Medical Products; Correction

    Science.gov (United States)

    2011-10-26

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare and Medicaid Services [CMS-3180-N2] Food and Drug Administration [Docket No. FDA-2010-N-0308] Pilot Program for Parallel Review of Medical... 11, 2011 (76 FR 62808). The document announced a pilot program for sponsors of innovative device...

  13. A Tool for Performance Modeling of Parallel Programs

    Directory of Open Access Journals (Sweden)

    J.A. González

    2003-01-01

    Current analytical models for performance prediction try to characterize the behavior of actual machines through a small set of parameters. In practice, substantial deviations are observed, due to factors such as memory hierarchies and network latency. A natural approach is to associate a different proportionality constant with each basic block and, analogously, different latencies and bandwidths with each "communication block". Unfortunately, this approach implies that the parameters must be evaluated for each algorithm. This is a heavy task, involving experiment design, timing, statistics, pattern recognition and multi-parameter fitting algorithms, so software support is required. We present a compiler that takes as source a C program annotated with complexity formulas and produces as output an instrumented code. The trace files obtained from executing the resulting code are analyzed with an interactive interpreter, giving us, among other information, the values of those parameters.
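    For a single communication block, the parameter evaluation described above reduces to fitting a latency/bandwidth model T(n) = α + βn to measured timings. A minimal least-squares sketch with synthetic data (the 50 µs latency and 10 ns/byte cost are made-up values):

```python
def fit_alpha_beta(sizes, times):
    # Ordinary least squares for T(n) = alpha + beta * n:
    # alpha ~ per-message latency, beta ~ per-byte cost (inverse bandwidth).
    k = len(sizes)
    sx, sy = sum(sizes), sum(times)
    sxx = sum(x * x for x in sizes)
    sxy = sum(x * y for x, y in zip(sizes, times))
    beta = (k * sxy - sx * sy) / (k * sxx - sx * sx)
    alpha = (sy - beta * sx) / k
    return alpha, beta

# Synthetic timings for a link with 50 us latency and 10 ns/byte cost
sizes = [1024, 4096, 16384, 65536]
times = [50e-6 + 10e-9 * n for n in sizes]
alpha, beta = fit_alpha_beta(sizes, times)
```

Real traces are noisy, which is why the tool in the record couples the fit with experiment design and statistics rather than a single regression.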

  14. A multithreaded parallel implementation of a dynamic programming algorithm for sequence comparison.

    Science.gov (United States)

    Martins, W S; Del Cuvillo, J B; Useche, F J; Theobald, K B; Gao, G R

    2001-01-01

    This paper discusses the issues involved in implementing a dynamic programming algorithm for biological sequence comparison on a general-purpose parallel computing platform based on a fine-grain event-driven multithreaded program execution model. Fine-grain multithreading permits efficient parallelism exploitation in this application both by taking advantage of asynchronous point-to-point synchronizations and communication with low overheads and by effectively tolerating latency through the overlapping of computation and communication. We have implemented our scheme on EARTH, a fine-grain event-driven multithreaded execution and architecture model which has been ported to a number of parallel machines with off-the-shelf processors. Our experimental results show that the dynamic programming algorithm can be efficiently implemented on EARTH systems with high performance (e.g., speedup of 90 on 120 nodes), good programmability and reasonable cost.
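    The parallelism exploited in such dynamic programming comparisons comes from the anti-diagonal structure of the recurrence: every cell with i + j = d depends only on the two previous anti-diagonals, so all cells of one anti-diagonal are mutually independent. A sequential sketch of this wavefront order for plain edit distance (a simpler scoring scheme than the paper's):

```python
def edit_distance_wavefront(a, b):
    # Cells on one anti-diagonal (i + j == d) depend only on the two
    # previous anti-diagonals, so each inner loop could run in parallel.
    m, n = len(a), len(b)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for d in range(m + n + 1):
        for i in range(max(0, d - n), min(m, d) + 1):
            j = d - i
            if i == 0:
                D[0][j] = j                      # insert j characters
            elif j == 0:
                D[i][0] = i                      # delete i characters
            else:
                cost = 0 if a[i - 1] == b[j - 1] else 1
                D[i][j] = min(D[i - 1][j] + 1,   # deletion
                              D[i][j - 1] + 1,   # insertion
                              D[i - 1][j - 1] + cost)  # match/substitute
    return D[m][n]
```

A multithreaded runtime such as EARTH can assign the cells of each anti-diagonal to fine-grain threads and overlap the synchronization between diagonals with computation.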

  15. High performance parallelism pearls 2 multicore and many-core programming approaches

    CERN Document Server

    Jeffers, Jim

    2015-01-01

    High Performance Parallelism Pearls Volume 2 offers another set of examples that demonstrate how to leverage parallelism. Similar to Volume 1, the techniques included here explain how to use processors and coprocessors with the same programming - illustrating the most effective ways to combine Xeon Phi coprocessors with Xeon and other multicore processors. The book includes examples of successful programming efforts, drawn from across industries and domains such as biomed, genetics, finance, manufacturing, imaging, and more. Each chapter in this edited work includes detailed explanations of t

  16. Resolutions of the Coulomb operator: VIII. Parallel implementation using the modern programming language X10.

    Science.gov (United States)

    Limpanuparb, Taweetham; Milthorpe, Josh; Rendell, Alistair P

    2014-10-30

    Use of the modern parallel programming language X10 for computing long-range Coulomb and exchange interactions is presented. By using X10, a partitioned global address space language with support for task parallelism and the explicit representation of data locality, the resolution of the Ewald operator can be parallelized in a straightforward manner including use of both intranode and internode parallelism. We evaluate four different schemes for dynamic load balancing of integral calculation using X10's work stealing runtime, and report performance results for long-range HF energy calculation of large molecule/high quality basis running on up to 1024 cores of a high performance cluster machine. Copyright © 2014 Wiley Periodicals, Inc.
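    The simplest dynamic load balancing scheme has idle workers pull the next task from a shared queue; X10's work-stealing runtime generalizes this with per-worker deques and stealing. A sketch of the shared-queue variant (the numerical task, midpoint-rule integration of x², is an arbitrary stand-in for integral batches):

```python
import queue
import threading

def integrate_chunks(f, chunks, workers=4):
    # Dynamic load balancing: idle workers pull the next chunk from a
    # shared queue (simpler than X10's per-worker work stealing).
    tasks = queue.Queue()
    for c in chunks:
        tasks.put(c)
    total, lock = [0.0], threading.Lock()

    def worker():
        while True:
            try:
                a, b = tasks.get_nowait()
            except queue.Empty:
                return
            n = 1000
            h = (b - a) / n
            s = sum(f(a + (i + 0.5) * h) for i in range(n)) * h  # midpoint rule
            with lock:
                total[0] += s

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]

area = integrate_chunks(lambda x: x * x, [(0.0, 0.5), (0.5, 1.0)])
```

When chunk costs vary widely, as with integral batches over different basis-function shells, dynamic schemes like this beat a static even split.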

  17. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication, and the crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction. 10 refs., 7 figs

  18. Empirical valence bond models for reactive potential energy surfaces: a parallel multilevel genetic program approach.

    Science.gov (United States)

    Bellucci, Michael A; Coker, David F

    2011-07-28

    We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower level genetic programs are used to optimize coevolving populations in parallel while the higher level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increase the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency is tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in gas phase and protic solvent. © 2011 American Institute of Physics
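    For contrast with the two-level PMLGP, a plain single-level evolutionary search can be sketched in a few lines. Here a fixed mutation probability plays the role that the higher-level genetic program would tune automatically; the objective and all constants are made-up values.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def ga_minimize(f, pop_size=30, gens=60, mut_prob=0.3):
    # Plain single-level GA: in the PMLGP, a higher-level genetic program
    # would adapt operator probabilities such as mut_prob during the run.
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)                          # best (lowest f) first
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = (a + b) / 2                  # blend crossover
            if random.random() < mut_prob:
                child += random.gauss(0, 1)      # Gaussian mutation
            children.append(child)
        pop = survivors + children               # elitist replacement
    return min(pop, key=f)

best = ga_minimize(lambda x: (x - 3.0) ** 2)
```

In the paper's setting the individuals are empirical valence bond parameter sets evaluated against ab initio energies, and independent populations of this kind coevolve in parallel.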

  19. Parallel programming of saccades during natural scene viewing: evidence from eye movement positions.

    Science.gov (United States)

    Wu, Esther X W; Gilani, Syed Omer; van Boxtel, Jeroen J A; Amihai, Ido; Chua, Fook Kee; Yen, Shih-Cheng

    2013-10-24

    Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) behaved as if they had been planned before the transition, consistent with the parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.

  20. A language for data-parallel and task parallel programming dedicated to multi-SIMD computers. Contributions to hydrodynamic simulation with lattice gases

    International Nuclear Information System (INIS)

    Pic, Marc Michel

    1995-01-01

    Parallel programming covers task parallelism and data parallelism, and many problems need both. Multi-SIMD computers allow a hierarchical approach to these parallelisms. The T++ language, based on C++, is dedicated to exploiting Multi-SIMD computers through a programming paradigm that extends array programming to task management. Our language introduces arrays of independent tasks to be executed separately (MIMD) on subsets of processors with identical behaviour (SIMD), in order to express the hierarchical inclusion of data parallelism in task parallelism. To manipulate tasks and data in a symmetrical way, we propose meta-operations which behave identically on task arrays and on data arrays. We explain how to implement this language on our parallel computer SYMPHONIE so as to profit from the locally shared memory, the hardware virtualization, and the multiple communication networks. We also analyse a typical application for such an architecture. Finite element schemes for fluid mechanics need powerful parallel computers with substantial floating-point capability. Lattice gases are an alternative for such simulations. Boolean lattice gases are simple, stable and modular and need no floating-point computation, but they suffer from numerical noise. Boltzmann lattice gases offer high computational precision, but they need floating-point arithmetic and are only locally stable. We propose a new scheme, called multi-bit, which keeps the advantages of each boolean model to which it is applied, with high numerical precision and reduced noise. Experiments on viscosity, physical behaviour, noise reduction and spurious invariants are shown, and implementation techniques for parallel Multi-SIMD computers are detailed. (author) [fr

  1. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  2. High Level of Integration in Integrated Disease Management Leads to Higher Usage in the e-Vita Study: Self-Management of Chronic Obstructive Pulmonary Disease With Web-Based Platforms in a Parallel Cohort Design.

    Science.gov (United States)

    Talboom-Kamp, Esther Pwa; Verdijk, Noortje A; Kasteleyn, Marise J; Harmans, Lara M; Talboom, Irvin Jsh; Numans, Mattijs E; Chavannes, Niels H

    2017-05-31

    Worldwide, nearly 3 million people die of chronic obstructive pulmonary disease (COPD) every year. Integrated disease management (IDM) improves disease-specific quality of life and exercise capacity for people with COPD, and can also reduce hospital admissions and hospital days. Self-management of COPD through eHealth interventions has been shown to be an effective way to improve the quality and efficiency of IDM in several settings, but it remains unknown which factors influence the usage of eHealth and the change in patients' behavior. Our study, e-Vita COPD, compares different levels of integration of Web-based self-management platforms into IDM in three primary care settings. The main aim of this study is to analyze the factors that successfully promote the use of a self-management platform for COPD patients. The e-Vita COPD study compares three different approaches to incorporating eHealth via Web-based self-management platforms into IDM of COPD using a parallel cohort design. Three groups integrated the platforms to different degrees. In groups 1 (high integration) and 2 (medium integration), patients were randomized to two levels of personal assistance (high and low assistance); in group 3 there was no integration into disease management (no integration). Every visit to the e-Vita and Zorgdraad COPD Web platforms was tracked objectively by collecting log data (sessions and services). At the first log-in, patients completed a baseline questionnaire. Baseline characteristics were automatically extracted from the log files, including age, gender, education level, scores on the Clinical COPD Questionnaire (CCQ), dyspnea scale (MRC), and quality of life questionnaire (EQ5D). To predict the use of the platforms, multiple linear regression analyses were performed for the different independent variables: integration in IDM (high, medium, none), personal assistance for the participants (high vs low), educational level, and self-efficacy level (General Self

  3. Generalized Analytical Program of Thyristor Phase Control Circuit with Series and Parallel Resonance Load

    OpenAIRE

    Nakanishi, Sen-ichiro; Ishida, Hideaki; Himei, Toyoji

    1981-01-01

    A systematic analytical method is required for the AC phase control circuit built from an inverse-parallel thyristor pair with a series and parallel L-C resonant load, because the phase control action causes abnormal and interesting phenomena, such as an extreme increase of voltage and current, a unique increase and decrease of the contained higher harmonics, and a wide variation of power factor. In this paper, the program for the analysis of the thyristor phase control circuit with...

  4. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2015-01-01

    The aim of this study is to present an approach to introducing pipeline and parallel computing, using a model of a multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so an introduction to pipeline and parallel computing is an essential topic to include. At the same time, the topic is among the most motivating due to its comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. This allows implementing an educational platform for a constructivist learning process, enabling learners to experiment with the provided programming models, to acquire competences in modern scientific research and computational thinking, and to capture the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The programming language C was chosen for developing the programming models, with the message passing interface (MPI) and OpenMP as the parallelization tools.
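    The pipeline model at the heart of such a course can be demonstrated with stages connected by queues: each stage is a thread that consumes items from one queue and feeds the next, exactly the multiphase-queue picture. The stage functions below are arbitrary examples (the paper's own models use C with MPI/OpenMP):

```python
import queue
import threading

def stage(fn, inp, out):
    # One pipeline stage: consume from inp, apply fn, feed out.
    # None acts as the end-of-stream marker and is forwarded downstream.
    while True:
        item = inp.get()
        if item is None:
            out.put(None)
            break
        out.put(fn(item))

q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
stages = [threading.Thread(target=stage, args=(fn, i, o))
          for fn, i, o in [(lambda x: x + 1, q0, q1),    # phase 1
                           (lambda x: x * 2, q1, q2)]]   # phase 2
for t in stages:
    t.start()
for x in [1, 2, 3]:        # items enter the first queue
    q0.put(x)
q0.put(None)

results = []
while (item := q2.get()) is not None:
    results.append(item)
for t in stages:
    t.join()
```

Once the pipeline is full, both stages work on different items concurrently, which is the throughput argument a queueing model makes quantitative.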

  5. Academic training: From Evolution Theory to Parallel and Distributed Genetic Programming

    CERN Multimedia

    2007-01-01

2006-2007 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 15, 16 March From 11:00 to 12:00 - Main Auditorium, bldg. 500 From Evolution Theory to Parallel and Distributed Genetic Programming F. FERNANDEZ DE VEGA / Univ. of Extremadura, SP Lecture No. 1: From Evolution Theory to Evolutionary Computation Evolutionary computation is a subfield of artificial intelligence (more particularly computational intelligence) dealing with combinatorial optimization problems, based to some degree on the evolution of biological life in the natural world. In this tutorial we will review the source of inspiration for this metaheuristic and its capability for solving problems. We will show the main flavours within the field, and different problems that have been successfully solved employing these kinds of techniques. Lecture No. 2: Parallel and Distributed Genetic Programming The successful application of Genetic Programming (GP, one of the available Evolutionary Algorithms) to optimization problems has encouraged an ...

  6. Algorithmic differentiation of pragma-defined parallel regions differentiating computer programs containing OpenMP

    CERN Document Server

    Förster, Michael

    2014-01-01

Numerical programs often use parallel programming techniques such as OpenMP to compute the program's output values as efficiently as possible. In addition, derivative values of these output values with respect to certain input values play a crucial role. To obtain code that computes not only the output values but also the derivative values simultaneously, this work introduces several source-to-source transformation rules. These rules are based on a technique called algorithmic differentiation. The main focus of this work lies on the important reverse mode of algorithmic differentiation. The inh
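As a hedged illustration of the reverse mode of algorithmic differentiation mentioned above (not the book's source-to-source OpenMP transformation), here is a minimal tape-based sketch in Python; `Var`, `sin`, and `backward` are names invented for this example:

```python
import math

class Var:
    """Minimal reverse-mode AD node: record each operation's parents and
    local derivatives, then sweep the graph backwards to accumulate adjoints."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # list of (parent_var, local_derivative) pairs
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

def sin(v):
    return Var(math.sin(v.value), [(v, math.cos(v.value))])

def backward(out):
    """Propagate adjoints from the output back to the inputs.
    This simple stack sweep is valid here because the only shared nodes
    are leaf inputs (which have no parents of their own)."""
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.grad += node.grad * local
            stack.append(parent)

x, y = Var(2.0), Var(3.0)
f = x * y + sin(x)          # f = x*y + sin(x)
backward(f)
print(x.grad, y.grad)       # df/dx = y + cos(x) ≈ 2.584, df/dy = x = 2.0
```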

  7. Run-Time and Compiler Support for Programming in Adaptive Parallel Environments

    Directory of Open Access Journals (Sweden)

    Guy Edjlali

    1997-01-01

Full Text Available For better utilization of computing resources, it is important to consider parallel programming environments in which the number of available processors varies at run-time. In this article, we discuss run-time support for data-parallel programming in such an adaptive environment. Executing programs in an adaptive environment requires redistributing data when the number of processors changes, and also requires determining new loop bounds and communication patterns for the new set of processors. We have developed a run-time library to provide this support. We discuss how the run-time library can be used by compilers of High Performance Fortran (HPF)-like languages to generate code for an adaptive environment. We present performance results for a Navier-Stokes solver and a multigrid template run on a network of workstations and an IBM SP-2. Our experiments show that if the number of processors is not varied frequently, the cost of data redistribution is not significant compared to the time required for the actual computation. Overall, our work establishes the feasibility of compiling HPF for a network of nondedicated workstations, which are likely to be an important resource for parallel programming in the future.
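Determining new loop bounds and the data movement implied by a change in processor count, as described in the abstract, can be sketched as follows. This is an illustrative block-distribution example, not the actual run-time library; all function names are invented.

```python
def block_bounds(n, p, rank):
    """Half-open [lo, hi) bounds of rank's block of n iterations over p procs,
    spreading any remainder over the lowest-numbered ranks."""
    base, extra = divmod(n, p)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

def redistribution_plan(n, p_old, p_new):
    """List (old_owner, new_owner, lo, hi) of index ranges that must move
    when the processor count changes from p_old to p_new."""
    plan = []
    for old in range(p_old):
        o_lo, o_hi = block_bounds(n, p_old, old)
        for new in range(p_new):
            n_lo, n_hi = block_bounds(n, p_new, new)
            lo, hi = max(o_lo, n_lo), min(o_hi, n_hi)
            if lo < hi:                      # non-empty overlap => a transfer
                plan.append((old, new, lo, hi))
    return plan

# 10 elements moving from 2 processors to 3: overlaps give the transfers.
print(redistribution_plan(10, 2, 3))
# [(0, 0, 0, 4), (0, 1, 4, 5), (1, 1, 5, 7), (1, 2, 7, 10)]
```

The new loop bounds for each processor fall out of the same `block_bounds` call, so a compiler-generated loop simply re-queries its bounds after every redistribution.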

  8. P3T+: A Performance Estimator for Distributed and Parallel Programs

    Directory of Open Access Journals (Sweden)

    T. Fahringer

    2000-01-01

Full Text Available Developing distributed and parallel programs on today's multiprocessor architectures is still a challenging task. Particularly distressing is the lack of effective performance tools that support the programmer in evaluating changes in code, problem and machine sizes, and target architectures. In this paper we introduce P3T+, a performance estimator for mostly regular HPF (High Performance Fortran) programs that also partially covers message-passing (MPI) programs. P3T+ is unique in modeling programs, compiler code transformations, and parallel and distributed architectures. It computes at compile-time a variety of performance parameters including work distribution, number of transfers, amount of data transferred, transfer times, computation times, and number of cache misses. Several novel technologies are employed to compute these parameters: loop iteration spaces, array access patterns, and data distributions are modeled by employing highly effective symbolic analysis. Communication is estimated by simulating the behavior of the communication library used by the underlying compiler. Computation times are predicted through pre-measured kernels on every target architecture of interest. We carefully model the most critical architecture-specific factors such as cache line sizes, number of cache lines available, startup times, message transfer time per byte, etc. P3T+ has been implemented and is closely integrated with the Vienna High Performance Compiler (VFC) to help programmers develop parallel and distributed applications. Experimental results for realistic kernel codes taken from real-world applications are presented to demonstrate both the accuracy and the usefulness of P3T+.
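The transfer-time parameters the abstract lists (startup time, message transfer time per byte) fit the classic linear communication cost model. A hedged sketch, with invented function names and parameter values, of how such a compile-time estimate could be computed:

```python
def transfer_time(nbytes, startup, per_byte):
    """Classic linear cost model: fixed latency plus a bandwidth term."""
    return startup + nbytes * per_byte

def estimate_exchange(num_transfers, bytes_per_transfer, startup, per_byte):
    """Total predicted communication time for a set of identical transfers."""
    return num_transfers * transfer_time(bytes_per_transfer, startup, per_byte)

# 100 messages of 8 KB with a 10 us startup cost and 1 ns per byte:
t = estimate_exchange(100, 8192, 10e-6, 1e-9)
print(round(t * 1e3, 3), "ms")   # 1.819 ms
```

Real estimators like the one described refine this with per-architecture measured kernels and contention effects; the linear model above is only the simplest building block.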

  9. Towards Interactive Visual Exploration of Parallel Programs using a Domain-Specific Language

    KAUST Repository

    Klein, Tobias

    2016-04-19

The use of GPUs and the massively parallel computing paradigm have become widespread. We describe a framework for the interactive visualization and visual analysis of the run-time behavior of massively parallel programs, especially OpenCL kernels. This facilitates understanding a program's function and structure, finding the causes of possible slowdowns, locating program bugs, and interactively exploring and visually comparing different code variants in order to improve performance and correctness. Our approach enables very specific, user-centered analysis, both in terms of the recording of the run-time behavior and the visualization itself. Instead of having to manually write instrumented code to record data, simple code annotations tell the source-to-source compiler which code instrumentation to generate automatically. The visualization part of our framework then enables the interactive analysis of kernel run-time behavior in a way that can be very specific to a particular problem or optimization goal, such as analyzing the causes of memory bank conflicts or understanding an entire parallel algorithm.

  10. Methodologies and Tools for Tuning Parallel Programs: 80% Art, 20% Science, and 10% Luck

    Science.gov (United States)

    Yan, Jerry C.; Bailey, David (Technical Monitor)

    1996-01-01

    The need for computing power has forced a migration from serial computation on a single processor to parallel processing on multiprocessors. However, without effective means to monitor (and analyze) program execution, tuning the performance of parallel programs becomes exponentially difficult as program complexity and machine size increase. In the past few years, the ubiquitous introduction of performance tuning tools from various supercomputer vendors (Intel's ParAide, TMC's PRISM, CRI's Apprentice, and Convex's CXtrace) seems to indicate the maturity of performance instrumentation/monitor/tuning technologies and vendors'/customers' recognition of their importance. However, a few important questions remain: What kind of performance bottlenecks can these tools detect (or correct)? How time consuming is the performance tuning process? What are some important technical issues that remain to be tackled in this area? This workshop reviews the fundamental concepts involved in analyzing and improving the performance of parallel and heterogeneous message-passing programs. Several alternative strategies will be contrasted, and for each we will describe how currently available tuning tools (e.g. AIMS, ParAide, PRISM, Apprentice, CXtrace, ATExpert, Pablo, IPS-2) can be used to facilitate the process. We will characterize the effectiveness of the tools and methodologies based on actual user experiences at NASA Ames Research Center. Finally, we will discuss their limitations and outline recent approaches taken by vendors and the research community to address them.

  11. Strategy and programs of the research on high-level and long-lived radioactive wastes (under article L542 of the environment code, from the December 30, 1991 law). Conjuncture document

    International Nuclear Information System (INIS)

    2003-01-01

This document gives, in a first part, a brief overview of the main results of the research carried out in 2002 on the management of high-level and long-lived radioactive wastes according to the three research avenues defined by the December 30, 1991 law: separation-transmutation, disposal in deep geologic formations, and waste conditioning and storage. The second part of the document is an executive summary of the 2003 edition of the document 'strategy and programs of the research on the management of high-level and long-lived radioactive wastes - SPR 2003'. It presents the content of the different chapters: 1 - research methodology and implementation of coordinated studies; 2 - status of the research 10 years after the enforcement of the law; 3 - the main goals and steps to reach before the 2006 deadline; 4 - the description and analysis of the programs under consideration; 5 - the coordination between French programs; 6 - the international cooperation. (J.S.)

  12. Enabling Requirements-Based Programming for Highly-Dependable Complex Parallel and Distributed Systems

    Science.gov (United States)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

The manual application of formal methods in system specification has produced successes, but in the end, despite any claims and assertions by practitioners, there is no provable relationship between a manually derived system specification or formal model and the customer's original requirements. Complex parallel and distributed systems present the worst-case implications of today's dearth of viable approaches for achieving system dependability. No avenue other than formal methods constitutes a serious contender for resolving the problem, and so recognition of requirements-based programming has come at a critical juncture. We describe a new, NASA-developed automated requirements-based programming method that can be applied to certain classes of systems, including complex parallel and distributed systems, to achieve a high degree of dependability.

  13. An approach to multicore parallelism using functional programming: A case study based on Presburger Arithmetic

    DEFF Research Database (Denmark)

    Dung, Phan Anh; Hansen, Michael Reichhardt

    2015-01-01

In this paper we investigate multicore parallelism in the context of functional programming by means of two quantifier-elimination procedures for Presburger Arithmetic: one is based on Cooper’s algorithm and the other is based on the Omega Test. We first develop correct-by-construction prototype...... platform executing on an 8-core machine. A speedup of approximately 4 was obtained for Cooper’s algorithm and a speedup of approximately 6 was obtained for the exact-shadow part of the Omega Test. The considered procedures are complex, memory-intense algorithms on huge formula trees, and the case study...... reveals more generally applicable techniques and guidelines for deriving parallel algorithms from sequential ones in the context of data-intensive tree algorithms. The obtained insights should apply to any strict and impure functional programming language. Furthermore, the results obtained for the exact...

  14. Dynamic programming in parallel boundary detection with application to ultrasound intima-media segmentation.

    Science.gov (United States)

    Zhou, Yuan; Cheng, Xinyao; Xu, Xiangyang; Song, Enmin

    2013-12-01

Segmentation of carotid artery intima-media in longitudinal ultrasound images for measuring its thickness to predict cardiovascular diseases can be simplified as detecting two nearly parallel boundaries within a certain distance range, when plaque with irregular shapes is not considered. In this paper, we improve the implementation of two dynamic programming (DP) based approaches to parallel boundary detection, dual dynamic programming (DDP) and piecewise linear dual dynamic programming (PL-DDP). Then, a novel DP based approach, dual line detection (DLD), which translates the original 2-D curve position to a 4-D parameter space representing two line segments in a local image segment, is proposed to solve the problem while maintaining efficiency and rotation invariance. To apply DLD to ultrasound intima-media segmentation, it is embedded in a framework that employs an edge map obtained from multiplication of the responses of two edge detectors with different scales, and a coupled snake model that simultaneously deforms the two contours to maintain parallelism. The experimental results on synthetic images and carotid arteries of clinical ultrasound images indicate improved performance of the proposed DLD compared to DDP and PL-DDP, with respect to accuracy and efficiency. Copyright © 2013 Elsevier B.V. All rights reserved.
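As a rough illustration of dynamic-programming boundary detection in a cost image (a single-boundary simplification, not the paper's DDP or DLD, which couple two boundaries), consider this sketch:

```python
def detect_boundary(cost):
    """Min-cost left-to-right path through a cost image; the row may shift by
    at most one per column (the smoothness constraint typical of boundary DP)."""
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    acc = [row[:] for row in cost]              # accumulated cost table
    back = [[0] * cols for _ in range(rows)]    # backpointers
    for c in range(1, cols):
        for r in range(rows):
            best, arg = INF, r
            for pr in (r - 1, r, r + 1):        # allowed predecessor rows
                if 0 <= pr < rows and acc[pr][c - 1] < best:
                    best, arg = acc[pr][c - 1], pr
            acc[r][c] += best
            back[r][c] = arg
    r = min(range(rows), key=lambda i: acc[i][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):            # trace the backpointers
        r = back[r][c]
        path.append(r)
    return path[::-1]

# A low-cost band on row 1 should be traced as the boundary.
img = [[9, 9, 9, 9],
       [1, 1, 2, 1],
       [9, 9, 9, 9]]
print(detect_boundary(img))   # [1, 1, 1, 1]
```

DDP-style methods extend this recurrence to a pair of rows per column with a distance constraint between them, which is what makes the two detected boundaries nearly parallel.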

  16. A backtracking algorithm for the stream AND-parallel execution of logic programs

    Energy Technology Data Exchange (ETDEWEB)

    Somogyi, Z.; Ramamohanarao, K.; Vaghani, J. (Univ. of Melbourne, Parkville (Australia))

    1988-06-01

    The authors present the first backtracking algorithm for stream AND-parallel logic programs. It relies on compile-time knowledge of the data flow graph of each clause to let it figure out efficiently which goals to kill or restart when a goal fails. This crucial information, which they derive from mode declarations, was not available at compile-time in any previous stream AND-parallel system. They show that modes can increase the precision of the backtracking algorithm, though their algorithm allows this precision to be traded off against overhead on a procedure-by-procedure and call-by-call basis. The modes also allow their algorithm to handle efficiently programs that manipulate partially instantiated data structures and an important class of programs with circular dependency graphs. On code that does not need backtracking, the efficiency of their algorithm approaches that of the committed-choice languages; on code that does need backtracking its overhead is comparable to that of the independent AND-parallel backtracking algorithms.

  17. Development, Verification and Validation of Parallel, Scalable Volume of Fluid CFD Program for Propulsion Applications

    Science.gov (United States)

    West, Jeff; Yang, H. Q.

    2014-01-01

There are many instances involving liquid/gas interfaces and their dynamics in the design of liquid-engine-powered rockets such as the Space Launch System (SLS). Some examples of these applications are: propellant tank draining and slosh, subcritical-condition injector analysis for gas generators, preburners and thrust chambers, water deluge mitigation for launch-induced environments, and even solid rocket motor liquid slag dynamics. Commercially available CFD programs simulating gas/liquid interfaces using the Volume of Fluid approach are currently limited in their parallel scalability. In 2010, for instance, an internal NASA/MSFC review of three commercial tools revealed that parallel scalability was seriously compromised at 8 CPUs and no additional speedup was possible after 32 CPUs. Other non-interface CFD applications at the time were demonstrating useful parallel scalability up to 4,096 processors or more. Based on this review, NASA/MSFC initiated an effort to implement a Volume of Fluid capability within the unstructured-mesh, pressure-based CFD program Loci-STREAM. After verification was achieved by comparing results to the commercial CFD program CFD-Ace+, and validation by direct comparison with data, Loci-STREAM-VoF is now the production CFD tool for propellant slosh force and slosh damping rate simulations at NASA/MSFC. In these applications, good parallel scalability has been demonstrated for problem sizes of tens of millions of cells and thousands of CPU cores. Ongoing efforts are focused on the application of Loci-STREAM-VoF to predict the transient flow patterns of water on the SLS Mobile Launch Platform, in order to support the phasing of water for launch environment mitigation so that detrimental effects on the vehicle are avoided.

  18. The FORCE: A portable parallel programming language supporting computational structural mechanics

    Science.gov (United States)

    Jordan, Harry F.; Benten, Muhammad S.; Brehm, Juergen; Ramanan, Aruna

    1989-01-01

This project supports the conversion of codes in Computational Structural Mechanics (CSM) to a parallel form which will efficiently exploit the computational power available from multiprocessors. The work is part of a comprehensive, FORTRAN-based system to form a basis for a parallel version of the NICE/SPAR combination which will form the CSM Testbed. The software is macro-based and rests on the Force methodology developed by the principal investigator in connection with an early scientific multiprocessor. Machine independence is an important characteristic of the system, so that retargeting it to the Flex/32, or any other multiprocessor on which NICE/SPAR might be implemented, is well supported. The principal investigator has experience in producing parallel software for both full and sparse systems of linear equations using the Force macros. Other researchers have used the Force in finite element programs. It has been possible to rapidly develop software which performs at maximum efficiency on a multiprocessor. The inherent machine independence of the system also means that the parallelization will not be limited to a specific multiprocessor.

  19. Perspective of the waste management research program at NRC on modeling phenomena related to the disposal of high-level radioactive waste

    International Nuclear Information System (INIS)

    Randall, J.D.; Costanzi, F.A.

    1985-01-01

    Modeling the geologic disposal of high-level radioactive waste falls short of ideal for a variety of reasons. The understanding of the physical processes involved may be incomplete or incorrect. It may not be possible to specify mathematically all relationships among the processes involved. The initial conditions or boundary conditions may not be known or directly measurable. Further, often it is impossible to obtain exact solutions to the mathematical relationships that constitute the mathematical model. Finally, many simplifications, approximations, and assumptions will be needed to make the models both understandable and calculationally tractable. Yet, modeling is the only means available by which any quantitative estimation of the expected performance of a geologic repository over the long term can be made. If modeling estimates of the performance of a geologic repository are to provide effective support for an NRC finding of reasonable assurance of no unreasonable risk to the public health and safety, then the strengths and limitations of the modeling process, the models themselves, and the use of the models must be understood and explored fully

  20. Eighth SIAM conference on parallel processing for scientific computing: Final program and abstracts

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This SIAM conference is the premier forum for developments in parallel numerical algorithms, a field that has seen very lively and fruitful developments over the past decade, and whose health is still robust. Themes for this conference were: combinatorial optimization; data-parallel languages; large-scale parallel applications; message-passing; molecular modeling; parallel I/O; parallel libraries; parallel software tools; parallel compilers; particle simulations; problem-solving environments; and sparse matrix computations.

  1. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  2. High-level-waste immobilization

    International Nuclear Information System (INIS)

    Crandall, J.L.

    1982-01-01

    Analysis of risks, environmental effects, process feasibility, and costs for disposal of immobilized high-level wastes in geologic repositories indicates that the disposal system safety has a low sensitivity to the choice of the waste disposal form

  3. Review and critique of the US Department of Energy environmental program plan for site characterization for a high-level waste repository at Yucca Mountain, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-12-31

    This report provides a review and critique of the US Department of Energy (DOE) environmental program plan for site characterization activities at Yucca Mountain which principally addresses compliance with federal and state environmental regulation and to a lesser extent monitoring and mitigation of significant adverse impacts and reclamation of disturbed areas. There are 15 documents which comprise the plan and focus on complying with the environmental requirements of the Nuclear Waste Policy Act, as amended, (NWPA) and with single-media environmental statutes and their regulations. All elements of the plan follow from the 1986 statutory environmental assessment (EA) required by NWPA which concluded that no significant adverse impacts would result from characterization of the Yucca Mountain site. The lack of appropriate environmental planning and review for site characterization at Yucca Mountain points to the need for an oversight function by the State of Nevada. It cannot be assumed that on its own DOE will properly comply with environmental requirements, especially the substantive requirements that comprise the intent of NEPA. Thus, procedures must be established to assure that the environmental interests of the State are addressed in the course of the Yucca Mountain Project. Accordingly, steps will be taken by the State of Nevada to review the soundness and efficacy of the DOE field surveys, monitoring and mitigation activities, reclamation actions, and ecological impact studies that follow from the DOE environmental program plans addressed by this review.

  5. Fast implementations of 3D PET reconstruction using vector and parallel programming techniques

    International Nuclear Information System (INIS)

    Guerrero, T.M.; Cherry, S.R.; Dahlbom, M.; Ricci, A.R.; Hoffman, E.J.

    1993-01-01

    Computationally intensive techniques that offer potential clinical use have arisen in nuclear medicine. Examples include iterative reconstruction, 3D PET data acquisition and reconstruction, and 3D image volume manipulation including image registration. One obstacle in achieving clinical acceptance of these techniques is the computational time required. This study focuses on methods to reduce the computation time for 3D PET reconstruction through the use of fast computer hardware, vector and parallel programming techniques, and algorithm optimization. The strengths and weaknesses of i860 microprocessor based workstation accelerator boards are investigated in implementations of 3D PET reconstruction

  6. A program system for ab initio MO calculations on vector and parallel processing machines. Pt. 1

    International Nuclear Information System (INIS)

    Ernenwein, R.; Rohmer, M.M.; Benard, M.

    1990-01-01

    We present a program system for ab initio molecular orbital calculations on vector and parallel computers. The present article is devoted to the computation of one- and two-electron integrals over contracted Gaussian basis sets involving s-, p-, d- and f-type functions. The McMurchie and Davidson (MMD) algorithm has been implemented and parallelized by distributing over a limited number of logical tasks the calculation of the 55 relevant classes of integrals. All sections of the MMD algorithm have been efficiently vectorized, leading to a scalar/vector ratio of 5.8. Different algorithms are proposed and compared for an optimal vectorization of the contraction of the 'intermediate integrals' generated by the MMD formalism. Advantage is taken of the dynamic storage allocation for tuning the length of the vector loops (i.e. the size of the vectorization buffer) as a function of (i) the total memory available for the job, (ii) the number of logical tasks defined by the user (≤13), and (iii) the storage requested by each specific class of integrals. Test calculations carried out on a CRAY-2 computer show that the average number of finite integrals computed over a (s, p, d, f) CGTO basis set is about 1180000 per second and per processor. The combination of vectorization and parallelism on this 4-processor machine reduces the CPU time by a factor larger than 20 with respect to the scalar and sequential performance. (orig.)

  7. The ARES High-level Intermediate Representation

    Energy Technology Data Exchange (ETDEWEB)

    Moss, Nicholas David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-03

The LLVM intermediate representation (IR) lacks semantic constructs for depicting common high-performance operations such as parallel and concurrent execution, communication, and synchronization. Currently, representing such semantics in LLVM requires either extending the intermediate form (a significant undertaking) or the use of ad hoc indirect means such as encoding them as intrinsics and/or the use of metadata constructs. In this paper we discuss work in progress to explore the design and implementation of a new compilation stage and associated high-level intermediate form that is placed between the abstract syntax tree and its lowering to LLVM's IR. This high-level representation is a superset of LLVM IR and supports the direct representation of these common parallel computing constructs, along with the infrastructure for supporting analysis and transformation passes on this representation.

  8. A program system for ab initio MO calculations on vector and parallel processing machines. Pt. 3

    International Nuclear Information System (INIS)

    Wiest, R.; Demuynck, J.; Benard, M.; Rohmer, M.M.; Ernenwein, R.

    1991-01-01

This series of three papers presents a program system for ab initio molecular orbital calculations on vector and parallel computers. Part III is devoted to the four-index transformation, on a molecular orbital basis of size NMO, of the file of two-electron integrals (pq||rs) generated by a contracted Gaussian set of size NATO (number of atomic orbitals). A fast Yoshimine algorithm first sorts the (pq||rs) integrals with respect to the index pq only. This file of half-sorted integrals, labelled by their rs-index, can be processed without further modification to generate either the transformed integrals or the supermatrix elements. The large memory available on the CRAY-2 has made it possible to implement the transformation algorithm proposed by Bender in 1972, which requires a core-storage allocation varying as (NATO)³. Two versions of Bender's algorithm are included in the present program. The first is an in-core version, where the complete file of accumulated contributions to transformed integrals is stored and updated in central memory. This version has been parallelized by distributing over a limited number of logical tasks the NATO steps corresponding to the scanning of the outermost loop. The second is an out-of-core version, in which twin files are used alternately as input and output for the accumulated contributions to transformed integrals. This version is not parallel. The choice of one or the other version and (for version 1) the determination of the number of tasks depend upon the balance between the available and the requested amounts of storage. The storage management and the choice of the proper version are carried out automatically using dynamic storage allocation. Both versions are vectorized and take advantage of the molecular symmetry. (orig.)
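The four-index transformation itself is compact to illustrate: performing four successive quarter transformations reduces the cost from O(N⁸) for the direct contraction to O(N⁵). This NumPy sketch shows only the transformation step, not the Yoshimine sorting or the Bender in-core/out-of-core machinery discussed above; all names are invented.

```python
import numpy as np

def four_index_transform(eri_ao, C):
    """Transform AO two-electron integrals (pq|rs) to the MO basis by four
    successive quarter transformations, each an O(N^5) contraction."""
    x = np.einsum("pqrs,pi->iqrs", eri_ao, C)
    x = np.einsum("iqrs,qj->ijrs", x, C)
    x = np.einsum("ijrs,rk->ijks", x, C)
    return np.einsum("ijks,sl->ijkl", x, C)

# Check against the direct O(N^8) contraction on a tiny random "basis".
rng = np.random.default_rng(0)
n = 4
eri = rng.standard_normal((n, n, n, n))
C = rng.standard_normal((n, n))
direct = np.einsum("pqrs,pi,qj,rk,sl->ijkl", eri, C, C, C, C)
print(np.allclose(four_index_transform(eri, C), direct))   # True
```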

  9. Searching for globally optimal functional forms for interatomic potentials using genetic programming with parallel tempering.

    Science.gov (United States)

    Slepoy, A; Peters, M D; Thompson, A P

    2007-11-30

    Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. Copyright (c) 2007 Wiley Periodicals, Inc.
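    The numerical machinery described above (Metropolis Monte Carlo importance sampling combined with parallel tempering) can be illustrated with a much-simplified, hypothetical sketch. Rather than evolving whole functional forms by genetic programming, the sketch below only fits the two Lennard-Jones parameters to energies generated from the known potential, echoing the "known global optimum" validation idea; all names and numerical settings are illustrative, not the authors' implementation.

```python
import math
import random

def lj(r, eps, sigma):
    """Lennard-Jones pair energy: the known ground truth of the validation test."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# Reference energies generated from the "true" potential (eps = sigma = 1).
RS = [0.95 + 0.05 * i for i in range(20)]
TARGET = [lj(r, 1.0, 1.0) for r in RS]

def cost(params):
    eps, sigma = params
    return sum((lj(r, eps, sigma) - t) ** 2 for r, t in zip(RS, TARGET))

def tempering_search(n_sweeps=3000, seed=1):
    random.seed(seed)
    temps = [0.01, 0.1, 1.0]             # one replica per temperature
    reps = [[1.5, 1.3] for _ in temps]   # every replica starts off-optimum
    costs = [cost(p) for p in reps]
    for _ in range(n_sweeps):
        for k, t in enumerate(temps):    # Metropolis move in each replica
            cand = [max(0.1, p + random.gauss(0.0, 0.05)) for p in reps[k]]
            c = cost(cand)
            if c < costs[k] or random.random() < math.exp((costs[k] - c) / t):
                reps[k], costs[k] = cand, c
        # parallel-tempering step: attempt a swap between neighbouring temperatures
        k = random.randrange(len(temps) - 1)
        d = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (costs[k] - costs[k + 1])
        if d >= 0.0 or random.random() < math.exp(d):
            reps[k], reps[k + 1] = reps[k + 1], reps[k]
            costs[k], costs[k + 1] = costs[k + 1], costs[k]
    best = min(range(len(temps)), key=costs.__getitem__)
    return reps[best]
```

    The hot replicas explore broadly while the cold replica refines, and swaps let good candidates migrate toward low temperature, which is the same mechanism the paper uses to keep the genetic-programming search from stalling in local optima.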

  10. High Level Radioactive Waste Management

    International Nuclear Information System (INIS)

    1991-01-01

    The proceedings of the second annual international conference on High Level Radioactive Waste Management, held on April 28--May 3, 1991, in Las Vegas, Nevada, provide information on the current technical issues related to international high level radioactive waste management activities and how they relate to society as a whole. Besides discussing such technical topics as the best form of the waste, the integrity of storage containers, and the design and construction of a repository, the broader social aspects of these issues are explored in papers on such subjects as conformance to regulations, transportation safety, and public education. By providing this wider perspective of high level radioactive waste management, it becomes apparent that the various disciplines involved in this field are interrelated and that they should work to integrate their waste management activities. Individual records are processed separately for the data bases

  11. High-level Petri Nets

    DEFF Research Database (Denmark)

    High-level Petri nets are now widely used in both theoretical analysis and practical modelling of concurrent systems. The main reason for the success of this class of net models is that they make it possible to obtain much more succinct and manageable descriptions than can be obtained by means... various journals and collections. As a result, much of this knowledge is not readily available to people who may be interested in using high-level nets. Within the Petri net community this problem has been discussed many times, and as an outcome this book has been compiled. The book contains reprints of some of the most important papers on the application and theory of high-level Petri nets. In this way it makes the relevant literature more available. It is our hope that the book will be a useful source of information and that, e.g., it can be used in the organization of Petri net courses. To make...

  12. Analysis of Parallel Algorithms on SMP Node and Cluster of Workstations Using Parallel Programming Models with New Tile-based Method for Large Biological Datasets

    Science.gov (United States)

    Shrimankar, D. D.; Sathe, S. R.

    2016-01-01

    Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes. Programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. However, OpenMP programs cannot scale beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes, at the cost of internode communication overhead. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that communication overhead is significant even in OpenMP loop execution and increases with the number of participating cores. We also present a communication model to approximate the overhead from communication in OpenMP loops. Our results hold for a large variety of input data files. We have developed our own load balancing and cache optimization techniques for the message-passing model. Our experimental results show that these techniques give optimum performance of our parallel algorithm for various input parameters, such as sequence size and tile size, on a wide variety of multicore architectures. PMID:27932868
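    As a concrete reference point for the alignment kernel discussed above, here is a minimal serial sketch (not the authors' code) of global alignment by dynamic programming. Tile-based parallel schemes such as the one in this paper partition this table into blocks and evaluate blocks on the same anti-diagonal concurrently, because each block depends only on its top, left, and top-left neighbours.

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Global-alignment (Needleman-Wunsch) score between sequences a and b."""
    n, m = len(a), len(b)
    # H[i][j] = best score aligning a[:i] with b[:j]
    H = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        H[i][0] = i * gap
    for j in range(1, m + 1):
        H[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            # each cell needs only its top-left, top, and left neighbours,
            # which is what makes anti-diagonal (wavefront) tiling legal
            H[i][j] = max(H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap,
                          H[i][j - 1] + gap)
    return H[n][m]
```

    Dividing the table into tiles of a few hundred cells per side trades scheduling overhead against parallelism, which is exactly the tile-size tradeoff the paper measures.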

  14. High-Level Radioactive Waste.

    Science.gov (United States)

    Hayden, Howard C.

    1995-01-01

    Presents a method to calculate the amount of high-level radioactive waste by taking into consideration the following factors: the fission process that yields the waste, identification of the waste, the energy required to run a 1-GWe plant for one year, and the uranium mass required to produce that energy. Briefly discusses waste disposal and…

  15. High-level radioactive wastes

    International Nuclear Information System (INIS)

    Grissom, M.C.

    1982-10-01

    This bibliography contains 812 citations on high-level radioactive wastes included in the Department of Energy's Energy Data Base from January 1981 through July 1982. These citations are to research reports, journal articles, books, patents, theses, and conference papers from worldwide sources. Five indexes are provided: Corporate Author, Personal Author, Subject, Contract Number, and Report Number

  16. PAREMD: A parallel program for the evaluation of momentum space properties of atoms and molecules

    Science.gov (United States)

    Meena, Deep Raj; Gadre, Shridhar R.; Balanarayan, P.

    2018-03-01

    The present work describes a code for evaluating the electron momentum density (EMD), its moments and the associated Shannon information entropy for a multi-electron molecular system. The code works specifically for electronic wave functions obtained from traditional electronic structure packages such as GAMESS and GAUSSIAN. For the momentum space orbitals, the general expression for Gaussian basis sets in position space is analytically Fourier transformed to momentum space Gaussian basis functions. The molecular orbital coefficients of the wave function are taken as an input from the output file of the electronic structure calculation. The analytic expressions of EMD are evaluated over a fine grid and the accuracy of the code is verified by a normalization check and a numerical kinetic energy evaluation which is compared with the analytic kinetic energy given by the electronic structure package. Apart from electron momentum density, electron density in position space has also been integrated into this package. The program is written in C++ and is executed through a Shell script. It is also tuned for multicore machines with shared memory through OpenMP. The program has been tested for a variety of molecules and correlated methods such as CISD, Møller-Plesset second order (MP2) theory and density functional methods. For correlated methods, the PAREMD program uses natural spin orbitals as an input. The program has been benchmarked for a variety of Gaussian basis sets for different molecules showing a linear speedup on a parallel architecture.
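    The two verification steps the abstract mentions (a normalization check and a numerical integral over the momentum-space density) can be sketched for the simplest possible case: a single normalized 1-D s-type Gaussian, whose Fourier transform is again a Gaussian in closed form. The function names and grid settings below are illustrative and are not PAREMD's API.

```python
import math

def momentum_density(p, alpha):
    """|phi(p)|^2 for the momentum-space image of a normalized 1-D Gaussian
    exp(-alpha x^2): the Fourier transform of a Gaussian is a Gaussian with
    exponent 1/(4 alpha), so the density is normal with variance alpha."""
    return math.sqrt(1.0 / (2.0 * math.pi * alpha)) * math.exp(-p * p / (2.0 * alpha))

def grid_moments(alpha, pmax=30.0, n=20001):
    """Normalization and Shannon entropy S = -integral(rho ln rho dp) on a
    uniform grid, mirroring the code's normalization-check verification."""
    dp = 2.0 * pmax / (n - 1)
    norm = s = 0.0
    for i in range(n):
        p = -pmax + i * dp
        rho = momentum_density(p, alpha)
        norm += rho * dp
        if rho > 0.0:
            s -= rho * math.log(rho) * dp
    return norm, s
```

    For a Gaussian density of variance alpha the entropy has the closed form ½ ln(2πe·alpha), so the grid result can be checked against theory, in the same spirit as checking the numerical kinetic energy against the analytic value.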

  17. What is "the patient perspective" in patient engagement programs? Implicit logics and parallels to feminist theories.

    Science.gov (United States)

    Rowland, Paula; McMillan, Sarah; McGillicuddy, Patti; Richards, Joy

    2017-01-01

    Public and patient involvement (PPI) in health care may refer to many different processes, ranging from participating in decision-making about one's own care to participating in health services research, health policy development, or organizational reforms. Across these many forms of public and patient involvement, the conceptual and theoretical underpinnings remain poorly articulated. Instead, most public and patient involvement programs rely on policy initiatives as their conceptual frameworks. This lack of conceptual clarity participates in dilemmas of program design, implementation, and evaluation. This study contributes to the development of theoretical understandings of public and patient involvement. In particular, we focus on the deployment of patient engagement programs within health service organizations. To develop a deeper understanding of the conceptual underpinnings of these programs, we examined the concept of "the patient perspective" as used by patient engagement practitioners and participants. Specifically, we focused on the way this phrase was used in the singular: "the" patient perspective or "the" patient voice. From qualitative analysis of interviews with 20 patient advisers and 6 staff members within a large urban health network in Canada, we argue that "the patient perspective" is referred to as a particular kind of situated knowledge, specifically an embodied knowledge of vulnerability. We draw parallels between this logic of patient perspective and the logic of early feminist theory, including the concepts of standpoint theory and strong objectivity. We suggest that champions of patient engagement may learn much from the way feminist theorists have constructed their arguments and addressed critique.

  18. The Fortran-P Translator: Towards Automatic Translation of Fortran 77 Programs for Massively Parallel Processors

    Directory of Open Access Journals (Sweden)

    Matthew O'keefe

    1995-01-01

    Massively parallel processors (MPPs) hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how application codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.

  19. Drainage network extraction from a high-resolution DEM using parallel programming in the .NET Framework

    Science.gov (United States)

    Du, Chao; Ye, Aizhong; Gan, Yanjun; You, Jinjun; Duan, Qinyun; Ma, Feng; Hou, Jingwen

    2017-12-01

    High-resolution Digital Elevation Models (DEMs) can be used to extract high-accuracy drainage networks. A higher resolution means a larger number of grid cells, and as the number of cells increases, flow direction determination requires substantial computer resources and computing time. Parallel computing is a feasible way to resolve this problem. In this paper, we propose a parallel programming method within the .NET Framework, using the C# compiler in a Windows environment. The basin is divided into sub-basins, which are then processed concurrently on multiple threads to calculate flow directions. The method was applied to calculate the flow direction of the Yellow River basin from the 3 arc-second resolution SRTM DEM. Drainage networks were extracted and compared with the HydroSHEDS river network to assess their accuracy. The results demonstrate that this method can calculate flow direction from high-resolution DEMs efficiently and extract high-precision continuous drainage networks.
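    The per-cell computation being parallelized is the classic D8 flow-direction rule: each cell drains to its steepest-descent neighbour. The sketch below (Python rather than the paper's C#/.NET code; direction codes follow the common 1..128 clockwise-from-east convention, an assumption on our part) shows why the sub-basin decomposition is safe to thread: each cell's result depends only on the read-only DEM.

```python
# (row offset, col offset, direction code), clockwise starting from east
D8 = [(0, 1, 1), (1, 1, 2), (1, 0, 4), (1, -1, 8),
      (0, -1, 16), (-1, -1, 32), (-1, 0, 64), (-1, 1, 128)]

def flow_direction(dem):
    """D8 flow direction for a rectangular DEM given as a list of rows.
    Each cell receives the code of its steepest-descent neighbour; 0 marks
    a pit or flat cell with no downhill neighbour."""
    rows, cols = len(dem), len(dem[0])
    fdir = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            best_drop, best_code = 0.0, 0
            for dr, dc, code in D8:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    dist = 2 ** 0.5 if dr and dc else 1.0  # diagonal distance
                    drop = (dem[r][c] - dem[rr][cc]) / dist
                    if drop > best_drop:
                        best_drop, best_code = drop, code
            fdir[r][c] = best_code
    return fdir
```

    Because no cell writes anything another cell reads, calling this per sub-basin from separate threads needs no locking, which is the property the .NET parallelization exploits.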

  20. Enhancing Application Performance Using Mini-Apps: Comparison of Hybrid Parallel Programming Paradigms

    Science.gov (United States)

    Lawson, Gary; Sosonkina, Masha; Baurle, Robert; Hammond, Dana

    2017-01-01

    In many fields, real-world applications for High Performance Computing have already been developed. For these applications to stay up-to-date, new parallel strategies must be explored to yield the best performance; however, restructuring or modifying a real-world application may be daunting depending on the size of the code. In this case, a mini-app may be employed to quickly explore such options without modifying the entire code. In this work, several mini-apps have been created to enhance a real-world application performance, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms with Message Passing Interface (MPI) for distributed memory access and either Shared MPI (SMPI) or OpenMP for shared memory accesses. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes. A maximum speedup of 23 was measured for MPI+SMPI, but only 11 was measured for MPI+OpenMP.

  1. Mobile and replicated alignment of arrays in data-parallel programs

    Science.gov (United States)

    Chatterjee, Siddhartha; Gilbert, John R.; Schreiber, Robert

    1993-01-01

    When a data-parallel language like FORTRAN 90 is compiled for a distributed-memory machine, aggregate data objects (such as arrays) are distributed across the processor memories. The mapping determines the amount of residual communication needed to bring operands of parallel operations into alignment with each other. A common approach is to break the mapping into two stages: first, an alignment that maps all the objects to an abstract template, and then a distribution that maps the template to the processors. We solve two facets of the problem of finding alignments that reduce residual communication: we determine alignments that vary in loops, and objects that should have replicated alignments. We show that loop-dependent mobile alignment is sometimes necessary for optimum performance, and we provide algorithms with which a compiler can determine good mobile alignments for objects within do loops. We also identify situations in which replicated alignment is either required by the program itself (via spread operations) or can be used to improve performance. We propose an algorithm based on network flow that determines which objects to replicate so as to minimize the total amount of broadcast communication in replication. This work on mobile and replicated alignment extends our earlier work on determining static alignment.

  2. Parallel Conjugate Gradient: Effects of Ordering Strategies, Programming Paradigms, and Architectural Platforms

    Science.gov (United States)

    Oliker, Leonid; Heber, Gerd; Biswas, Rupak

    2000-01-01

    The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique to solve sparse linear systems that are symmetric and positive definite. A sparse matrix-vector multiply (SPMV) usually accounts for most of the floating-point operations within a CG iteration. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and SPMV using different programming paradigms and architectures. Results show that for this class of applications, ordering significantly improves overall performance, that cache reuse may be more important than reducing communication, and that it is possible to achieve message passing performance using shared memory constructs through careful data ordering and distribution. However, a multi-threaded implementation of CG on the Tera MTA does not require special ordering or partitioning to obtain high efficiency and scalability.
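    To make the CG/SPMV relationship concrete, here is a compact illustrative sketch (plain serial Python, not the paper's parallel implementations) of an unpreconditioned CG solve in which the CSR matrix-vector product is the inner kernel. The ordering and partitioning strategies studied in the paper effectively permute these indptr/indices arrays to improve cache reuse and reduce communication.

```python
def spmv(indptr, indices, data, x):
    """CSR sparse matrix-vector product: the kernel that dominates a CG iteration."""
    y = [0.0] * (len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

def cg(indptr, indices, data, b, tol=1e-10, maxit=200):
    """Unpreconditioned conjugate gradient for a symmetric positive definite
    matrix A stored in CSR form; solves A x = b starting from x = 0."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                  # residual r = b - A*0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ap = spmv(indptr, indices, data, p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

    Every other operation in the loop is a dense vector update, so the memory-access pattern of spmv, set by the row ordering, is what the paper's experiments vary.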

  3. Parallelizing Gene Expression Programming Algorithm in Enabling Large-Scale Classification

    Directory of Open Access Journals (Sweden)

    Lixiong Xu

    2017-01-01

    As one of the most effective function mining algorithms, the Gene Expression Programming (GEP) algorithm has been widely used in classification, pattern recognition, prediction, and other research fields. Based on self-evolution, GEP is able to mine an optimal function for dealing with complicated tasks. However, in big data research, GEP suffers from low efficiency due to its time-consuming mining process. To improve the efficiency of GEP in big data research, especially for processing large-scale classification tasks, this paper presents a parallelized GEP algorithm using the MapReduce computing model. The experimental results show that the presented algorithm is scalable and efficient for processing large-scale classification tasks.
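    The expensive part of GEP is fitness evaluation, which is embarrassingly parallel over training records. The tiny sketch below (plain Python, not the paper's MapReduce code; all names and the shard layout are illustrative) shows the map/reduce decomposition: each mapper scores every candidate chromosome on its shard of records, and the reducer sums the per-shard scores.

```python
from functools import reduce

def map_shard(shard, chromosomes):
    """Mapper: per-chromosome squared error on one shard of (x, y) records."""
    return {cid: sum((f(x) - y) ** 2 for x, y in shard)
            for cid, f in chromosomes.items()}

def reduce_scores(partials):
    """Reducer: sum partial scores keyed by chromosome id across shards."""
    return reduce(lambda a, b: {k: a.get(k, 0.0) + b.get(k, 0.0)
                                for k in set(a) | set(b)}, partials, {})

def evaluate(records, chromosomes, n_shards=2):
    """Driver: split records into shards and combine the mappers' outputs.
    The map_shard calls are independent and would run on separate workers."""
    shards = [records[i::n_shards] for i in range(n_shards)]
    return reduce_scores([map_shard(s, chromosomes) for s in shards])
```

    Because squared error is additive over records, the reduce step is a plain sum, which is what makes the fitness loop fit the MapReduce model so naturally.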

  4. Progress and future direction for the interim safe storage and disposal of Hanford high level waste (HLW)

    International Nuclear Information System (INIS)

    Wodrich, D.D.

    1996-01-01

    This paper describes the progress made at the largest environmental cleanup program in the United States. Substantial advances in methods to start interim safe storage of Hanford Site high-level wastes, waste characterization to support both safety- and disposal-related information needs, and proceeding with cost-effective disposal by the US DOE and its Hanford Site contractors, have been realized. Challenges facing the Tank Waste Remediation System Program, which is charged with the dual and parallel missions of interim safe storage and disposal of the high-level tank waste stored at the Hanford Site, are described

  5. Decommissioning high-level waste surface facilities

    International Nuclear Information System (INIS)

    1978-04-01

    The protective storage, entombment, and dismantlement options for decommissioning a High-Level Waste Surface Facility (HLWSF) were investigated. A reference conceptual design for the facility was developed based on the designs of similar facilities. State-of-the-art decommissioning technologies were identified. Program plans and cost estimates for decommissioning the reference conceptual designs were developed. Good engineering design concepts were identified on the basis of this work.

  6. A middle evaluation report on R and D subjects in 2000 fiscal year. Evaluation subject; 'draft of total program on ground disposal research on the high level radioactive wastes'

    International Nuclear Information System (INIS)

    2000-11-01

    The Japan Nuclear Cycle Development Institute (JNC) referred this interim evaluation to the Subject Evaluation Committee (SEC) in accordance with the 'Schematic indication on practice procedure of evaluation common to the generalized national R and D' and related guidelines. The SEC on waste treatment and disposal evaluated the subject on the basis of documents submitted by JNC and of discussions at the SEC, following an evaluation procedure determined by the SEC. The program is intended to contribute, on the technical side, to the realization of safe disposal of high-level radioactive wastes. In order to advance the program toward actual disposal without delay and to make a smooth transition to its next stage, it is essential for JNC to prepare an R and D plan following the second summary. On these grounds, the program is precise and adequate in its aim and significance, high in importance, and consistent with government policy and the needs of society. The evaluation results were summarized together with the documents submitted by JNC. The evaluation concluded that, while the general direction of the program was judged to be valid, its contents are in places inconsistent with, or not clearly matched to, the schedules of the government and the implementing organizations on the practice of disposal, and it noted several points requiring attention in promoting the program. (G.K.)

  7. Mathematical Methods and Algorithms of Mobile Parallel Computing on the Base of Multi-core Processors

    Directory of Open Access Journals (Sweden)

    Alexander B. Bakulev

    2012-11-01

    This article deals with mathematical models and algorithms that provide mobility for the parallel representation of sequential programs in a high-level language. It presents a formal model of operating-environment process management, based on the proposed model of parallel program representation, which describes the computation process on multi-core processors.

  8. Distributed Memory Programming on Many-Cores

    DEFF Research Database (Denmark)

    Berthold, Jost; Dieterle, Mischa; Lobachev, Oleg

    2009-01-01

    Eden is a parallel extension of the lazy functional language Haskell providing dynamic process creation and automatic data exchange. As a Haskell extension, Eden takes a high-level approach to parallel programming and thereby simplifies parallel program development. The current implementation is ...

  9. The siting record: An account of the programs of federal agencies and events that have led to the selection of a potential site for a geologic repository for high-level radioactive waste

    Energy Technology Data Exchange (ETDEWEB)

    Lomenick, T.F.

    1996-03-01

    This record of siting a geologic repository for high-level radioactive wastes (HLW) and spent fuel describes the many investigations that culminated on December 22, 1987 in the designation of Yucca Mountain (YM), as the site to undergo detailed geologic characterization. It recounts the important issues and events that have been instrumental in shaping the course of siting over the last three and one half decades. In this long task, which was initiated in 1954, more than 60 regions, areas, or sites involving nine different rock types have been investigated. This effort became sharply focused in 1983 with the identification of nine potentially suitable sites for the first repository. From these nine sites, five were subsequently nominated by the U.S. Department of Energy (DOE) as suitable for characterization and then, in 1986, as required by the Nuclear Waste Policy Act of 1982 (NWPA), three of these five were recommended to the President as candidates for site characterization. President Reagan approved the recommendation on May 28, 1986. DOE was preparing site characterization plans for the three candidate sites, namely Deaf Smith County, Texas; Hanford Site, Washington; and YM. As a consequence of the 1987 Amendment to the NWPA, only the latter was authorized to undergo detailed characterization. A final Site Characterization Plan for Yucca Mountain was published in 1988. Prior to 1954, there was no program for the siting of disposal facilities for high-level waste (HLW). In the 1940s and 1950s, the volume of waste, which was small and which resulted entirely from military weapons and research programs, was stored as a liquid in large steel tanks buried at geographically remote government installations principally in Washington and Tennessee.

  10. The siting record: An account of the programs of federal agencies and events that have led to the selection of a potential site for a geologic repository for high-level radioactive waste

    International Nuclear Information System (INIS)

    Lomenick, T.F.

    1996-03-01

    This record of siting a geologic repository for high-level radioactive wastes (HLW) and spent fuel describes the many investigations that culminated on December 22, 1987 in the designation of Yucca Mountain (YM), as the site to undergo detailed geologic characterization. It recounts the important issues and events that have been instrumental in shaping the course of siting over the last three and one half decades. In this long task, which was initiated in 1954, more than 60 regions, areas, or sites involving nine different rock types have been investigated. This effort became sharply focused in 1983 with the identification of nine potentially suitable sites for the first repository. From these nine sites, five were subsequently nominated by the U.S. Department of Energy (DOE) as suitable for characterization and then, in 1986, as required by the Nuclear Waste Policy Act of 1982 (NWPA), three of these five were recommended to the President as candidates for site characterization. President Reagan approved the recommendation on May 28, 1986. DOE was preparing site characterization plans for the three candidate sites, namely Deaf Smith County, Texas; Hanford Site, Washington; and YM. As a consequence of the 1987 Amendment to the NWPA, only the latter was authorized to undergo detailed characterization. A final Site Characterization Plan for Yucca Mountain was published in 1988. Prior to 1954, there was no program for the siting of disposal facilities for high-level waste (HLW). In the 1940s and 1950s, the volume of waste, which was small and which resulted entirely from military weapons and research programs, was stored as a liquid in large steel tanks buried at geographically remote government installations principally in Washington and Tennessee

  11. SPSS and SAS programs for determining the number of components using parallel analysis and velicer's MAP test.

    Science.gov (United States)

    O'Connor, B P

    2000-08-01

    Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
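    The logic of Horn's parallel analysis is easy to state in any matrix language. The sketch below (Python/NumPy, purely illustrative and unrelated to the SPSS/SAS programs the paper distributes) retains components whose observed eigenvalues exceed the 95th percentile of eigenvalues obtained from random normal data of the same dimensions.

```python
import numpy as np

def parallel_analysis(data, n_sims=100, percentile=95, seed=0):
    """Horn's parallel analysis: keep the components of the correlation
    matrix whose eigenvalues exceed those of same-shaped random data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.empty((n_sims, p))
    for s in range(n_sims):
        r = rng.standard_normal((n, p))
        rand[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    thresh = np.percentile(rand, percentile, axis=0)  # per-root random baseline
    return int(np.sum(obs > thresh)), obs, thresh
```

    Unlike the eigenvalues-greater-than-one rule, the cutoff here adapts to the sample size and number of variables, which is why the procedure is the one statisticians recommend.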

  12. Optimizing High Level Waste Disposal

    International Nuclear Information System (INIS)

    Dirk Gombert

    2005-01-01

    If society is ever to reap the potential benefits of nuclear energy, technologists must close the fuel cycle completely. A closed cycle equates to a continued supply of fuel and safe reactors, but also reliable and comprehensive closure of waste issues. High level waste (HLW) disposal in borosilicate glass (BSG) is based on 1970s era evaluations. This host matrix is very adaptable to sequestering a wide variety of radionuclides found in raffinates from spent fuel reprocessing. However, it is now known that the current system is far from optimal for disposal of the diverse HLW streams, and proven alternatives are available to reduce costs by billions of dollars. The basis for HLW disposal should be reassessed to consider extensive waste form and process technology research and development efforts, which have been conducted by the United States Department of Energy (USDOE), international agencies and the private sector. Matching the waste form to the waste chemistry and using currently available technology could increase the waste content in waste forms to 50% or more and double processing rates. Optimization of the HLW disposal system would accelerate HLW disposition and increase repository capacity. This does not necessarily require developing new waste forms; the emphasis should be on qualifying existing matrices to demonstrate protection equal to or better than the baseline glass performance. Also, this proposed effort does not necessarily require developing new technology concepts. The emphasis is on demonstrating existing technology that is clearly better (reliability, productivity, cost) than current technology, and justifying its use in future facilities or retrofitted facilities. Higher waste processing and disposal efficiency can be realized by performing the engineering analyses and trade-studies necessary to select the most efficient methods for processing the full spectrum of wastes across the nuclear complex. This paper will describe technologies being

  13. From functional programming to multicore parallelism: A case study based on Presburger Arithmetic

    DEFF Research Database (Denmark)

    Dung, Phan Anh; Hansen, Michael Reichhardt

    2011-01-01

    , we are interested in using PA in connection with the Duration Calculus Model Checker (DCMC) [5]. There are effective decision procedures for PA including Cooper’s algorithm and the Omega Test; however, their complexity is extremely high with doubly exponential lower bound and triply exponential upper...... bound [7]. We investigate these decision procedures in the context of multicore parallelism with the hope of exploiting multicore powers. Unfortunately, we are not aware of any prior parallelism research related to decision procedures for PA. The closest work is the preliminary results on parallelism...

  14. Economics of defense high-level waste management in the United States

    International Nuclear Information System (INIS)

    Slate, S.C.; McDonell, W.R.

    1987-01-01

    The Department of Energy (DOE) is responsible for managing defense high-level wastes (DHLW) from U.S. defense activities using environmentally safe and cost-effective methods. In parallel with its technical programs, the DOE is performing economic studies to ensure that costs are minimized. To illustrate the cost estimating techniques and to provide a sense of cost magnitude, the DHLW costs for the Savannah River Plant (SRP) are calculated. Since operations at SRP must be optimized within relatively fixed management practices, the estimation of incremental costs is emphasized. Treatment and disposal costs are shown to equally contribute to the incremental cost of almost $400,000/canister

  15. Development of whole core thermal-hydraulic analysis program ACT. 4. Simplified fuel assembly model and parallelization by MPI

    International Nuclear Information System (INIS)

    Ohshima, Hiroyuki

    2001-10-01

    A whole core thermal-hydraulic analysis program ACT is being developed for the purpose of evaluating detailed in-core thermal hydraulic phenomena of fast reactors, including the effect of the flow between wrapper-tube walls (inter-wrapper flow), under various reactor operation conditions. As appropriate boundary conditions in addition to a detailed modeling of the core are essential for accurate simulations of in-core thermal hydraulics, ACT consists of not only fuel assembly and inter-wrapper flow analysis modules but also a heat transport system analysis module that gives the response of the plant dynamics to the core model. This report describes the incorporation of a simplified model into the fuel assembly analysis module and program parallelization by a message passing method toward large-scale simulations. ACT has a fuel assembly analysis module which can simulate a whole fuel pin bundle in each fuel assembly of the core, but it may require large amounts of CPU time for a large-scale core simulation. Therefore, a simplified fuel assembly model that is thermal-hydraulically equivalent to the detailed one has been incorporated in order to save simulation time and resources. This simplified model is applied to several parts of fuel assemblies in a core where detailed simulation results are not required. With regard to the program parallelization, the calculation load and the data flow of ACT were analyzed, and an optimal parallelization was implemented, including improvements to ACT's numerical simulation algorithm. Message Passing Interface (MPI) is applied to data communication between processes and synchronization in parallel calculations. Parallelized ACT was verified through a comparison simulation with the original one. In addition to the above work, input manuals for the core analysis module and the heat transport system analysis module have been prepared. (author)
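
    The message-passing decomposition described above follows the standard ghost-cell ("halo") pattern. The toy sketch below (illustrative only; ACT itself uses MPI between processes) performs the exchange serially on a 1-D explicit diffusion problem so that the pattern fits in a few lines, and checks that the decomposed result matches the serial one.

```python
# Ghost-cell sketch: split a 1-D grid into subdomains, pass one boundary
# value per neighbour each step ("communication"), then update locally.
# All names are illustrative, not ACT's actual code.

def diffuse_serial(u, steps, c=0.25):
    for _ in range(steps):
        u = [u[0]] + [u[i] + c * (u[i-1] - 2*u[i] + u[i+1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

def diffuse_decomposed(u, steps, parts=4, c=0.25):
    big_n = len(u)
    n = big_n // parts                       # assume parts divides len(u)
    subs = [u[r*n:(r+1)*n] for r in range(parts)]
    for _ in range(steps):
        # halo exchange: each subdomain receives one ghost cell per neighbour
        ghosts = [(subs[r-1][-1] if r > 0 else None,
                   subs[r+1][0] if r < parts - 1 else None)
                  for r in range(parts)]
        new_subs = []
        for r, s in enumerate(subs):
            gl, gr = ghosts[r]
            ext = ([gl] if gl is not None else []) + s \
                + ([gr] if gr is not None else [])
            off = 1 if gl is not None else 0
            new = []
            for j in range(len(s)):
                g = r*n + j                  # global index of local cell j
                if g == 0 or g == big_n - 1:
                    new.append(s[j])         # fixed physical boundary
                else:
                    k = j + off
                    new.append(ext[k] + c * (ext[k-1] - 2*ext[k] + ext[k+1]))
            new_subs.append(new)
        subs = new_subs
    return [x for s in subs for x in s]
```

    With a real MPI code, the ghost exchange becomes a pair of send/receive calls per neighbour, but the update logic is the same.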

  16. High-level waste processing and disposal

    International Nuclear Information System (INIS)

    Crandall, J.L.; Krause, H.; Sombret, C.; Uematsu, K.

    1984-11-01

    Without reprocessing, spent LWR fuel itself is generally considered an acceptable waste form. With reprocessing, borosilicate glass canisters have now gained general acceptance for waste immobilization. The current first choice for disposal is emplacement in an engineered structure in a mined cavern at a depth of 500-1000 meters. A variety of rock types are being investigated, including basalt, clay, granite, salt, shale, and volcanic tuff. This paper gives specific coverage to the national high level waste disposal plans of France, the Federal Republic of Germany, Japan and the United States. The French nuclear program assumes prompt reprocessing of its spent fuels, and France has already constructed the AVM. Two larger borosilicate glass plants are planned for a new French reprocessing plant at La Hague. France plans to hold the glass canisters in near-surface storage for a forty to sixty year cooling period and then to place them into a mined repository. The FRG and Japan also plan reprocessing for their LWR fuels. Both are currently having some fuel reprocessed by France, but both are also planning reprocessing plants which will include waste vitrification facilities. West Germany is now constructing the PAMELA Plant at Mol, Belgium, to vitrify high level reprocessing wastes at the shutdown Eurochemic Plant. Japan is now operating a vitrification mockup test facility and plans a pilot plant facility at the Tokai reprocessing plant by 1990. Both countries have active geologic repository programs. The United States program assumes little LWR fuel reprocessing and is thus primarily aimed at direct disposal of spent fuel into mined repositories. However, the US has two borosilicate glass plants under construction to vitrify existing reprocessing wastes

  17. Diderot: a Domain-Specific Language for Portable Parallel Scientific Visualization and Image Analysis.

    Science.gov (United States)

    Kindlmann, Gordon; Chiw, Charisee; Seltzer, Nicholas; Samuels, Lamont; Reppy, John

    2016-01-01

    Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.

  18. Investigation of the applicability of a functional programming model to fault-tolerant parallel processing for knowledge-based systems

    Science.gov (United States)

    Harper, Richard

    1989-01-01

    In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead by the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.

  19. Removing high-level contaminants

    International Nuclear Information System (INIS)

    Wallace, Paula

    2013-01-01

    Full text: Using biomimicry, an Australian cleantech innovation making inroads into China's industrial sector offers multiple benefits to miners and processors in Australia. Stephen Shelley, the executive chairman of Creative Water Technology (CWT), was on hand at a recent trade show to explain how his Melbourne company has developed world-class techniques in zero liquid discharge and fractional crystallization of minerals to apply to a wide range of water treatment and recycling applications. “Most existing technologies operate with high energy distillation, filters or biological processing. CWT's appliance uses a low temperature, thermal distillation process known as adiabatic recovery to desalinate, dewater and/or recycle highly saline and highly contaminated waste water,” said Shelley. The technology has been specifically designed to handle the high levels of contaminant that alternative technologies struggle to process, with proven water quality results for feed water samples with TDS levels over 300,000 ppm converted to clean water with less than 20 ppm. Comparatively, reverse osmosis struggles to process contaminant levels over 70,000 ppm effectively. “CWT is able to reclaim up to 97% clean usable water and up to 100% of the contaminants contained in the feed water,” said Shelley, adding that soluble and insoluble contaminants are separately extracted and dried for sale or re-use. In industrial applications CWT has successfully processed feed water with contaminant levels over 650,000 mg/L without the use of chemicals. “The technology would be suitable for companies in oil exploration and production, mining, smelting, biofuels, textiles and the agricultural and food production sectors,” said Shelley. When compared to a conventional desalination plant, the CWT system is able to capture the value in the brine that most plants discard, not only from the salt but the additional water it contains. “If you recover those two commodities... then you

  20. Parallel Object Oriented MD Simulation Program for Long Time Simulations of Metallic Glasses and Undercooled Liquids

    Science.gov (United States)

    Böddeker, B.; Teichler, H.

    The MD simulation program TABB is motivated by the need for long-time simulations for the investigation of slow processes near the glass transition of glass-forming alloys. TABB is written in C++ with a high degree of flexibility: TABB allows the use of any short-ranged pair potentials or EAM potentials, by generating and using a spline representation of all functions and their derivatives. TABB supports several numerical integration algorithms, like the Runge-Kutta or the modified Gear predictor-corrector algorithm of order five. The boundary conditions can be chosen to resemble the geometry of bulk materials or films. The simulation box length or the pressure can be fixed for each dimension separately. TABB may be used in isokinetic, isoenergetic or canonical (with random forces) mode. TABB contains a simple instruction interpreter to easily control the parameters and options during the simulation. The same source code can be compiled either for workstations or for parallel computers. The main optimization goal of TABB is to allow long-time simulations of medium or small sized systems. To make this possible, much attention is paid to optimizing the communication between the nodes. TABB uses a domain decomposition procedure. To use many nodes with a small system, the domain size has to be small compared to the range of particle interactions. In the limit of many nodes for only few atoms, the bottleneck of communication is the latency time. TABB minimizes the number of pairs of domains containing atoms that interact between these domains. This procedure minimizes the number of communication calls between pairs of nodes. TABB decides automatically to how many, and in which directions, the decomposition shall be applied. E.g., in the case of one-dimensional domain decomposition, the simulation box is only split into "slabs" along a selected direction.
The three-dimensional domain decomposition is best with respect to the number of interacting domains only for simulations
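
    The trade-off sketched above can be made concrete with a toy count of face-adjacent domain pairs: a 1-D "slab" split of 8 nodes produces 7 neighbouring pairs, while a 2 x 2 x 2 split produces 12, so when latency dominates the slab split sends fewer messages. (This is only an illustration; TABB's actual heuristic also weighs message sizes and interaction ranges.)

```python
# Count face-adjacent domain pairs for a p1 x p2 x p3 decomposition and
# pick the factorization of a node count that minimizes that number.

def face_pairs(shape):
    p1, p2, p3 = shape
    return (p1 - 1) * p2 * p3 + p1 * (p2 - 1) * p3 + p1 * p2 * (p3 - 1)

def fewest_pair_decomposition(nodes):
    shapes = [(a, b, nodes // (a * b))
              for a in range(1, nodes + 1) if nodes % a == 0
              for b in range(1, nodes // a + 1) if (nodes // a) % b == 0]
    return min(shapes, key=face_pairs)

assert face_pairs((8, 1, 1)) == 7   # 1-D slabs
assert face_pairs((2, 2, 2)) == 12  # 3-D blocks
```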

  1. High-level trigger system for the LHC ALICE experiment

    CERN Document Server

    Bramm, R; Lien, J A; Lindenstruth, V; Loizides, C; Röhrich, D; Skaali, B; Steinbeck, T M; Stock, Reinhard; Ullaland, K; Vestbø, A S; Wiebalck, A

    2003-01-01

    The central detectors of the ALICE experiment at LHC will produce a data size of up to 75 MB/event at an event rate of up to approximately 200 Hz, resulting in a data rate of roughly 15 GB/s. Online processing of the data is necessary in order to select interesting (sub)events ("High Level Trigger"), or to compress data efficiently by modeling techniques. Processing this data requires a massive parallel computing system (High Level Trigger System). The system will consist of a farm of clustered SMP-nodes based on off-the-shelf PCs connected with a high bandwidth low latency network.
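
    The quoted figures are mutually consistent, as a one-line back-of-envelope check shows:

```python
# 75 MB/event at ~200 Hz gives the quoted aggregate input rate of ~15 GB/s.
event_size_mb = 75
event_rate_hz = 200
rate_gb_per_s = event_size_mb * event_rate_hz / 1000.0  # MB/s -> GB/s
assert rate_gb_per_s == 15.0
```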

  2. Development of parallel 3D discrete ordinates transport program on JASMIN framework

    International Nuclear Information System (INIS)

    Cheng, T.; Wei, J.; Shen, H.; Zhong, B.; Deng, L.

    2015-01-01

    A parallel 3D discrete ordinates radiation transport code JSNT-S is developed, aiming at simulating real-world radiation shielding and reactor physics applications in a reasonable time. Through the patch-based domain partition algorithm, the memory requirement is shared among processors, and a space-angle parallel sweeping algorithm is developed based on a data-driven algorithm. Acceleration methods such as partial current rebalance are implemented. Correctness is proved through the VENUS-3 and other benchmark models. In the radiation shielding calculation of the Qinshan-II reactor pressure vessel model with 24.3 billion DoF, only 88 seconds are required, and an overall parallel efficiency of 44% is achieved on 1536 CPU cores. (author)
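
    Parallel efficiency relates achieved speedup to the core count (efficiency = speedup / cores), so the quoted 44% on 1536 cores corresponds to an effective speedup of roughly 676 over a single core:

```python
# Effective speedup implied by a given parallel efficiency and core count.
def effective_speedup(efficiency, cores):
    return efficiency * cores

assert round(effective_speedup(0.44, 1536)) == 676
```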

  3. Contribution to the algorithmic and efficient programming of new parallel architectures including accelerators for neutron physics and shielding computations

    International Nuclear Information System (INIS)

    Dubois, J.

    2011-01-01

    In science, simulation is a key process for research and validation. Modern computer technology allows faster numerical experiments, which are cheaper than real models. In the field of neutron simulation, the calculation of eigenvalues is one of the key challenges. The complexity of these problems is such that a lot of computing power may be necessary. The work of this thesis is, first, the evaluation of new computing hardware such as graphics cards or massively multi-core chips, and their application to eigenvalue problems for neutron simulation. Then, in order to address the massive parallelism of national supercomputers, we also study the use of asynchronous hybrid methods for solving eigenvalue problems at this very high level of parallelism. We then run the experiments of this research on several national supercomputers, such as the Titane hybrid machine of the Computing, Research and Technology Centre (CCRT), the Curie machine of the Very Large Computing Centre (TGCC), currently being installed, and the Hopper machine at the Lawrence Berkeley National Laboratory (LBNL). We also run our experiments on local workstations to illustrate the interest of this research for everyday use with local computing resources. (author) [fr]
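
    The textbook building block behind dominant-eigenvalue solvers of this kind (e.g. for the multiplication factor in neutron transport) is power iteration. The thesis' hybrid asynchronous methods are far more elaborate; this minimal sketch only shows the underlying idea.

```python
# Minimal power iteration: repeatedly apply the matrix and normalize,
# converging to the dominant eigenvalue (infinity-norm estimate).

def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def power_iteration(A, iters=100):
    x = [1.0] * len(A)
    lam = 0.0
    for _ in range(iters):
        y = mat_vec(A, x)
        lam = max(abs(v) for v in y)   # eigenvalue estimate
        x = [v / lam for v in y]       # renormalize the iterate
    return lam

# dominant eigenvalue of [[2,1],[1,2]] is 3
assert abs(power_iteration([[2.0, 1.0], [1.0, 2.0]]) - 3.0) < 1e-9
```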

  4. An object-oriented bulk synchronous parallel library for multicore programming

    NARCIS (Netherlands)

    Yzelman, A.N.; Bisseling, R.H.

    2012-01-01

    We show that the bulk synchronous parallel (BSP) model, originally designed for distributed-memory systems, is also applicable for shared-memory multicore systems and, furthermore, that BSP libraries are useful in scientific computing on these systems. A proof-of-concept MulticoreBSP library has
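
    The BSP model structures a computation into supersteps: local computation, then communication, then a global barrier. The toy sketch below mimics this on shared memory with threads (illustrative only; the MulticoreBSP library provides a much richer API).

```python
import threading

# BSP-style parallel sum: each "processor" computes a local partial sum
# (superstep 1), the barrier ends the superstep, then processor 0 combines
# the partial results (superstep 2).

def bsp_sum(data, nprocs=4):
    barrier = threading.Barrier(nprocs)
    chunks = [data[i::nprocs] for i in range(nprocs)]
    partial = [0] * nprocs
    total = []

    def worker(pid):
        partial[pid] = sum(chunks[pid])   # superstep 1: local computation
        barrier.wait()                    # bulk synchronization
        if pid == 0:                      # superstep 2: combine results
            total.append(sum(partial))

    threads = [threading.Thread(target=worker, args=(p,)) for p in range(nprocs)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]

assert bsp_sum(list(range(101))) == 5050
```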

  5. High level programming for the control of a tele operating mobile robot and with line following; Programacion de alto nivel para el control de un robot movil teleoperado y con seguimiento de linea

    Energy Technology Data Exchange (ETDEWEB)

    Bernal U, E. [Instituto Tecnologico de Toluca, Metepec, Estado de Mexico (Mexico)

    2006-07-01

    The TRASMAR automated vehicle was built for transporting radioactive materials. It has a kinematic structure similar to that of a tricycle, in which the front wheel provides both traction and steering while the two rear wheels rotate freely on a common axle. The electronic design was based on a Motorola MC68HC811 microcontroller. Among the robot's features, it has an obstacle perception system based on three ultrasonic sensors located at the front of the vehicle to avoid collisions. The robot has two operating modes: the main mode is manual, commanded through an infrared remote control, but it can also move autonomously by following a line using two reflective infrared sensors. Like any other electronic system, the mobile robot required improvements and upgrades, and the modifications carried out here focused on the control stage. The upgrade consisted of incorporating the MC68HC912B32 microcontroller and replacing the assembly language typical of such systems with a high-level language for microcontrollers of this type, in this case FORTH. Likewise, the robot's autonomous line-following function was implemented in the program using a fuzzy logic controller. The work is organized as follows: Chapter 1 presents the robot's characteristics, the objectives set at the beginning of the project, and the justification for this upgrade. Chapters 2 to 5 present the theoretical background for the upgrade: the microcontroller modules used, the main characteristics of the FORTH language, the theory of fuzzy logic, and the design of the stage

  7. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    [ABSTRACT] Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chip in a high-performance workstation. With their high-performance computation capabilities, GPUs deliver much more powerful performance than conventional CPUs by means of parallel processing. In 2007, the birth of the Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in the general purpose GPU a...

  8. Implementing the PM Programming Language using MPI and OpenMP - a New Tool for Programming Geophysical Models on Parallel Systems

    Science.gov (United States)

    Bellerby, Tim

    2015-04-01

    PM (Parallel Models) is a new parallel programming language specifically designed for writing environmental and geophysical models. The language is intended to enable implementers to concentrate on the science behind the model rather than the details of running on parallel hardware. At the same time PM leaves the programmer in control - all parallelisation is explicit and the parallel structure of any given program may be deduced directly from the code. This paper describes a PM implementation based on the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) standards, looking at issues involved with translating the PM parallelisation model to MPI/OpenMP protocols and considering performance in terms of the competing factors of finer-grained parallelisation and increased communication overhead. In order to maximise portability, the implementation stays within the MPI 1.3 standard as much as possible, with MPI-2 MPI-IO file handling the only significant exception. Moreover, it does not assume a thread-safe implementation of MPI. PM adopts a two-tier abstract representation of parallel hardware. A PM processor is a conceptual unit capable of efficiently executing a set of language tasks, with a complete parallel system consisting of an abstract N-dimensional array of such processors. PM processors may map to single cores executing tasks using cooperative multi-tasking, to multiple cores or even to separate processing nodes, efficiently sharing tasks using algorithms such as work stealing. While tasks may move between hardware elements within a PM processor, they may not move between processors without specific programmer intervention. Tasks are assigned to processors using a nested parallelism approach, building on ideas from Reyes et al. (2009). The main program owns all available processors. When the program enters a parallel statement then either processors are divided out among the newly generated tasks (number of new tasks number of processors
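
    The nested-parallelism scheme described above divides the processors owned by a program among newly generated tasks. As an illustrative sketch only (not PM's actual algorithm), an even division with the remainder going to the first tasks might look like:

```python
# Divide `procs` processors among `ntasks` tasks as evenly as possible;
# any remainder goes to the first tasks. Illustrative, not PM's code.

def divide_processors(procs, ntasks):
    base, extra = divmod(procs, ntasks)
    return [base + (1 if t < extra else 0) for t in range(ntasks)]

assert divide_processors(8, 3) == [3, 3, 2]
```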

  9. High-level nuclear waste disposal

    International Nuclear Information System (INIS)

    Burkholder, H.C.

    1985-01-01

    The meeting was timely because many countries had begun their site selection processes and their engineering designs were becoming well-defined. The technology of nuclear waste disposal was maturing, and the institutional issues arising from the implementation of that technology were being confronted. Accordingly, the program was structured to consider both the technical and institutional aspects of the subject. The meeting started with a review of the status of the disposal programs in eight countries and three international nuclear waste management organizations. These invited presentations allowed listeners to understand the similarities and differences among the various national approaches to solving this very international problem. Then seven invited presentations describing nuclear waste disposal from different perspectives were made. These included: legal and judicial, electric utility, state governor, ethical, and technical perspectives. These invited presentations uncovered several issues that may need to be resolved before high-level nuclear wastes can be emplaced in a geologic repository in the United States. Finally, there were sixty-six contributed technical presentations organized in ten sessions around six general topics: site characterization and selection, repository design and in-situ testing, package design and testing, disposal system performance, disposal and storage system cost, and disposal in the overall waste management system context. These contributed presentations provided listeners with the results of recent applied R&D in each of the subject areas

  10. Answers to your questions on high-level nuclear waste

    International Nuclear Information System (INIS)

    1987-11-01

    This booklet contains answers to frequently asked questions about high-level nuclear wastes. Written for the layperson, the document contains basic information on the hazards of radiation, the Nuclear Waste Management Program, the proposed geologic repository, the proposed monitored retrievable storage facility, risk assessment, and public participation in the program

  11. Extending Java for High-Level Web Service Construction

    DEFF Research Database (Denmark)

    Christensen, Aske Simon; Møller, Anders; Schwartzbach, Michael Ignatieff

    2003-01-01

    We incorporate innovations from the project into the Java language to provide high-level features for Web service programming. The resulting language, JWIG, contains an advanced session model and a flexible mechanism for dynamic construction of XML documents, in particular XHTML. To support program...

  12. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also include heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  13. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described.Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  14. Compiling the functional data-parallel language SaC for Microgrids of Self-Adaptive Virtual Processors

    NARCIS (Netherlands)

    Grelck, C.; Herhut, S.; Jesshope, C.; Joslin, C.; Lankamp, M.; Scholz, S.-B.; Shafarenko, A.

    2009-01-01

    We present preliminary results from compiling the high-level, functional and data-parallel programming language SaC into a novel multi-core design: Microgrids of Self-Adaptive Virtual Processors (SVPs). The side-effect free nature of SaC in conjunction with its data-parallel foundation make it an

  15. Neurite, a finite difference large scale parallel program for the simulation of electrical signal propagation in neurites under mechanical loading.

    Directory of Open Access Journals (Sweden)

    Julián A García-Grajales

    Full Text Available With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is generally only characterized by purely mechanistic criteria, functions of quantities such as stress, strain or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has rarely been explored. In particular, a quantitative mechanically based model of electrophysiological impairment in neuronal cells, Neurite, has only very recently been proposed. In this paper, we present the implementation details of this model: a finite difference parallel program for simulating electrical signal propagation along neurites under mechanical loading. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that increase in complexity as the network of simulated cells grows. The solvers implemented in Neurite, explicit and implicit, were therefore parallelized using graphics processing units in order to reduce the simulation costs of large-scale scenarios. Cable Theory and Hodgkin-Huxley models were implemented to account for the electrophysiological passive and active regions of a neurite, respectively, whereas a coupled mechanical model accounting for the neurite mechanical behavior within its surrounding medium was adopted as a link between electrophysiology and mechanics.
This paper provides the details of the parallel implementation of Neurite, along with three different application examples: a long myelinated axon
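
    The passive regions mentioned above are governed by the cable equation, tau * dV/dt = lambda^2 * d2V/dx2 - V. The sketch below is a generic textbook explicit finite-difference discretization of it (not Neurite's actual solver): an initial depolarization spreads along the cable and decays.

```python
# One explicit finite-difference step of the passive cable equation with
# the cable ends held at rest (0.0). Parameters lam (space constant) and
# tau (time constant) are illustrative defaults.

def cable_step(v, dt, dx, lam=1.0, tau=1.0):
    n = len(v)
    new = v[:]
    for i in range(1, n - 1):
        d2v = (v[i-1] - 2*v[i] + v[i+1]) / dx**2
        new[i] = v[i] + (dt / tau) * (lam**2 * d2v - v[i])
    return new

# an initial depolarization at the midpoint spreads and decays
v = [0.0] * 21
v[10] = 1.0
for _ in range(200):
    v = cable_step(v, dt=0.01, dx=0.5)
```

    The chosen step sizes satisfy the explicit-scheme stability condition (dt * lam^2 / dx^2 well below 0.5); larger steps would require the implicit solver the paper also describes.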

  16. Vectorization and parallelization of Monte-Carlo programs for calculation of radiation transport

    International Nuclear Information System (INIS)

    Seidel, R.

    1995-01-01

    The versatile MCNP-3B Monte-Carlo code, written in FORTRAN77 for simulation of the radiation transport of neutral particles, has been subjected to vectorization and parallelization of essential parts, without touching its versatility. Vectorization is not dependent on a specific computer. Several sample tasks have been selected in order to test the vectorized MCNP-3B code in comparison to the scalar MCNP-3B code. The samples are a representative example of the 3-D calculations to be performed for simulation of radiation transport in neutron and reactor physics. (1) 4π neutron detector. (2) High-energy calorimeter. (3) PROTEUS benchmark (conversion rates and neutron multiplication factors for the HCLWR (High Conversion Light Water Reactor)). (orig./HP) [de]

  17. Portable Parallel Programming for the Dynamic Load Balancing of Unstructured Grid Applications

    Science.gov (United States)

    Biswas, Rupak; Das, Sajal K.; Harvey, Daniel; Oliker, Leonid

    1999-01-01

    The ability to dynamically adapt an unstructured grid (or mesh) is a powerful tool for solving computational problems with evolving physical features; however, an efficient parallel implementation is rather difficult, particularly from the viewpoint of portability on various multiprocessor platforms. We address this problem by developing PLUM, an automatic and architecture-independent framework for adaptive numerical computations in a message-passing environment. Portability is demonstrated by comparing performance on an SP2, an Origin2000, and a T3E, without any code modifications. We also present a general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication pattern, with the goal of providing a global view of system loads across processors. Experiments on an SP2 and an Origin2000 demonstrate the portability of our approach, which achieves superb load balance at the cost of minimal extra overhead.
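
    Once a global view of per-processor loads is available (the role the SBN communication pattern plays above), rebalancing can be sketched with a simple greedy rule: repeatedly move a unit of work from the most loaded to the least loaded processor. This toy sketch is illustrative only, not PLUM's actual algorithm.

```python
# Greedy rebalancing over a global load vector: shift one unit of work at
# a time from the maximum to the minimum until loads differ by at most 1.

def balance(loads):
    loads = loads[:]
    while max(loads) - min(loads) > 1:
        loads[loads.index(max(loads))] -= 1
        loads[loads.index(min(loads))] += 1
    return loads

assert sorted(balance([10, 2, 0, 4])) == [4, 4, 4, 4]
```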

  17. Contributions to the optimization of the scalability of parallel programming of turbulent plasmas

    International Nuclear Information System (INIS)

    Rozar, F.

    2015-01-01

    The work realized through this thesis focuses on the optimization of the Gysela code, which simulates plasma turbulence. Optimization of a scientific application mainly concerns one of the three following points: 1) the simulation of larger meshes, 2) the reduction of computing time and 3) the enhancement of the computation accuracy. The first part of this manuscript presents the contributions relative to the simulation of larger meshes. As in many simulation codes, getting more realistic simulations often amounts to refining the meshes. The finer the mesh, the larger the memory consumption. Moreover, during the last few years, supercomputers have tended to provide less and less memory per compute core. For these reasons, we have developed a library, the libMTM (Modeling and Tracing Memory), dedicated to studying precisely the memory consumption of parallel software. The libMTM tools allowed us to reduce the memory consumption of Gysela and to study its scalability. As far as we know, no other tool provides equivalent features allowing this kind of memory scalability study. The second part of the manuscript presents the work relative to the optimization of computation time and the improvement of accuracy of the gyro-average operator. This operator represents a cornerstone of the gyrokinetic model used by the Gysela application. The improvement of accuracy emanates from a change in the computing method: a scheme based on a 2D Hermite interpolation replaces the Padé approximation. Although the new version of the gyro-average operator is more accurate, it is also more expensive in computation time than the former one. In order to keep simulation times reasonable, different optimizations have been performed on the new computing method to make it competitive. Finally, we have developed an MPI-parallelized version of the new gyro-average operator. The good scalability of this new gyro-average operator will eventually allow a reduction
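The Hermite interpolation mentioned for the gyro-average operator uses function values and derivatives at the grid nodes. A minimal 1D piecewise-cubic Hermite sketch (illustrative only; Gysela's operator is 2D and far more involved, and `hermite_interp` is a hypothetical name):

```python
import numpy as np

def hermite_interp(x, f, dfdx, xq):
    # Piecewise cubic Hermite interpolation: each interval uses the function
    # values f and derivatives dfdx at its two endpoints.
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h                       # local coordinate in [0, 1]
    h00 = 2 * t**3 - 3 * t**2 + 1             # Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * f[i] + h10 * h * dfdx[i] + h01 * f[i + 1] + h11 * h * dfdx[i + 1]
```

Because the scheme matches both values and slopes at the nodes, its error falls off as h⁴, which is the kind of accuracy gain the thesis attributes to the new gyro-average scheme relative to the Padé approximation.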

  19. Parallel Programming Application to Matrix Algebra in the Spectral Method for Control Systems Analysis, Synthesis and Identification

    Directory of Open Access Journals (Sweden)

    V. Yu. Kleshnin

    2016-01-01

    Full Text Available The article describes the matrix algebra libraries based on modern technologies of parallel programming for the Spectrum software, which can use a spectral method (in the spectral form of mathematical description) to analyse, synthesise and identify deterministic and stochastic dynamical systems. The developed matrix algebra libraries use the following technologies: for CPUs, OmniThreadLibrary, OpenMP, Intel Threading Building Blocks and Intel Cilk Plus; for GPUs, nVidia CUDA, OpenCL, and Microsoft Accelerated Massive Parallelism. The developed libraries support matrices with real elements (single and double precision). The matrix dimensions are limited only by the 32-bit or 64-bit memory model and the computer configuration. These libraries are general-purpose and can be used not only for the Spectrum software; they can also find application in other projects where there is a need to perform operations with large matrices. The article provides a comparative analysis of the developed libraries for various matrix operations (addition, subtraction, scalar multiplication, multiplication, powers of matrices, tensor multiplication, transpose, inverse matrix, finding a solution of a system of linear equations) through numerical experiments using different CPUs and GPUs. The article contains sample programs and performance test results for matrix multiplication, which requires the most computational resources of all the operations considered.
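As a rough illustration of the CPU-side parallelism such libraries exploit, here is a row-block parallel matrix multiply sketch in Python/NumPy (the article's libraries are native code; `matmul_blocked` and its structure are hypothetical). NumPy's BLAS releases the GIL, so a thread pool can give real concurrency:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def matmul_blocked(A, B, workers=4):
    # Row-block parallel matrix multiply: each thread computes one horizontal
    # band of C = A @ B. The bands are disjoint, so no locking is needed.
    blocks = np.array_split(np.arange(A.shape[0]), workers)
    C = np.empty((A.shape[0], B.shape[1]))
    def work(rows):
        C[rows] = A[rows] @ B
    with ThreadPoolExecutor(workers) as ex:
        list(ex.map(work, blocks))   # force evaluation of all bands
    return C
```

Partitioning by output rows is the simplest decomposition; production libraries tile in both dimensions and for GPUs also manage host-device transfers, which dominate for small matrices.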

  20. Other-than-high-level waste

    International Nuclear Information System (INIS)

    Bray, G.R.

    1976-01-01

    The main emphasis of the work in the area of partitioning transuranic elements from waste has been in the area of high-level liquid waste. But there are ''other-than-high-level wastes'' generated by the back end of the nuclear fuel cycle that are both large in volume and contaminated with significant quantities of transuranic elements. The combined volume of these other wastes is approximately 50 times that of the solidified high-level waste. These other wastes also contain up to 75% of the transuranic elements associated with waste generated by the back end of the fuel cycle. Therefore, any detailed evaluation of partitioning as a viable waste management option must address both high-level wastes and ''other-than-high-level wastes.''

  1. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment. Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages.

  2. Research and development plans for disposal of high-level and transuranic wastes

    International Nuclear Information System (INIS)

    Bartlett, J.W.; Platt, A.M.

    1978-09-01

    This plan recommends a 20-year, $206 million (1975 dollars) R and D program on geologic structures in the contiguous U.S. and on the midplate Pacific seabed, with the objective of developing an acceptable method for disposal of commercial high-level and transuranic wastes by 1997. No differentiation between high-level and transuranic waste disposal is made in the first 5 years of the program. A unique application of probability theory to R and D planning establishes, at a 95% confidence level, that the program objective will be met if at least fifteen generic options and five specific disposal sites are explored in detail and at least two pilot plants are constructed and operated. A parallel effort on analysis and evaluation maximizes the information available for decisions on the acceptability of the disposal techniques. Based on considerations of technical feasibility, timing and technical risk, the other disposal concepts, e.g., ice sheets, partitioning, transmutation and space disposal cited in BNWL-1900, are not recommended for near-future R and D

  3. Managing the nation's commercial high-level radioactive waste

    International Nuclear Information System (INIS)

    1985-03-01

    This report presents the findings and conclusions of OTA's analysis of Federal policy for the management of commercial high-level radioactive waste. It represents a major update and expansion of the analysis presented to Congress in our summary report, Managing Commercial High-Level Radioactive Waste, published in April of 1982. This new report is intended to contribute to the implementation of the Nuclear Waste Policy Act (NWPA), and in particular to Congressional review of three major documents that DOE will submit to the 99th Congress: a Mission Plan for the waste management program; a monitored retrievable storage (MRS) proposal; and a report on mechanisms for financing and managing the waste program. The assessment was originally focused on the ocean disposal of nuclear waste. OTA later broadened the study to include all aspects of high-level waste disposal. The major findings of the original analysis were published in OTA's 1982 summary report

  4. SIGWX Charts - High Level Significant Weather

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — High level significant weather (SIGWX) forecasts are provided for the en-route portion of international flights. NOAA's National Weather Service Aviation Center...

  5. High-level radioactive waste repositories site selection plan

    International Nuclear Information System (INIS)

    Castanon, A.; Recreo, F.

    1985-01-01

    A general vision of the site selection processes for high level nuclear waste (HLNW) and/or spent nuclear fuel facilities is given, according to the guidelines of the main international nuclear safety regulatory organizations and the experience of those countries which have reached a greater development of their national nuclear programs. (author)

  6. High-level manpower movement and Japan's foreign aid.

    Science.gov (United States)

    Furuya, K

    1992-01-01

    "Japan's technical assistance programs to Asian countries are summarized. Movements of high-level manpower accompanying direct foreign investments by private enterprise are also reviewed. Proposals for increased human resources development include education and training of foreigners in Japan as well as the training of Japanese aid experts and the development of networks for information exchange." excerpt

  7. Recovering method for high level radioactive material

    International Nuclear Information System (INIS)

    Fukui, Toshiki

    1998-01-01

    Offgas filters, such as those of nuclear fuel reprocessing facilities and waste control facilities, are burnt, the burnt ash is melted by heating, and the molten ash is then brought into contact with a molten metal having a low boiling point to transfer the high level radioactive materials in the molten ash to the molten metal. Then, only the molten metal is evaporated off, and the residual high level radioactive materials are recovered. According to this method, the high level radioactive materials in the molten ash are transferred to the molten metal and separated by the difference in distribution ratio between the molten ash and the molten metal. Subsequently, the molten metal to which the high level radioactive materials have been transferred is heated to a temperature higher than its boiling point so that only the molten metal is evaporated and removed, and the residual high level radioactive materials are recovered easily. On the other hand, the molten ash from which the high level radioactive material has been removed can be discarded as ordinary industrial waste. (T.M.)

  8. High level trigger system for the ALICE experiment

    International Nuclear Information System (INIS)

    Frankenfeld, U.; Roehrich, D.; Ullaland, K.; Vestabo, A.; Helstrup, H.; Lien, J.; Lindenstruth, V.; Schulz, M.; Steinbeck, T.; Wiebalck, A.; Skaali, B.

    2001-01-01

    The ALICE experiment at the Large Hadron Collider (LHC) at CERN will detect up to 20,000 particles in a single Pb-Pb event resulting in a data rate of ∼75 MByte/event. The event rate is limited by the bandwidth of the data storage system. Higher rates are possible by selecting interesting events and subevents (High Level trigger) or compressing the data efficiently with modeling techniques. Both require a fast parallel pattern recognition. One possible solution to process the detector data at such rates is a farm of clustered SMP nodes, based on off-the-shelf PCs, and connected by a high bandwidth, low latency network

  9. FPGA Co-processor for the ALICE High Level Trigger

    CERN Document Server

    Grastveit, G.; Lindenstruth, V.; Loizides, C.; Roehrich, D.; Skaali, B.; Steinbeck, T.; Stock, R.; Tilsner, H.; Ullaland, K.; Vestbo, A.; Vik, T.

    2003-01-01

    The High Level Trigger (HLT) of the ALICE experiment requires massive parallel computing. One of the main tasks of the HLT system is two-dimensional cluster finding on raw data of the Time Projection Chamber (TPC), which is the main data source of ALICE. To reduce the number of computing nodes needed in the HLT farm, FPGAs, which are an intrinsic part of the system, will be utilized for this task. VHDL code implementing the Fast Cluster Finder algorithm has been written, a testbed for functional verification of the code has been developed, and the code has been synthesized
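The cluster-finding task itself can be illustrated, independently of the FPGA implementation, by a toy flood-fill cluster finder over sparse pad charges (4-connectivity; `find_clusters` is a hypothetical name, and the actual Fast Cluster Finder processes a sequential raw-data stream rather than a random-access map):

```python
def find_clusters(pads, threshold=0):
    # pads: {(row, col): charge}. Group above-threshold pads that touch
    # (4-connectivity) and return each cluster's total charge and
    # charge-weighted centroid.
    seen, clusters = set(), []
    for start in pads:
        if start in seen or pads[start] <= threshold:
            continue
        stack, members = [start], []
        seen.add(start)
        while stack:                          # flood fill one cluster
            r, c = stack.pop()
            members.append((r, c))
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in pads and nb not in seen and pads[nb] > threshold:
                    seen.add(nb)
                    stack.append(nb)
        q = sum(pads[m] for m in members)
        row = sum(r * pads[(r, c)] for r, c in members) / q
        col = sum(c * pads[(r, c)] for r, c in members) / q
        clusters.append((q, row, col))
    return clusters
```

The centroid computation is the physics-relevant output (a space point for tracking); the independence of separate clusters is what makes the task a natural fit for hardware pipelines and parallel farms.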

  10. Speeding Up the String Comparison of the IDS Snort using Parallel Programming: A Systematic Literature Review on the Parallelized Aho-Corasick Algorithm

    Directory of Open Access Journals (Sweden)

    SILVA JUNIOR,J. B.

    2016-12-01

    Full Text Available The Intrusion Detection System (IDS) needs to compare the contents of all packets arriving at the network interface with a set of signatures indicating possible attacks, a task that consumes much CPU processing time. In order to alleviate this problem, some researchers have tried to parallelize the IDS's comparison engine, transferring execution from the CPU to the GPU. This paper identifies and maps the parallelization features of the Aho-Corasick algorithm, which is used in Snort to compare patterns, in order to show this algorithm's implementation and execution issues, as well as optimization techniques for the Aho-Corasick machine. We found 147 papers in important computer science publication databases and mapped them; we then selected 22 and analyzed them in order to obtain our results. Our analysis showed, among other results, that parallelization of the AC algorithm is a recent undertaking and that authors have focused on the State Transition Table as the most common way to implement the algorithm on the GPU. Furthermore, we found that techniques which speed up the algorithm and reduce the required storage space are widely used, such as running the algorithm in the fastest memories and mechanisms for reducing the number of nodes and bit mapping.
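For reference, the Aho-Corasick machine that the surveyed papers parallelize can be built sequentially in a few lines: a goto trie plus breadth-first failure links. A minimal sketch (illustrative, not Snort's implementation):

```python
from collections import deque

def build_ac(patterns):
    # Aho-Corasick automaton: goto trie, BFS-computed failure links,
    # and output sets merged along failure links.
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    q = deque(goto[0].values())
    while q:                                  # breadth-first over the trie
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:    # follow failures for a suffix
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]            # inherit matches of the suffix
    return goto, fail, out

def search(text, automaton):
    # Single pass over the text; reports (start index, pattern) pairs.
    goto, fail, out = automaton
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits.extend((i - len(p) + 1, p) for p in out[s])
    return hits
```

The per-character state transition is exactly the step that GPU implementations encode as a State Transition Table lookup, which is why the table's memory layout dominates the surveyed optimization work.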

  11. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    Work in the field of parallel processing has developed from research activities using several numerical Monte Carlo simulations related to basic or applied current problems of nuclear and particle physics. For the applications utilizing the GEANT code, development or improvement work was done on parts simulating low-energy physical phenomena like radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking in the range of low energies up to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other works in this field refer to simulations for nuclear medicine applications such as, for instance, development of biological probes, evaluation and characterization of gamma cameras (collimators, crystal thickness), as well as methods for dosimetric calculations. In particular, these calculations are suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other works mentioned in the same field refer to simulation of electron channelling in crystals and simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment

  12. Comparative Study of Dynamic Programming and Pontryagin’s Minimum Principle on Energy Management for a Parallel Hybrid Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Huei Peng

    2013-04-01

    Full Text Available This paper compares two optimal energy management methods for parallel hybrid electric vehicles using an Automatic Manual Transmission (AMT. A control-oriented model of the powertrain and vehicle dynamics is built first. The energy management is formulated as a typical optimal control problem to trade off the fuel consumption and gear shifting frequency under admissible constraints. The Dynamic Programming (DP and Pontryagin’s Minimum Principle (PMP are applied to obtain the optimal solutions. Tuning with the appropriate co-states, the PMP solution is found to be very close to that from DP. The solution for the gear shifting in PMP has an algebraic expression associated with the vehicular velocity and can be implemented more efficiently in the control algorithm. The computation time of PMP is significantly less than DP.
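The DP side of such a comparison can be sketched on a toy problem: backward value iteration over a discretized battery state of charge (SOC), with engine power as the control. All names and the cost model below are illustrative, not the paper's:

```python
def dp_energy_split(demand, soc_levels, engine_levels, fuel_cost):
    # Backward dynamic programming over a discretized SOC grid.
    # demand: power request per time step; control u = engine power;
    # the battery supplies the difference d - u, which changes the SOC.
    V = {s: 0.0 for s in soc_levels}          # terminal cost = 0
    policy = []
    for d in reversed(demand):
        V_new, step_policy = {}, {}
        for s in soc_levels:
            best = None
            for u in engine_levels:
                s2 = s - (d - u)              # SOC after battery discharge d - u
                if s2 not in V:               # transition leaves the SOC grid
                    continue
                c = fuel_cost(u) + V[s2]
                if best is None or c < best[0]:
                    best = (c, u)
            if best is not None:
                V_new[s], step_policy[s] = best
        V, policy = V_new, [step_policy] + policy
    return V, policy
```

DP explores the full state-control grid (expensive but globally optimal); PMP replaces this sweep with a co-state condition, which is why the paper finds it far cheaper to compute once the co-states are tuned.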

  13. Design strategies for irregularly adapting parallel applications

    International Nuclear Information System (INIS)

    Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Singh, Jaswinder Pal

    2000-01-01

    Achieving scalable performance for dynamic irregular applications is eminently challenging. Traditional message-passing approaches have been making steady progress towards this goal; however, they suffer from complex implementation requirements. The use of a global address space greatly simplifies the programming task, but can degrade the performance of dynamically adapting computations. In this work, we examine two major classes of adaptive applications, under five competing programming methodologies and four leading parallel architectures. Results indicate that it is possible to achieve message-passing performance using shared-memory programming techniques by carefully following the same high level strategies. Adaptive applications have computational work loads and communication patterns which change unpredictably at runtime, requiring dynamic load balancing to achieve scalable performance on parallel machines. Efficient parallel implementations of such adaptive applications are therefore a challenging task. This work examines the implementation of two typical adaptive applications, Dynamic Remeshing and N-Body, across various programming paradigms and architectural platforms. We compare several critical factors of the parallel code development, including performance, programmability, scalability, algorithmic development, and portability

  14. Nondestructive examination of DOE high-level waste storage tanks

    International Nuclear Information System (INIS)

    Bush, S.; Bandyopadhyay, K.; Kassir, M.; Mather, B.; Shewmon, P.; Streicher, M.; Thompson, B.; van Rooyen, D.; Weeks, J.

    1995-01-01

    A number of DOE sites have buried tanks containing high-level waste. Tanks of particular interest are double-shell tanks inside concrete cylinders. A program has been developed for the inservice inspection of the primary tank containing high-level waste (HLW), for testing of transfer lines, and for the inspection of the concrete containment where possible. Emphasis is placed on the ultrasonic examination of selected areas of the primary tank, coupled with a leak-detection system capable of detecting small leaks through the wall of the primary tank. The NDE program is modelled after ASME Section XI in many respects, particularly with respect to the sampling protocol. Selected testing of concrete is planned to determine if there has been any significant degradation. The most probable failure mechanisms are corrosion-related, so the examination program gives major emphasis to possible locations for corrosion attack

  15. EAP high-level product architecture

    DEFF Research Database (Denmark)

    Guðlaugsson, Tómas Vignir; Mortensen, Niels Henrik; Sarban, Rahimullah

    2013-01-01

    EAP technology has the potential to be used in a wide range of applications. This poses the challenge to the EAP component manufacturers to develop components for a wide variety of products. Danfoss Polypower A/S is developing an EAP technology platform, which can form the basis for a variety… of EAP technology products while keeping complexity under control. High level product architecture has been developed for the mechanical part of EAP transducers, as the foundation for platform development. A generic description of an EAP transducer forms the core of the high level product architecture… the function of the EAP transducers to be changed, by basing the EAP transducers on a different combination of organ alternatives. A model providing an overview of the high level product architecture has been developed to support daily development and cooperation across development teams. The platform approach…

  16. High-Level Application Framework for LCLS

    Energy Technology Data Exchange (ETDEWEB)

    Chu, P; Chevtsov, S.; Fairley, D.; Larrieu, C.; Rock, J.; Rogind, D.; White, G.; Zalazny, M.; /SLAC

    2008-04-22

    A framework for high level accelerator application software is being developed for the Linac Coherent Light Source (LCLS). The framework is based on plug-in technology developed by an open source project, Eclipse. Many existing functionalities provided by Eclipse are available to high-level applications written within this framework. The framework also contains static data storage configuration and dynamic data connectivity. Because the framework is Eclipse-based, it is highly compatible with any other Eclipse plug-ins. The entire infrastructure of the software framework will be presented. Planned applications and plug-ins based on the framework are also presented.

  17. Getting To Exascale: Applying Novel Parallel Programming Models To Lab Applications For The Next Generation Of Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Dube, Evi [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shereda, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nau, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Harris, Lance [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2010-09-27

    As supercomputing moves toward exascale, node architectures will change significantly. CPU core counts on nodes will increase by an order of magnitude or more. Heterogeneous architectures will become more commonplace, with GPUs or FPGAs providing additional computational power. Novel programming models may make better use of on-node parallelism in these new architectures than do current models. In this paper we examine several of these novel models – UPC, CUDA, and OpenCL – to determine their suitability to LLNL scientific application codes. Our study consisted of several phases: we conducted interviews with code teams and selected two codes to port; we learned how to program in the new models and ported the codes; we debugged and tuned the ported applications; we measured results and documented our findings. We conclude that UPC is a challenge for porting code, that Berkeley UPC is not very robust, and that UPC is not suitable as a general alternative to OpenMP for a number of reasons. CUDA is well supported and robust but is a proprietary NVIDIA standard, while OpenCL is an open standard. Both are well suited to a specific set of application problems that can be run on GPUs, but some problems are not suited to GPUs. Further study of the landscape of novel models is recommended.

  18. Introducing PROFESS 2.0: A parallelized, fully linear scaling program for orbital-free density functional theory calculations

    Science.gov (United States)

    Hung, Linda; Huang, Chen; Shin, Ilgyou; Ho, Gregory S.; Lignères, Vincent L.; Carter, Emily A.

    2010-12-01

    Orbital-free density functional theory (OFDFT) is a first principles quantum mechanics method to find the ground-state energy of a system by variationally minimizing with respect to the electron density. No orbitals are used in the evaluation of the kinetic energy (unlike Kohn-Sham DFT), and the method scales nearly linearly with the size of the system. The PRinceton Orbital-Free Electronic Structure Software (PROFESS) uses OFDFT to model materials from the atomic scale to the mesoscale. This new version of PROFESS allows the study of larger systems with two significant changes: PROFESS is now parallelized, and the ion-electron and ion-ion terms scale quasilinearly, instead of quadratically as in PROFESS v1 (L. Hung and E.A. Carter, Chem. Phys. Lett. 475 (2009) 163). At the start of a run, PROFESS reads the various input files that describe the geometry of the system (ion positions and cell dimensions), the type of elements (defined by electron-ion pseudopotentials), the actions you want it to perform (minimize with respect to electron density and/or ion positions and/or cell lattice vectors), and the various options for the computation (such as which functionals you want it to use). Based on these inputs, PROFESS sets up a computation and performs the appropriate optimizations. Energies, forces, stresses, material geometries, and electron density configurations are some of the values that can be output throughout the optimization.
    New version program summary
    Program Title: PROFESS
    Catalogue identifier: AEBN_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBN_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 68 721
    No. of bytes in distributed program, including test data, etc.: 1 708 547
    Distribution format: tar.gz
    Programming language: Fortran 90
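The variational idea behind OFDFT (minimizing an explicit energy functional of the density under a fixed electron number, with no orbitals anywhere) can be sketched with a toy 1D Thomas-Fermi-like functional and projected gradient descent. This is illustrative only; PROFESS uses far more sophisticated kinetic-energy functionals and optimizers, and `ofdft_minimize` is a hypothetical name:

```python
import numpy as np

def ofdft_minimize(v, n_elec, dx, steps=2000, lr=1e-3):
    # Minimize E[rho] = c * sum(rho^(5/3)) * dx + sum(v * rho) * dx over
    # densities rho >= 0 with fixed particle number, by gradient descent
    # projected onto the normalization constraint.
    c = 2.871                                  # Thomas-Fermi constant (a.u.)
    rho = np.full_like(v, n_elec / (len(v) * dx))   # uniform starting density
    for _ in range(steps):
        grad = (5.0 / 3.0) * c * rho ** (2.0 / 3.0) + v
        grad -= grad.mean()                    # project out the uniform mode
        rho = np.maximum(rho - lr * grad, 1e-12)
        rho *= n_elec / (rho.sum() * dx)       # re-impose the electron count
    return rho
```

Every step costs only array operations over the grid, which is the structural reason OFDFT scales nearly linearly with system size, in contrast to orbital-based Kohn-Sham DFT.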

  19. Center for Programming Models for Scalable Parallel Computing - Towards Enhancing OpenMP for Manycore and Heterogeneous Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Barbara Chapman

    2012-02-01

    OpenMP was not well recognized at the beginning of the project, around year 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years, it has been gradually adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standard organization to make sure OpenMP evolves in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future manycore) platforms and for distributed memory systems by exploring different programming models, language extensions, compiler optimizations, as well as runtime library support.

  20. The management of high-level radioactive wastes

    International Nuclear Information System (INIS)

    Lennemann, Wm.L.

    1979-01-01

    The definition of high-level radioactive wastes is given. The following aspects of high-level radioactive wastes' management are discussed: fuel reprocessing and high-level waste; storage of high-level liquid waste; solidification of high-level waste; interim storage of solidified high-level waste; disposal of high-level waste; disposal of irradiated fuel elements as a waste

  1. High-level radioactive wastes. Supplement 1

    International Nuclear Information System (INIS)

    McLaren, L.H.

    1984-09-01

    This bibliography contains information on high-level radioactive wastes included in the Department of Energy's Energy Data Base from August 1982 through December 1983. These citations are to research reports, journal articles, books, patents, theses, and conference papers from worldwide sources. Five indexes, each preceded by a brief description, are provided: Corporate Author, Personal Author, Subject, Contract Number, and Report Number. 1452 citations

  2. PAIRWISE BLENDING OF HIGH LEVEL WASTE

    International Nuclear Information System (INIS)

    CERTA, P.J.

    2006-01-01

    The primary objective of this study is to demonstrate a mission scenario that uses pairwise and incidental blending of high level waste (HLW) to reduce the total mass of HLW glass. Secondary objectives include understanding how recent refinements to the tank waste inventory and solubility assumptions affect the mass of HLW glass and how logistical constraints may affect the efficacy of HLW blending

  3. Materials for high-level waste containment

    International Nuclear Information System (INIS)

    Marsh, G.P.

    1982-01-01

    The function of the high-level radioactive waste container in storage and of a container/overpack combination in disposal is considered. The consequent properties required from potential fabrication materials are discussed. The strategy adopted in selecting containment materials and the experimental programme underway to evaluate them are described. (U.K.)

  4. Current high-level waste solidification technology

    International Nuclear Information System (INIS)

    Bonner, W.F.; Ross, W.A.

    1976-01-01

    Technology has been developed in the U.S. and abroad for solidification of high-level waste from nuclear power production. Several processes have been demonstrated with actual radioactive waste and are now being prepared for use in the commercial nuclear industry. Conversion of the waste to a glass form is favored because of its high degree of nondispersibility and safety

  5. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  6. Evaluation and selection of candidate high-level waste forms

    International Nuclear Information System (INIS)

    1982-03-01

    Seven candidate waste forms being developed under the direction of the Department of Energy's National High-Level Waste (HLW) Technology Program were evaluated as potential media for the immobilization and geologic disposal of high-level nuclear wastes. The evaluation combined preliminary waste form evaluations conducted at DOE defense waste-sites and independent laboratories, peer review assessments, a product performance evaluation, and a processability analysis. Based on the combined results of these four inputs, two of the seven forms, borosilicate glass and a titanate-based ceramic, SYNROC, were selected as the reference and alternative forms for continued development and evaluation in the National HLW Program. Both the glass and ceramic forms are viable candidates for use at each of the DOE defense waste-sites; they are also potential candidates for immobilization of commercial reprocessing wastes. This report describes the waste form screening process and discusses each of the four major inputs considered in the selection of the two forms

  7. Multipurpose optimization models for high level waste vitrification

    International Nuclear Information System (INIS)

    Hoza, M.

    1994-08-01

    Optimal Waste Loading (OWL) models have been developed as multipurpose tools for high-level waste studies for the Tank Waste Remediation Program at Hanford. Using nonlinear programming techniques, these models maximize the waste loading of the vitrified waste and optimize the glass-former composition such that the glass produced has the appropriate properties within the melter, and the resultant vitrified waste form meets the requirements for disposal. The OWL models can be used for a single waste stream or for blended streams, and can determine optimal continuous blends or optimal discrete blends of a number of different wastes. The OWL models have been used to identify the most restrictive constraints, to evaluate prospective waste pretreatment methods, to formulate and evaluate blending strategies, and to determine the impacts of variability in the wastes. The OWL models will be used to aid in the design of frits and to maximize the waste loading of the glass for High-Level Waste (HLW) vitrification
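A drastically simplified version of the waste-loading idea: if each glass property blended linearly between waste and frit, the maximum waste fraction would be set by the most restrictive property limit. The real OWL models are nonlinear programs over many components; all names and the linear-blend assumption here are hypothetical:

```python
def max_waste_loading(waste_props, frit_props, limits):
    # Each property is modeled as a linear blend p = w * waste + (1 - w) * frit.
    # Return the largest waste fraction w in [0, 1] keeping every property
    # at or below its upper limit.
    w_max = 1.0
    for name, limit in limits.items():
        a, b = waste_props[name], frit_props[name]
        if a > b:                     # property worsens as w grows
            if b > limit:
                return 0.0            # even pure frit violates this limit
            w_max = min(w_max, (limit - b) / (a - b))
    return max(w_max, 0.0)
```

Taking the minimum over all constraints mirrors the report's finding that a single "most restrictive constraint" typically pins the achievable loading, which is exactly what the OWL models are used to identify.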

  8. Research strategy and programs about the management of high-level and long-lived radioactive wastes (by right of the article L542 of the environment law and belonging to the December 30, 1991 law)

    International Nuclear Information System (INIS)

    2002-01-01

    This document has been prepared by the French public organizations in charge of research on the management of radioactive wastes in the framework of the law of December 30, 1991. It was presented to the national commission of evaluation on March 6, 2002. It comprises 6 chapters dealing with: 1 - the methodology, structuration and implementation of researches: main goals; products of the back-end of the fuel cycle and evaluation of fluxes; technical structuration of programs; researches consistency, complementarity and priority; criteria of appreciation of researches relevance; 2 - the main results after 10 years of researches in the framework of the 1991 law: abatement of wastes noxiousness; wastes conditioning; long-term storage; studies on geological disposal; 3 - the main steps towards 2006: separation and transmutation; underground disposal; conditioning and long-term behaviour; 4 - presentation and analysis of research programs: separation-transmutation; feasibility of a deep geologic disposal (clay, granite); conditioning and storage (containers, storage and long-term behaviour); 5 - coordination: authorities, share of data, research programs; 6 - international collaborations; appendixes. (J.S.)

  9. Timing of High-level Waste Disposal

    International Nuclear Information System (INIS)

    2008-01-01

    This study identifies key factors influencing the timing of high-level waste (HLW) disposal and examines how social acceptability, technical soundness, environmental responsibility and economic feasibility impact on national strategies for HLW management and disposal. Based on case study analyses, it also presents the strategic approaches adopted in a number of national policies to address public concerns and civil society requirements regarding long-term stewardship of high-level radioactive waste. The findings and conclusions of the study confirm the importance of informing all stakeholders and involving them in the decision-making process in order to implement HLW disposal strategies successfully. This study will be of considerable interest to nuclear energy policy makers and analysts as well as to experts in the area of radioactive waste management and disposal. (author)

  10. Disposal of high-level radioactive waste

    International Nuclear Information System (INIS)

    Glasby, G.P.

    1977-01-01

    Although controversy surrounding the possible introduction of nuclear power into New Zealand has raised many points including radiation hazards, reactor safety, capital costs, sources of uranium and earthquake risks on the one hand versus energy conservation and alternative sources of energy on the other, one problem remains paramount and is of global significance - the storage and dumping of the high-level radioactive wastes of the reactor core. The generation of abundant supplies of energy now in return for the storage of these long-lived highly radioactive wastes has been dubbed the so-called Faustian bargain. This article discusses the growth of the nuclear industry and its implications to high-level waste disposal particularly in the deep-sea bed. (auth.)

  11. High-Level Waste Melter Study Report

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Joseph M.; Bickford, Dennis F.; Day, Delbert E.; Kim, Dong-Sang; Lambert, Steven L.; Marra, Sharon L.; Peeler, David K.; Strachan, Denis M.; Triplett, Mark B.; Vienna, John D.; Wittman, Richard S.

    2001-07-13

    At the Hanford Site in Richland, Washington, the path to site cleanup involves vitrification of the majority of the wastes that currently reside in large underground tanks. A Joule-heated glass melter is the equipment of choice for vitrifying the high-level fraction of these wastes. Even though this technology has general national and international acceptance, opportunities may exist to improve or change the technology to reduce the enormous cost of accomplishing the mission of site cleanup. Consequently, the U.S. Department of Energy requested the staff of the Tanks Focus Area to review immobilization technologies, waste forms, and modifications to requirements for solidification of the high-level waste fraction at Hanford to determine what aspects could affect cost reductions with reasonable long-term risk. The results of this study are summarized in this report.

  12. High-level radioactive wastes. Supplement 1

    Energy Technology Data Exchange (ETDEWEB)

    McLaren, L.H. (ed.)

    1984-09-01

    This bibliography contains information on high-level radioactive wastes included in the Department of Energy's Energy Data Base from August 1982 through December 1983. These citations are to research reports, journal articles, books, patents, theses, and conference papers from worldwide sources. Five indexes, each preceded by a brief description, are provided: Corporate Author, Personal Author, Subject, Contract Number, and Report Number. 1452 citations.

  13. High-level waste processing and disposal

    International Nuclear Information System (INIS)

    Crandall, J.L.; Krause, H.; Sombret, C.; Uematsu, K.

    1984-01-01

    The national high-level waste disposal plans for France, the Federal Republic of Germany, Japan, and the United States are covered. Three conclusions are reached. The first conclusion is that an excellent technology already exists for high-level waste disposal. With appropriate packaging, spent fuel seems to be an acceptable waste form. Borosilicate glass reprocessing waste forms are well understood, in production in France, and scheduled for production in the next few years in a number of other countries. For final disposal, a number of candidate geological repository sites have been identified and several demonstration sites opened. The second conclusion is that adequate financing and a legal basis for waste disposal are in place in most countries. Costs of high-level waste disposal will probably add about 5 to 10% to the costs of nuclear electric power. The third conclusion is less optimistic. Political problems remain formidable in highly conservative regulations, in qualifying a final disposal site, and in securing acceptable transport routes

  14. Handbook of high-level radioactive waste transportation

    International Nuclear Information System (INIS)

    Sattler, L.R.

    1992-10-01

    The High-Level Radioactive Waste Transportation Handbook serves as a reference to which state officials and members of the general public may turn for information on radioactive waste transportation and on the federal government's system for transporting this waste under the Civilian Radioactive Waste Management Program. The Handbook condenses and updates information contained in the Midwestern High-Level Radioactive Waste Transportation Primer. It is intended primarily to assist legislators who, in the future, may be called upon to enact legislation pertaining to the transportation of radioactive waste through their jurisdictions. The Handbook is divided into two sections. The first section places the federal government's program for transporting radioactive waste in context. It provides background information on nuclear waste production in the United States and traces the emergence of federal policy for disposing of radioactive waste. The second section covers the history of radioactive waste transportation; summarizes major pieces of legislation pertaining to the transportation of radioactive waste; and provides an overview of the radioactive waste transportation program developed by the US Department of Energy (DOE). To supplement this information, a summary of pertinent federal and state legislation and a glossary of terms are included as appendices, as is a list of publications produced by the Midwestern Office of The Council of State Governments (CSG-MW) as part of the Midwestern High-Level Radioactive Waste Transportation Project

  15. Method of parallel processing in SANPO real time system

    International Nuclear Information System (INIS)

    Ostrovnoj, A.I.; Salamatin, I.M.

    1981-01-01

    A method of parallel processing in the SANPO real time system is described. Algorithms for data accumulation and preliminary processing in this system, implemented as parallel processes using a specialized high-level programming language, are described. A hierarchy of elementary processes is also described; it provides synchronization of concurrent processes without semaphores. The developed means are applied to experiment automation systems based on SM-3 minicomputers [ru]
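The abstract's key claim — synchronizing concurrent accumulation and preprocessing stages without semaphores — can be illustrated with message passing: each stage blocks on its input channel, so ordering is enforced by the data flow itself. This is a minimal sketch in Python, not the SANPO language; the stage names and the doubling "preprocessing" step are invented for illustration.

```python
# Producer/consumer stages synchronized purely by a blocking queue:
# no explicit semaphores, the data hand-off itself is the synchronization.
import queue
import threading

raw = queue.Queue()
processed = []

def accumulate(samples):
    # producer: push measured samples, then a sentinel to signal the end
    for s in samples:
        raw.put(s)
    raw.put(None)

def preprocess():
    # consumer: raw.get() blocks until data arrives (implicit synchronization)
    while True:
        s = raw.get()
        if s is None:
            break
        processed.append(s * 2)  # stand-in for real preliminary processing

t1 = threading.Thread(target=accumulate, args=([1, 2, 3],))
t2 = threading.Thread(target=preprocess)
t2.start()
t1.start()
t1.join()
t2.join()
# processed -> [2, 4, 6]
```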

  16. High level language for measurement complex control based on the computer E-100I

    Science.gov (United States)

    Zubkov, B. V.

    1980-01-01

    A high-level language was designed to control the process of conducting an experiment using the "Elektronika-1001" computer. Program examples are given for controlling the measuring and actuating devices. The procedure for including these programs in the suggested high-level language is described.

  17. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of research is the tools to support the development of parallel programs in C/C++. Methods and software which automate the process of designing parallel applications are proposed.

  18. Cermets for high level waste containment

    International Nuclear Information System (INIS)

    Aaron, W.S.; Quinby, T.C.; Kobisk, E.H.

    1978-01-01

    Cermet materials are currently under investigation as an alternate for the primary containment of high level wastes. The cermet in this study is an iron--nickel base metal matrix containing uniformly dispersed, micron-size fission product oxides, aluminosilicates, and titanates. Cermets possess high thermal conductivity, and typical waste loading of 70 wt % with volume reduction factors of 2 to 200 and low processing volatility losses have been realized. Preliminary leach studies indicate a leach resistance comparable to other candidate waste forms; however, more quantitative data are required. Actual waste studies have begun on NFS Acid Thorex, SRP dried sludge and fresh, unneutralized SRP process wastes

  19. The CMS High-Level Trigger

    International Nuclear Information System (INIS)

    Covarelli, R.

    2009-01-01

    At the startup of the LHC, the CMS data acquisition is expected to be able to sustain an event readout rate of up to 100 kHz from the Level-1 trigger. These events will be read into a large processor farm which will run the 'High-Level Trigger' (HLT) selection algorithms and will output a rate of about 150 Hz for permanent data storage. In this report HLT performances are shown for selections based on muons, electrons, photons, jets, missing transverse energy, τ leptons and b quarks: expected efficiencies, background rates and CPU time consumption are reported as well as relaxation criteria foreseen for a LHC startup instantaneous luminosity.

  20. The CMS High-Level Trigger

    CERN Document Server

    Covarelli, Roberto

    2009-01-01

    At the startup of the LHC, the CMS data acquisition is expected to be able to sustain an event readout rate of up to 100 kHz from the Level-1 trigger. These events will be read into a large processor farm which will run the "High-Level Trigger" (HLT) selection algorithms and will output a rate of about 150 Hz for permanent data storage. In this report HLT performances are shown for selections based on muons, electrons, photons, jets, missing transverse energy, tau leptons and b quarks: expected efficiencies, background rates and CPU time consumption are reported as well as relaxation criteria foreseen for a LHC startup instantaneous luminosity.

  1. The CMS High-Level Trigger

    Science.gov (United States)

    Covarelli, R.

    2009-12-01

    At the startup of the LHC, the CMS data acquisition is expected to be able to sustain an event readout rate of up to 100 kHz from the Level-1 trigger. These events will be read into a large processor farm which will run the "High-Level Trigger" (HLT) selection algorithms and will output a rate of about 150 Hz for permanent data storage. In this report HLT performances are shown for selections based on muons, electrons, photons, jets, missing transverse energy, τ leptons and b quarks: expected efficiencies, background rates and CPU time consumption are reported as well as relaxation criteria foreseen for a LHC startup instantaneous luminosity.

  2. Service Oriented Architecture for High Level Applications

    International Nuclear Information System (INIS)

    Chu, P.

    2012-01-01

    Standalone high level applications often suffer from poor performance and reliability due to lengthy initialization, heavy computation and rapid graphical update. Service-oriented architecture (SOA) is trying to separate the initialization and computation from applications and to distribute such work to various service providers. Heavy computation such as beam tracking will be done periodically on a dedicated server and data will be available to client applications at all times. Industrial standard service architecture can help to improve the performance, reliability and maintainability of the service. Robustness will also be improved by reducing the complexity of individual client applications.

  3. HIGH-LEVEL CONTROL SYSTEM IN C#

    International Nuclear Information System (INIS)

    Nishimura, Hiroshi; Timossi, Chris; Portmann, Greg; Urashka, Michael; Ikami, Craig; Beaudrow, M.

    2008-01-01

    We have started upgrading the control room programs for the injector at the Advanced Light Source (ALS). We chose to program in C# exclusively on the .NET Framework to create EPICS client programs on Windows Vista PCs. This paper reports the status of this upgrade project

  4. ParaHaplo 3.0: A program package for imputation and a haplotype-based whole-genome association study using hybrid parallel computing

    Directory of Open Access Journals (Sweden)

    Kamatani Naoyuki

    2011-05-01

    Full Text Available Abstract Background Use of missing-genotype imputation and haplotype reconstruction is valuable in genome-wide association studies (GWASs). By modeling the patterns of linkage disequilibrium in a reference panel, genotypes not directly measured in the study samples can be imputed and used for GWASs. Since millions of single nucleotide polymorphisms need to be imputed in a GWAS, faster methods for genotype imputation and haplotype reconstruction are required. Results We developed a program package for parallel computation of genotype imputation and haplotype reconstruction. Our program package, ParaHaplo 3.0, is intended for use on workstation clusters using the Intel Message Passing Interface. We compared the performance of ParaHaplo 3.0 on the Japanese in Tokyo, Japan and Han Chinese in Beijing, China samples in the HapMap dataset. A parallel version of ParaHaplo 3.0 can conduct genotype imputation 20 times faster than a non-parallel version of ParaHaplo. Conclusions ParaHaplo 3.0 is an invaluable tool for conducting haplotype-based GWASs. The need for faster genotype imputation and haplotype reconstruction using parallel computing will become increasingly important as the data sizes of such projects continue to increase. ParaHaplo executable binaries and program sources are available at http://en.sourceforge.jp/projects/parallelgwas/releases/.
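The speedup described above comes from the fact that markers can be imputed independently, so they can be partitioned across workers. The sketch below illustrates that data-parallel structure in plain Python, not ParaHaplo's MPI implementation: the tiny "reference panel", the marker names, and the fill-with-most-frequent-allele rule are all invented for illustration (real imputation models linkage disequilibrium across markers).

```python
# Data-parallel imputation sketch: each marker column is imputed
# independently, so columns are partitioned across worker threads.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

reference = {  # hypothetical reference panel: per-marker allele lists
    "rs1": [0, 0, 1, 0],
    "rs2": [1, 1, 1, 0],
}
study = {"rs1": [0, None, 1], "rs2": [None, 0, None]}  # None = missing

def impute_marker(marker):
    # toy rule: fill missing genotypes with the panel's most frequent allele
    common = Counter(reference[marker]).most_common(1)[0][0]
    filled = [g if g is not None else common for g in study[marker]]
    return marker, filled

with ThreadPoolExecutor(max_workers=2) as pool:
    imputed = dict(pool.map(impute_marker, study))
# imputed["rs1"] -> [0, 0, 1]; imputed["rs2"] -> [1, 0, 1]
```

Because each `impute_marker` call touches only its own column, the same partitioning scales to processes or MPI ranks, which is where the reported 20x speedup comes from.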

  5. Evaluation of conditioned high-level waste forms

    International Nuclear Information System (INIS)

    Mendel, J.E.; Turcotte, R.P.; Chikalla, T.D.; Hench, L.L.

    1983-01-01

    The evaluation of conditioned high-level waste forms requires an understanding of radiation and thermal effects, mechanical properties, volatility, and chemical durability. As a result of nuclear waste research and development programs in many countries, a good understanding of these factors is available for borosilicate glass containing high-level waste. The IAEA through its coordinated research program has contributed to this understanding. Methods used in the evaluation of conditioned high-level waste forms are reviewed. In the US, this evaluation has been facilitated by the definition of standard test methods by the Materials Characterization Center (MCC), which was established by the Department of Energy (DOE) in 1979. The DOE has also established a 20-member Materials Review Board to peer-review the activities of the MCC. In addition to comparing waste forms, testing must be done to evaluate the behavior of waste forms in geologic repositories. Such testing is complex; accelerated tests are required to predict expected behavior for thousands of years. The tests must be multicomponent tests to ensure that all potential interactions between waste form, canister/overpack and corrosion products, backfill, intruding ground water and the repository rock, are accounted for. An overview of the status of such multicomponent testing is presented

  6. The ALICE Dimuon Spectrometer High Level Trigger

    CERN Document Server

    Becker, B; Cicalo, Corrado; Das, Indranil; de Vaux, Gareth; Fearick, Roger; Lindenstruth, Volker; Marras, Davide; Sanyal, Abhijit; Siddhanta, Sabyasachi; Staley, Florent; Steinbeck, Timm; Szostak, Artur; Usai, Gianluca; Vilakazi, Zeblon

    2009-01-01

    The ALICE Dimuon Spectrometer High Level Trigger (dHLT) is an on-line processing stage whose primary function is to select interesting events that contain distinct physics signals from heavy resonance decays such as J/psi and Gamma particles, amidst unwanted background events. It forms part of the High Level Trigger of the ALICE experiment, whose goal is to reduce the large data rate of about 25 GB/s from the ALICE detectors by an order of magnitude, without losing interesting physics events. The dHLT has been implemented as a software trigger within a high performance and fault tolerant data transportation framework, which is run on a large cluster of commodity compute nodes. To reach the required processing speeds, the system is built as a concurrent system with a hierarchy of processing steps. The main algorithms perform partial event reconstruction, starting with hit reconstruction on the level of the raw data received from the spectrometer. Then a tracking algorithm finds track candidates from the recon...

  7. Technetium Chemistry in High-Level Waste

    International Nuclear Information System (INIS)

    Hess, Nancy J.

    2006-01-01

    Tc contamination is found within the DOE complex at those sites whose mission involved extraction of plutonium from irradiated uranium fuel or isotopic enrichment of uranium. At the Hanford Site, chemical separations and extraction processes generated large amounts of high level and transuranic wastes that are currently stored in underground tanks. The waste from these extraction processes is currently stored in underground High Level Waste (HLW) tanks. However, the chemistry of the HLW in any given tank is greatly complicated by repeated efforts to reduce volume and recover isotopes. These processes ultimately resulted in mixing of waste streams from different processes. As a result, the chemistry and the fate of Tc in HLW tanks are not well understood. This lack of understanding has been made evident in the failed efforts to leach Tc from sludge and to remove Tc from supernatants prior to immobilization. Although recent interest in Tc chemistry has shifted from pretreatment chemistry to waste residuals, both needs are served by a fundamental understanding of Tc chemistry

  8. Processing vessel for high level radioactive wastes

    International Nuclear Information System (INIS)

    Maekawa, Hiromichi

    1998-01-01

    Upon transferring an overpack holding canisters with high-level radioactive wastes sealed inside and burying it in an underground processing hole, an outer shell vessel comprising a steel plate, fitted and contained in the processing hole, is formed. A backfill layer made of the dug earth and sand discharged when the processing hole was formed is placed on the inner circumferential wall of the outer shell vessel. A buffer layer of a predetermined thickness is formed on the inner side of the backfill layer, and the overpack is contained in the hollow portion surrounded by that layer. The open upper portion of the hollow portion is covered with the buffer layer and the backfill layer. Since the processing vessel, which provides shielding, is formed on the ground beforehand, the state of packing can be observed. In addition, since an operator can work directly during transportation and burial of the high-level radioactive wastes, remote control is no longer necessary. (T.M.)

  9. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  10. Flexibility and Performance of Parallel File Systems

    Science.gov (United States)

    Kotz, David; Nieuwejaar, Nils

    1996-01-01

    As we gain experience with parallel file systems, it becomes increasingly clear that a single solution does not suit all applications. For example, it appears to be impossible to find a single appropriate interface, caching policy, file structure, or disk-management strategy. Furthermore, the proliferation of file-system interfaces and abstractions makes applications difficult to port. We propose that the traditional functionality of parallel file systems be separated into two components: a fixed core that is standard on all platforms, encapsulating only primitive abstractions and interfaces, and a set of high-level libraries to provide a variety of abstractions and application-programmer interfaces (API's). We present our current and next-generation file systems as examples of this structure. Their features, such as a three-dimensional file structure, strided read and write interfaces, and I/O-node programs, are specifically designed with the flexibility and performance necessary to support a wide range of applications.
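One of the high-level library features mentioned above, a strided read interface, can be sketched in a few lines. This is an illustrative stand-in, not the paper's actual API: the function name, parameters, and the in-memory demo file are all invented here.

```python
# Minimal strided-read sketch: read `count` fixed-size records starting
# at `offset`, with `stride` bytes between record starts. A parallel file
# system would turn this access pattern into one batched request instead
# of many small seeks.
import io
import struct

def strided_read(f, offset, record_size, stride, count):
    out = []
    for i in range(count):
        f.seek(offset + i * stride)
        out.append(f.read(record_size))
    return out

# demo on an in-memory "file" of 4-byte little-endian ints 0..9
buf = io.BytesIO(b"".join(struct.pack("<i", n) for n in range(10)))
recs = strided_read(buf, offset=0, record_size=4, stride=8, count=5)
values = [struct.unpack("<i", r)[0] for r in recs]
# values -> [0, 2, 4, 6, 8]
```

Expressing the stride declaratively, rather than as a loop of seeks, is what lets the file system optimize the disk accesses.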

  11. QSPIN: A High Level Java API for Quantum Computing Experimentation

    Science.gov (United States)

    Barth, Tim

    2017-01-01

    QSPIN is a high level Java language API for experimentation in QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical computing optimization methods such as GPU accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling are provided to demonstrate current capabilities.
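The QUBO problems QSPIN targets have a simple mathematical core: minimize x^T Q x over binary vectors x. The sketch below shows a brute-force ground-state search in Python rather than the QSPIN Java API (whose exact interface is not given here); the example matrix Q is invented, and exhaustive search is only feasible for small n, which is why annealers and heuristics like tabu search are used in practice.

```python
# Brute-force QUBO ground-state search: minimize x^T Q x over x in {0,1}^n.
from itertools import product

def qubo_ground_state(Q):
    n = len(Q)
    best_x, best_e = None, float("inf")
    for bits in product((0, 1), repeat=n):
        e = sum(Q[i][j] * bits[i] * bits[j]
                for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e

# toy Q: diagonal terms reward setting a variable, the off-diagonal
# term penalizes setting both (a "pick at most one" style penalty)
Q = [[-1, 2],
     [0, -1]]
state, energy = qubo_ground_state(Q)
# ground state sets exactly one variable, with energy -1
```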

  12. Leveraging Parallel Data Processing Frameworks with Verified Lifting

    Directory of Open Access Journals (Sweden)

    Maaz Bin Safeer Ahmad

    2016-11-01

    Full Text Available Many parallel data frameworks have been proposed in recent years that let sequential programs access parallel processing. To capitalize on the benefits of such frameworks, existing code must often be rewritten in the domain-specific languages that each framework supports. This rewriting, which is tedious and error-prone, also requires developers to choose the framework that best optimizes performance for a given workload. This paper describes Casper, a novel compiler that automatically retargets sequential Java code for execution on Hadoop, a parallel data processing framework that implements the MapReduce paradigm. Given a sequential code fragment, Casper uses verified lifting to infer a high-level summary, expressed in our program specification language, that is then compiled for execution on Hadoop. We demonstrate that Casper automatically translates Java benchmarks into Hadoop. The translated results execute on average 3.3x faster than the sequential implementations and scale better to larger datasets.
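The kind of rewrite Casper automates can be shown on a toy example. The sketch below uses plain Python rather than Java/Hadoop: a sequential aggregation loop next to the equivalent map/reduce formulation, where the associative merge could run in parallel across partitions. The word-count task and function names are invented for illustration.

```python
# Sequential loop vs. its "lifted" map/reduce equivalent (word count).
from functools import reduce

words = ["a", "b", "a", "c", "a", "b"]

# sequential version: one loop, one accumulator
seq_counts = {}
for w in words:
    seq_counts[w] = seq_counts.get(w, 0) + 1

# map/reduce version: per-item partial results, then an associative
# merge; because merge is associative, Hadoop can apply it in parallel
# across partitions of the input
def merge(c1, c2):
    out = dict(c1)
    for k, v in c2.items():
        out[k] = out.get(k, 0) + v
    return out

mr_counts = reduce(merge, map(lambda w: {w: 1}, words), {})
# mr_counts == seq_counts: the lifted version preserves semantics
```

Verified lifting is the step that proves the two formulations compute the same result before emitting the parallel one.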

  13. National high-level waste systems analysis

    International Nuclear Information System (INIS)

    Kristofferson, K.; O'Holleran, T.P.

    1996-01-01

    Previously, no mechanism existed that provided a systematic, interrelated view or national perspective of all high-level waste treatment and storage systems that the US Department of Energy manages. The impacts of budgetary constraints and repository availability on storage and treatment must be assessed against existing and pending negotiated milestones for their impact on the overall HLW system. This assessment can give DOE a complex-wide view of the availability of waste treatment and help project the time required to prepare HLW for disposal. Facilities, throughputs, schedules, and milestones were modeled to ascertain the treatment and storage systems resource requirements at the Hanford Site, Savannah River Site, Idaho National Engineering Laboratory, and West Valley Demonstration Project. The impacts of various treatment system availabilities on schedule and throughput were compared to repository readiness to determine the prudent application of resources. To assess the various impacts, the model was exercised against a number of plausible scenarios as discussed in this paper

  14. International high-level radioactive waste repositories

    International Nuclear Information System (INIS)

    Lin, W.

    1996-01-01

    Although nuclear technologies benefit everyone, the associated nuclear wastes are a widespread and rapidly growing problem. Nuclear power plants are in operation in 25 countries, and are under construction in others. Developing countries are hungry for electricity to promote economic growth; industrialized countries are eager to export nuclear technologies and equipment. These two ingredients, combined with the rapid shrinkage of worldwide fossil fuel reserves, will increase the utilization of nuclear power. All countries utilizing nuclear power produce at least a few tens of tons of spent fuel per year. That spent fuel (and reprocessing products, if any) constitutes high-level nuclear waste. Toxicity, long half-life, and immunity to chemical degradation make such waste an almost permanent threat to human beings. This report discusses the advantages of utilizing repositories for disposal of nuclear wastes

  15. Intergenerational ethics of high level radioactive waste

    Energy Technology Data Exchange (ETDEWEB)

    Takeda, Kunihiko [Nagoya Univ., Graduate School of Engineering, Nagoya, Aichi (Japan); Nasu, Akiko; Maruyama, Yoshihiro [Shibaura Inst. of Tech., Tokyo (Japan)

    2003-03-01

    The validity of intergenerational ethics on the geological disposal of high level radioactive waste originating from nuclear power plants was studied. The result of the study on geological disposal technology showed that the current method of disposal can be judged to be scientifically reliable for several hundred years and the radioactivity level will be less than one tenth of the tolerable amount after 1,000 years or more. This implies that the consideration of intergenerational ethics of geological disposal is meaningless. Ethics developed in western society states that the consent of people in the future is necessary if the disposal has influence on them. Moreover, the ethics depends on generally accepted ideas in western society and preconceptions based on racism and sexism. The irrationality becomes clearer by comparing the dangers of the exhaustion of natural resources and pollution from harmful substances in a recycling society. (author)

  16. Management of high level radioactive waste

    International Nuclear Information System (INIS)

    Redon, A.; Mamelle, J.; Chambon, M.

    1977-01-01

    Worldwide reprocessing needs will reach 10,000 t/y of irradiated fuels by the mid-1980s. Several countries have planned, in their nuclear programmes, the construction of reprocessing plants with a 1,500 t/y capacity, corresponding to 50,000 MWe installed. At such a level, the solidification of the radioactive waste will become imperative. For this reason, all efforts in France have been directed towards the realization of industrial plants capable of solidifying the fission products as a glassy material. The advantages of this decision, and the reasons for it, are presented. The continuing development work, the conditions and methods of storing the high-level wastes prior to solidification, the interim storage (for thermal decay), and the ultimate disposal after solidification are described [fr]

  17. Intergenerational ethics of high level radioactive waste

    International Nuclear Information System (INIS)

    Takeda, Kunihiko; Nasu, Akiko; Maruyama, Yoshihiro

    2003-01-01

    The validity of intergenerational ethics on the geological disposal of high level radioactive waste originating from nuclear power plants was studied. The result of the study on geological disposal technology showed that the current method of disposal can be judged to be scientifically reliable for several hundred years and the radioactivity level will be less than one tenth of the tolerable amount after 1,000 years or more. This implies that the consideration of intergenerational ethics of geological disposal is meaningless. Ethics developed in western society states that the consent of people in the future is necessary if the disposal has influence on them. Moreover, the ethics depends on generally accepted ideas in western society and preconceptions based on racism and sexism. The irrationality becomes clearer by comparing the dangers of the exhaustion of natural resources and pollution from harmful substances in a recycling society. (author)

  18. The CMS High Level Trigger System

    CERN Document Server

    Afaq, A; Bauer, G; Biery, K; Boyer, V; Branson, J; Brett, A; Cano, E; Carboni, A; Cheung, H; Ciganek, M; Cittolin, S; Dagenhart, W; Erhan, S; Gigi, D; Glege, F; Gómez-Reino, Robert; Gulmini, M; Gutiérrez-Mlot, E; Gutleber, J; Jacobs, C; Kim, J C; Klute, M; Kowalkowski, J; Lipeles, E; Lopez-Perez, Juan Antonio; Maron, G; Meijers, F; Meschi, E; Moser, R; Murray, S; Oh, A; Orsini, L; Paus, C; Petrucci, A; Pieri, M; Pollet, L; Rácz, A; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Sexton-Kennedy, E; Sumorok, K; Suzuki, I; Tsirigkas, D; Varela, J

    2007-01-01

    The CMS Data Acquisition (DAQ) System relies on a purely software driven High Level Trigger (HLT) to reduce the full Level-1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the Filter Farm and the procedures to validate the filtering code within the DAQ environment are described.
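
The filtering architecture described above can be sketched as a chain of software modules, each of which either enriches an event or rejects it. The module names, the toy "reconstruction", and the cut value below are illustrative inventions for the sketch, not CMS code:

```python
# Hypothetical sketch of an HLT-style filter chain: each module either
# rejects an event (returns None) or enriches it for the next module.

from typing import Callable, List, Optional

Event = dict
FilterModule = Callable[[Event], Optional[Event]]

def run_hlt(events: List[Event], modules: List[FilterModule]) -> List[Event]:
    """Run each event through the module sequence; keep the survivors."""
    accepted = []
    for ev in events:
        for module in modules:
            ev = module(ev)
            if ev is None:          # module rejected the event
                break
        if ev is not None:
            accepted.append(ev)
    return accepted

# Toy modules: "reconstruct" a quantity, then cut on it.
def reco_pt(ev: Event) -> Optional[Event]:
    ev["pt"] = ev["raw"] * 0.5      # pretend reconstruction
    return ev

def pt_cut(ev: Event) -> Optional[Event]:
    return ev if ev["pt"] > 20.0 else None

events = [{"raw": r} for r in (10.0, 50.0, 80.0)]
survivors = run_hlt(events, [reco_pt, pt_cut])
print(len(survivors))               # 2 of 3 events pass
```

In the real system the modules are C++ reconstruction and filtering code and the chain runs on a farm; the sketch only shows the sequential accept/reject logic that achieves the rate reduction.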

  19. Ramifications of defining high-level waste

    International Nuclear Information System (INIS)

    Wood, D.E.; Campbell, M.H.; Shupe, M.W.

    1987-01-01

    The Nuclear Regulatory Commission (NRC) is considering rule making to provide a concentration-based definition of high-level waste (HLW) under authority derived from the Nuclear Waste Policy Act (NWPA) of 1982 and the Low Level Waste Policy Amendments Act of 1985. The Department of Energy (DOE), which has the responsibility to dispose of certain kinds of commercial waste, is supporting development of a risk-based classification system by the Oak Ridge National Laboratory to assist in developing and implementing the NRC rule. The system is two dimensional, with the axes based on the phrases highly radioactive and requires permanent isolation in the definition of HLW in the NWPA. Defining HLW will reduce the ambiguity in the present source-based definition by providing concentration limits to establish which materials are to be called HLW. The system allows the possibility of greater-confinement disposal for some wastes which do not require the degree of isolation provided by a repository. The definition of HLW will provide a firm basis for waste processing options which involve partitioning of waste into a high-activity stream for repository disposal, and a low-activity stream for disposal elsewhere. Several possible classification systems have been derived and the characteristics of each are discussed. The Defense High Level Waste Technology Lead Office at DOE - Richland Operations Office, supported by Rockwell Hanford Operations, has coordinated reviews of the ORNL work by a technical peer review group and other DOE offices. The reviews produced several recommendations and identified several issues to be addressed in the NRC rule making. 10 references, 3 figures

  20. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
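
The data-decomposition and image-assembly concepts surveyed above can be illustrated with a toy renderer. Everything here (the shading function, strip sizes, threads standing in for render processors) is an assumption made for the sketch, not taken from the article:

```python
# Image-space decomposition sketch: split the frame into scanline strips,
# render each strip independently, then assemble the partial results.
# Threads stand in for the render processors; a real system would use
# processes or dedicated hardware.

from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 8, 8

def shade(x: int, y: int) -> int:
    """Toy per-pixel shading function."""
    return (x * x + y) % 256

def render_strip(rows):
    """Render one contiguous strip of scanlines."""
    return [(y, [shade(x, y) for x in range(WIDTH)]) for y in rows]

def parallel_render(n_workers: int = 4):
    # Static data decomposition: contiguous scanline ranges per worker.
    step = HEIGHT // n_workers
    strips = [range(y, y + step) for y in range(0, HEIGHT, step)]
    image = [None] * HEIGHT
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for strip in pool.map(render_strip, strips):
            for y, pixels in strip:      # image assembly step
                image[y] = pixels
    return image

serial = [[shade(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
print(parallel_render() == serial)       # parallel and serial frames agree
```

The static strip assignment keeps task granularity coarse; a load-balanced renderer would instead hand out smaller tiles from a shared work queue.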

  1. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed. Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techniques ...

  2. Engineering-scale vitrification of commercial high-level waste

    International Nuclear Information System (INIS)

    Bonner, W.F.; Bjorklund, W.J.; Hanson, M.S.; Knowlton, D.E.

    1980-04-01

    To date, technology for immobilizing commercial high-level waste (HLW) has been extensively developed, and two major demonstration projects have been completed: the Waste Solidification Engineering Prototypes (WSEP) Program and the Nuclear Waste Vitrification Project (NWVP). The feasibility of radioactive waste solidification was demonstrated in the WSEP program between 1966 and 1970 (McElroy et al. 1972) using simulated power-reactor waste composed of nonradioactive chemicals and HLW from spent Hanford reactor fuel. Thirty-three engineering-scale canisters of solidified HLW were produced during the operations. In early 1979, the NWVP demonstrated the vitrification of HLW from the processing of actual commercial nuclear fuel. This program consisted of two parts: (1) waste preparation and (2) vitrification by spray calcination and in-can melting. This report presents results from the NWVP

  3. Solidification of Savannah River Plant high level waste

    International Nuclear Information System (INIS)

    Maher, R.; Shafranek, L.F.; Kelley, J.A.; Zeyfang, R.W.

    1981-11-01

    Authorization for construction of the Defense Waste Processing Facility (DWPF) is expected in FY 83. The optimum time for stage 2 authorization is about three years later. Detailed design and construction will require approximately five years for stage 1, with stage 2 construction completed about two to three years later. Production of canisters of waste glass would begin in 1988, and the existing backlog of high level waste sludge stored at SRP would be worked off by about the year 2000. Stage 2 operation could begin in 1990. The technology and engineering are ready for construction and eventual operation of the DWPF for immobilizing high level radioactive waste at Savannah River Plant (SRP). Proceeding with this project will provide the public, and the leadership of this country, with a crucial demonstration that a major quantity of existing high level nuclear wastes can be safely and permanently immobilized. Early demonstration will both expedite and facilitate rational decision making on this aspect of the nuclear program. Delay in providing these facilities will result in significant DOE expenditures at SRP for new tanks just for continued temporary storage of wastes, and would probably result in dissipation of the intellectual and planning momentum that has built up in developing the project

  4. High level cognitive information processing in neural networks

    Science.gov (United States)

    Barnden, John A.; Fields, Christopher A.

    1992-01-01

    Two related research efforts were addressed: (1) high-level connectionist cognitive modeling; and (2) local neural circuit modeling. The goals of the first effort were to develop connectionist models of high-level cognitive processes such as problem solving or natural language understanding, and to understand the computational requirements of such models. The goals of the second effort were to develop biologically-realistic models of local neural circuits, and to understand the computational behavior of such models. In keeping with the nature of NASA's Innovative Research Program, all the work conducted under the grant was highly innovative. For instance, the following ideas, all summarized, are contributions to the study of connectionist/neural networks: (1) the temporal-winner-take-all, relative-position encoding, and pattern-similarity association techniques; (2) the importation of logical combinators into connectionism; (3) the use of analogy-based reasoning as a bridge across the gap between the traditional symbolic paradigm and the connectionist paradigm; and (4) the application of connectionism to the domain of belief representation/reasoning. The work on local neural circuit modeling also departs significantly from the work of related researchers. In particular, its concentration on low-level neural phenomena that could support high-level cognitive processing is unusual within the area of biological local circuit modeling, and also serves to expand the horizons of the artificial neural net field.

  5. Materials Science of High-Level Nuclear Waste Immobilization

    International Nuclear Information System (INIS)

    Weber, William J.; Navrotsky, Alexandra; Stefanovsky, S. V.; Vance, E. R.; Vernaz, Etienne Y.

    2009-01-01

    With the increasing demand for the development of more nuclear power comes the responsibility to address the technical challenges of immobilizing high-level nuclear wastes in stable solid forms for interim storage or disposition in geologic repositories. The immobilization of high-level nuclear wastes has been an active area of research and development for over 50 years. Borosilicate glasses and complex ceramic composites have been developed to meet many technical challenges and current needs, although regulatory issues, which vary widely from country to country, have yet to be resolved. Cooperative international programs to develop advanced proliferation-resistant nuclear technologies to close the nuclear fuel cycle and increase the efficiency of nuclear energy production might create new separation waste streams that could demand new concepts and materials for nuclear waste immobilization. This article reviews the current state-of-the-art understanding regarding the materials science of glasses and ceramics for the immobilization of high-level nuclear waste and excess nuclear materials and discusses approaches to address new waste streams

  6. High-level waste melter alternatives assessment report

    Energy Technology Data Exchange (ETDEWEB)

    Calmus, R.B.

    1995-02-01

    This document describes the Tank Waste Remediation System (TWRS) High-Level Waste (HLW) Program's (hereafter referred to as HLW Program) Melter Candidate Assessment Activity performed in fiscal year (FY) 1994. The mission of the TWRS Program is to store, treat, and immobilize highly radioactive Hanford Site waste (current and future tank waste and encapsulated strontium and cesium isotopic sources) in an environmentally sound, safe, and cost-effective manner. The goal of the HLW Program is to immobilize the HLW fraction of pretreated tank waste into a vitrified product suitable for interim onsite storage and eventual offsite disposal at a geologic repository. Preparation of the encapsulated strontium and cesium isotopic sources for final disposal is also included in the HLW Program. As a result of trade studies performed in 1992 and 1993, processes planned for pretreatment of tank wastes were modified substantially because of increasing estimates of the quantity of high-level and transuranic tank waste remaining after pretreatment. This resulted in substantial increases in needed vitrification plant capacity compared to the capacity of the original Hanford Waste Vitrification Plant (HWVP). The required capacity has not been finalized, but is expected to be four to eight times that of the HWVP design. The increased capacity requirements for the HLW vitrification plant's melter prompted the assessment of candidate high-capacity HLW melter technologies to determine the most viable candidates and the development and testing (D and T) focus required to select the Hanford Site HLW vitrification plant melter system. An assessment process was developed in early 1994. This document describes the assessment team, roles of team members, the phased assessment process and results, resulting recommendations, and the implementation strategy.

  7. Programs Lucky and LuckyC - 3D parallel transport codes for the multi-group transport equation solution for XYZ geometry by Pm Sn method

    International Nuclear Information System (INIS)

    Moriakov, A.; Vasyukhno, V.; Netecha, M.; Khacheresov, G.

    2003-01-01

    Powerful supercomputers are available today; MBC-1000M is one Russian supercomputer that can be used through remote access. The programs LUCKY and LUCKY C were created for multi-processor systems. They use algorithms designed specifically for such computers and the MPI (message passing interface) library for exchanges between processors. LUCKY solves shielding tasks by the multigroup discrete ordinates method; LUCKY C solves criticality tasks by the same method. Only XYZ orthogonal geometry is available; with sufficiently small spatial steps for approximating the discrete operator, this geometry can serve as a universal one for describing complex geometrical structures. Cross-section libraries in GIT format are used, with nuclear data expanded up to the P8 approximation in Legendre polynomials. The programming language is Fortran-90. 'Vector' processors could yield a speedup of up to 30 times, but unfortunately the MBC-1000M does not have them. Nevertheless, good parallel efficiency was obtained with 'space' (LUCKY) and 'space and energy' (LUCKY C) parallelization. The AUTOCAD program is used to check the geometry after input data processing. The programs have a powerful geometry module, a convenient tool for describing any geometry. Output results can be post-processed by graphics programs on a personal computer. (authors)
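
The "space" parallelization mentioned above can be illustrated schematically. LUCKY itself is Fortran-90 with MPI; the sketch below is a hypothetical Python analogue that simulates the ranks in a single process and substitutes a toy 1-D Jacobi relaxation for the discrete-ordinates sweep:

```python
# Schematic of a "space" decomposition: the mesh is split into blocks,
# each simulated rank sweeps its own block, and edge cells are exchanged
# between neighbours in place of MPI send/receive calls.

def sweep(u):
    """One serial Jacobi sweep; the two end points stay fixed."""
    return [u[0]] + [(u[i-1] + u[i+1]) / 2 for i in range(1, len(u) - 1)] + [u[-1]]

def parallel_sweeps(u, n_ranks, n_iters):
    block = len(u) // n_ranks
    local = [u[r*block:(r+1)*block] for r in range(n_ranks)]
    for _ in range(n_iters):
        # Halo exchange: every rank reads its neighbours' OLD edge cells
        # before anyone updates (mimics a synchronous exchange).
        halo_l = [local[r-1][-1] if r > 0 else None for r in range(n_ranks)]
        halo_r = [local[r+1][0] if r < n_ranks - 1 else None for r in range(n_ranks)]
        new = []
        for r in range(n_ranks):
            ext = ([halo_l[r]] if halo_l[r] is not None else []) \
                + local[r] \
                + ([halo_r[r]] if halo_r[r] is not None else [])
            swept = sweep(ext)
            lo = 1 if halo_l[r] is not None else 0
            hi = len(swept) - (1 if halo_r[r] is not None else 0)
            new.append(swept[lo:hi])     # drop the halo cells again
        local = new
    return [x for blk in local for x in blk]

u0 = [1.0, 5.0, 2.0, 8.0, 3.0, 9.0, 4.0, 6.0]
serial = u0
for _ in range(3):
    serial = sweep(serial)
print(parallel_sweeps(u0, n_ranks=2, n_iters=3) == serial)  # decomposition is exact
```

Because each rank reads its neighbours' old edge values before updating, the decomposed sweep reproduces the serial sweep exactly, which is the property a spatial decomposition must preserve.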

  8. Managing the nation's commercial high-level radioactive waste

    International Nuclear Information System (INIS)

    1985-03-01

    This report presents the findings and conclusions of OTA's analysis of Federal policy for the management of commercial high-level radioactive waste. It is intended to contribute to the implementation of the Nuclear Waste Policy Act of 1982 (NWPA). The major conclusion of that review is that NWPA provides sufficient authority for developing and operating a waste management system based on disposal in geologic repositories. Substantial new authority for other facilities will not be required unless major unexpected problems with geologic disposal are encountered. OTA also concludes that DOE's Draft Mission Plan published in 1984 falls short of its potential for enhancing the credibility and acceptability of the waste management program

  9. Development and evaluation of candidate high-level waste forms

    International Nuclear Information System (INIS)

    Bernadzikowski, T.A.

    1981-01-01

    Some seventeen candidate waste forms have been investigated under US Department of Energy programs as potential media for the immobilization and geologic disposal of the high-level radioactive wastes (HLW) resulting from chemical processing of nuclear reactor fuels and targets. Two of these HLW forms were selected at the end of fiscal year (FY) 1981 for intensive development in FY 1982 to 1983. Borosilicate glass was continued as the reference form. A crystalline ceramic waste form, SYNROC, was selected for further product formulation and process development as the alternative to borosilicate glass. This paper describes the bases on which this decision was made

  10. High-level nuclear waste disposal: Ethical considerations

    International Nuclear Information System (INIS)

    Maxey, M.N.

    1985-01-01

    Popular skepticism about, and moral objections to, recent legislation providing for the management and permanent disposal of high-level radioactive wastes have derived their credibility from two major sources: government procrastination in enacting a waste disposal program, which reinforced public perceptions of the wastes' unprecedented danger; and the inflated rhetoric and pretensions to professional omnicompetence of influential scientists with nuclear expertise. Ethical considerations not only can but must provide a mediating framework for the resolution of such a polarized political controversy. Implicit in moral objections to proposals for permanent nuclear waste disposal are concerns about three ethical principles: fairness to individuals, equitable protection among diverse social groups, and informed consent through due process and participation

  11. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.
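
The low/intermediate/high level division can be made concrete with a toy pipeline; the stage functions, threshold, and labels below are illustrative inventions, not drawn from the thesis:

```python
# Toy integrated-vision pipeline on a 1-D "image": a low-level stage
# (gradient filtering), an intermediate stage (grouping responses into
# segments), and a high-level stage (labelling from grouped evidence).

def low_level(image):
    """Edge-like measure: absolute gradient between neighbouring pixels."""
    return [abs(image[i+1] - image[i]) for i in range(len(image) - 1)]

def intermediate_level(edges, thresh=3):
    """Group strong responses into candidate segments (runs of indices)."""
    segments, run = [], []
    for i, e in enumerate(edges):
        if e >= thresh:
            run.append(i)
        elif run:
            segments.append(run)
            run = []
    if run:
        segments.append(run)
    return segments

def high_level(segments):
    """'Recognize' an object if enough evidence was grouped together."""
    return "object" if any(len(s) >= 2 for s in segments) else "background"

signal = [0, 0, 5, 9, 9, 1, 0]
edges = low_level(signal)                 # [0, 5, 4, 0, 8, 1]
label = high_level(intermediate_level(edges))
print(label)                              # "object"
```

The point of the sketch is the data-flow structure: each stage consumes the previous stage's output, and the three stages have very different computation patterns, which is what makes a single parallel architecture for an IVS hard to design.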

  12. Regulatable elements in the high-level waste management program

    International Nuclear Information System (INIS)

    Oakley, D.

    1979-01-01

    Regulatable elements of a deep geological nuclear waste isolation system are those characteristics of a candidate system which need to be specified to achieve control of its performance. This report identifies the regulatable elements with respect to waste form, repository design, site suitability, and the modeling and decision analysis processes. Regulatable elements in each section are listed and described briefly as they affect the short-term and long-term performance of a deep geological repository

  13. Application of high level programs in a controls environment

    International Nuclear Information System (INIS)

    Kost, C.J.; Mouat, M.; Dohan, D.A.

    1987-01-01

    Highly interactive display utilities, operating on a VAX/VMS computer system, have been usefully interfaced to the controls environment of the TRIUMF cyclotron. Machine data is acquired through a VAX-CAMAC interface and is passed to these utilities efficiently by memory mapping to global sections for on-line manipulation. The data can also be readily analyzed off-line by operators with the user-friendly, command-driven utilities OPDATA and PLOTDATA, which let the user obtain graphics output on a variety of terminal and hardcopy devices using device-independent metafiles. Sample applications show the usefulness of these utilities for a wide range of tasks, such as real-time simulation of the effect of trim-coil tuning on the beam phase history, and semi-on-line analysis of radial probe data

  14. High level waste fixation in cermet form

    International Nuclear Information System (INIS)

    Kobisk, E.H.; Aaron, W.S.; Quinby, T.C.; Ramey, D.W.

    1981-01-01

    Commercial and defense high level waste fixation in cermet form is being studied by personnel of the Isotopes Research Materials Laboratory, Solid State Division (ORNL). As a corollary to earlier research and development in forming high density ceramic and cermet rods, disks, and other shapes using separated isotopes, similar chemical and physical processing methods have been applied to synthetic and real waste fixation. Generally, experimental products resulting from this approach have shown physical and chemical characteristics suitable for long-term storage, shipping, corrosive environments, high temperature environments, high waste loading, decay heat dissipation, and radiation damage. Although leach tests are not conclusive, the little comparative data available show that cermet withstands hydrothermal conditions in water and brine solutions. The Soxhlet leach test, using radioactive cesium as a tracer, showed that leaching of cermet was about 100 times less than that of 78 to 68 glass. Using essentially uncooled, untreated waste, cermet fixation was found to accommodate up to 75% waste loading; yet, because of its high thermal conductivity, a monolith 0.6 m in diameter and 3.3 m in length would have a maximum centerline temperature of only 29 K above ambient

  15. Tracking at High Level Trigger in CMS

    CERN Document Server

    Tosi, Mia

    2016-01-01

    The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with detector readout, offline storage and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, for the reconstruction of the physics objects as well as in the identification of b-jets and ...

  16. Performance of the CMS High Level Trigger

    CERN Document Server

    Perrotta, Andrea

    2015-01-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. The increase in the number of interactions per bunch crossing, on average 25 in 2012, and expected to be around 40 in Run II, will be an additional complication. We present here the expected performance of the main triggers that will be used during the 2015 data taking campaign, paying particular attention to the new approaches that have been developed to cope with the challenges of the new run. This includes improvements in HLT electron and photon reconstruction as well as better performing muon triggers. We will also present the performance of the improved trac...

  17. Vitrification of high-level liquid wastes

    International Nuclear Information System (INIS)

    Varani, J.L.; Petraitis, E.J.; Vazquez, Antonio.

    1987-01-01

    High-level radioactive liquid wastes produced in fuel element reprocessing require, for their disposal, a preliminary treatment by which, through a series of engineered barriers, dispersion into the biosphere is delayed by 10,000 years. Four groups of compounds are distinguished among a great variety of final products and elaboration methods; from these, borosilicate glasses were chosen. Vitrification experiments were carried out at laboratory scale with simulated radioactive wastes, employing different borosilicate glass compositions. The installations are described. A series of tests was carried out on four basic formulae, always with the same methodology, consisting of a dry mixture of the vitreous matrix products and a dry simulated waste mixture. Several quality tests of the glasses were made: (1) leaching behaviour, following the DIN 12 111 standard; (2) mechanical resistance, studying parameters related to how readily the different glasses increase their surface area; (3) degree of devitrification, showing that devitrification makes glasses containing radioactive wastes easily leachable. Of all the glasses tested, the composition SiO2-Al2O3-B2O3-Na2O-CaO shows the best retention characteristics. (M.E.L.)

  18. Ocean disposal of high level radioactive waste

    International Nuclear Information System (INIS)

    1983-01-01

    This study confirms, subject to limitations of current knowledge, the engineering feasibility of free fall penetrators for High Level Radioactive Waste disposal in deep ocean seabed sediments. Restricted sediment property information is presently the principal bar to an unqualified statement of feasibility. A 10m minimum embedment and a 500 year engineered barrier waste containment life are identified as appropriate basic penetrator design criteria at this stage. A range of designs are considered in which the length, weight and cross section of the penetrator are varied. Penetrators from 3m to 20m long and 2t to 100t in weight constructed of material types and thicknesses to give a 500 year containment life are evaluated. The report concludes that the greatest degree of confidence is associated with performance predictions for 75 to 200 mm thick soft iron and welded joints. A range of lengths and capacities from a 3m long single waste canister penetrator to a 20m long 12 canister design are identified as meriting further study. Estimated embedment depths for this range of penetrator designs lie between 12m and 90m. Alternative manufacture, transport and launch operations are assessed and recommendations are made. (author)

  19. Vitrification of high level wastes in France

    International Nuclear Information System (INIS)

    Sombret, C.

    1984-02-01

    A brief historical background of the research and development work conducted in France over 25 years is first presented. The paper then deals with vitrification at (1) the UP1 reprocessing plant (Marcoule) and (2) the UP2 and UP3 reprocessing plants (La Hague). (1) The glass properties required for high-level radioactive waste vitrification are recalled, and the vitrification process and facility at Marcoule are presented. (2) The average characteristics (chemical composition, activity) of LWR fission product solutions are given. The glass formulations developed to solidify LWR waste solutions must meet the same requirements as those used in the UP1 facility at Marcoule. Three important aspects must be considered with respect to the glass fabrication process: corrosiveness of the molten glass with regard to metals, viscosity of the molten glass, and volatilization during glass fabrication. The glass properties required in view of interim storage and long-term disposal are then developed at length. Two identical vitrification facilities are planned for the site: R7, to process the UP2 throughput, and T7 for the UP3 plant. A prototype unit was built and operated at Marcoule

  20. CMS High Level Trigger Timing Measurements

    International Nuclear Information System (INIS)

    Richardson, Clint

    2015-01-01

    The two-level trigger system employed by CMS consists of the Level 1 (L1) Trigger, which is implemented using custom-built electronics, and the High Level Trigger (HLT), a farm of commercial CPUs running a streamlined version of the offline CMS reconstruction software. The operational L1 output rate of 100 kHz, together with the number of CPUs in the HLT farm, imposes a fundamental constraint on the amount of time available for the HLT to process events. Exceeding this limit impacts the experiment's ability to collect data efficiently. Hence, there is a critical need to characterize the performance of the HLT farm as well as the algorithms run prior to start up in order to ensure optimal data taking. Additional complications arise from the fact that the HLT farm consists of multiple generations of hardware and there can be subtleties in machine performance. We present our methods of measuring the timing performance of the CMS HLT, including the challenges of making such measurements. Results for the performance of various Intel Xeon architectures from 2009-2014 and different data taking scenarios are also presented. (paper)
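
A per-event timing measurement of the kind described can be sketched as follows. The processing function, event count, and budget value are illustrative stand-ins; only the wall-clock bookkeeping reflects the general technique:

```python
# Minimal per-event timing sketch: wrap the processing function with a
# monotonic high-resolution clock and compare the per-event average
# against a nominal time budget (the numbers here are illustrative).

import time

BUDGET_MS = 200.0                    # nominal per-event budget (assumed)

def process_event(n):
    """Stand-in reconstruction: some busy work per event."""
    return sum(i * i for i in range(n))

def time_events(events):
    timings = []
    for ev in events:
        t0 = time.perf_counter()     # monotonic, unaffected by clock changes
        process_event(ev)
        timings.append((time.perf_counter() - t0) * 1e3)  # milliseconds
    return timings

timings = time_events([10_000] * 50)
mean_ms = sum(timings) / len(timings)
print(mean_ms < BUDGET_MS)
```

In a production measurement one would also look at the tail of the timing distribution, not just the mean, since slow outliers on a heterogeneous farm are exactly what threatens the overall budget.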

  1. Decontamination of high-level waste canisters

    International Nuclear Information System (INIS)

    Nesbitt, J.F.; Slate, S.C.; Fetrow, L.K.

    1980-12-01

    This report presents evaluations of several methods for the in-process decontamination of metallic canisters containing any one of a number of solidified high-level waste (HLW) forms. The use of steam-water, steam, abrasive blasting, electropolishing, liquid honing, vibratory finishing and soaking have been tested or evaluated as potential techniques to decontaminate the outer surfaces of HLW canisters. Either these techniques have been tested or available literature has been examined to assess their applicability to the decontamination of HLW canisters. Electropolishing has been found to be the most thorough method to remove radionuclides and other foreign material that may be deposited on or in the outer surface of a canister during any of the HLW processes. Steam or steam-water spraying techniques may be adequate for some applications but fail to remove all contaminated forms that could be present in some of the HLW processes. Liquid honing and abrasive blasting remove contamination and foreign material very quickly and effectively from small areas and components although these blasting techniques tend to disperse the material removed from the cleaned surfaces. Vibratory finishing is very capable of removing the bulk of contamination and foreign matter from a variety of materials. However, special vibratory finishing equipment would have to be designed and adapted for a remote process. Soaking techniques take long periods of time and may not remove all of the smearable contamination. If soaking involves pickling baths that use corrosive agents, these agents may cause erosion of grain boundaries that results in rough surfaces

  2. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  3. High-level waste melter alternatives assessment report

    International Nuclear Information System (INIS)

    Calmus, R.B.

    1995-02-01

    This document describes the Tank Waste Remediation System (TWRS) High-Level Waste (HLW) Program's (hereafter referred to as HLW Program) Melter Candidate Assessment Activity performed in fiscal year (FY) 1994. The mission of the TWRS Program is to store, treat, and immobilize highly radioactive Hanford Site waste (current and future tank waste and encapsulated strontium and cesium isotopic sources) in an environmentally sound, safe, and cost-effective manner. The goal of the HLW Program is to immobilize the HLW fraction of pretreated tank waste into a vitrified product suitable for interim onsite storage and eventual offsite disposal at a geologic repository. Preparation of the encapsulated strontium and cesium isotopic sources for final disposal is also included in the HLW Program. As a result of trade studies performed in 1992 and 1993, processes planned for pretreatment of tank wastes were modified substantially because of increasing estimates of the quantity of high-level and transuranic tank waste remaining after pretreatment. This resulted in substantial increases in needed vitrification plant capacity compared to the capacity of original Hanford Waste Vitrification Plant (HWVP). The required capacity has not been finalized, but is expected to be four to eight times that of the HWVP design. The increased capacity requirements for the HLW vitrification plant's melter prompted the assessment of candidate high-capacity HLW melter technologies to determine the most viable candidates and the required development and testing (D and T) focus required to select the Hanford Site HLW vitrification plant melter system. An assessment process was developed in early 1994. This document describes the assessment team, roles of team members, the phased assessment process and results, resulting recommendations, and the implementation strategy

  4. User-friendly parallelization of GAUDI applications with Python

    International Nuclear Information System (INIS)

    Mato, Pere; Smith, Eoin

    2010-01-01

    GAUDI is a software framework in C++ used to build event data processing applications using a set of standard components with well-defined interfaces. Simulation, high-level trigger, reconstruction, and analysis programs used by several experiments are developed using GAUDI. These applications can be configured and driven by simple Python scripts. Given that a considerable amount of existing software has been developed using a serial methodology, and has existed in some cases for many years, implementation of parallelisation techniques at the framework level may offer a way of exploiting current multi-core technologies to maximize performance and reduce latencies without re-writing thousands/millions of lines of code. In the solution we have developed, the parallelization techniques are introduced in the high-level Python scripts that configure and drive the applications, such that the core C++ application code requires no modification and end users need make only minimal changes to their scripts. The developed solution leverages existing generic Python modules that support parallel processing. Naturally, the parallel version of a given program should produce results consistent with its serial execution. Evaluations of several prototypes incorporating various parallelization techniques are presented and discussed.
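
    The script-level approach described above can be illustrated with a minimal Python sketch (the function names and the placeholder event computation are hypothetical, not part of GAUDI): the driver script parallelizes dispatch with the standard multiprocessing module, the per-event code is left untouched, and the parallel run must reproduce the serial result.

```python
from multiprocessing import Pool

def process_event(event_id):
    # Hypothetical stand-in for the unmodified per-event framework code;
    # in the paper this would be the C++ GAUDI application, not Python.
    return event_id * event_id

def run_serial(events):
    return [process_event(e) for e in events]

def run_parallel(events, workers=2):
    # Parallelism lives entirely in the high-level driver script:
    # process_event itself requires no modification.
    with Pool(workers) as pool:
        return pool.map(process_event, events)

if __name__ == "__main__":
    events = list(range(8))
    # The parallel version must produce results consistent with serial execution.
    assert run_parallel(events) == run_serial(events)
```

    The same pattern extends to any driver script whose per-event work is independent; only the dispatch layer changes between the serial and parallel runs.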

  5. User-friendly parallelization of GAUDI applications with Python

    Energy Technology Data Exchange (ETDEWEB)

    Mato, Pere; Smith, Eoin, E-mail: pere.mato@cern.c [PH Department, CERN, 1211 Geneva 23 (Switzerland)

    2010-04-01

    GAUDI is a software framework in C++ used to build event data processing applications using a set of standard components with well-defined interfaces. Simulation, high-level trigger, reconstruction, and analysis programs used by several experiments are developed using GAUDI. These applications can be configured and driven by simple Python scripts. Given that a considerable amount of existing software has been developed using a serial methodology, and has existed in some cases for many years, implementation of parallelisation techniques at the framework level may offer a way of exploiting current multi-core technologies to maximize performance and reduce latencies without re-writing thousands/millions of lines of code. In the solution we have developed, the parallelization techniques are introduced in the high-level Python scripts that configure and drive the applications, such that the core C++ application code requires no modification and end users need make only minimal changes to their scripts. The developed solution leverages existing generic Python modules that support parallel processing. Naturally, the parallel version of a given program should produce results consistent with its serial execution. Evaluations of several prototypes incorporating various parallelization techniques are presented and discussed.

  6. DEFENSE HIGH LEVEL WASTE GLASS DEGRADATION

    International Nuclear Information System (INIS)

    Ebert, W.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the analyses that were done to develop models for radionuclide release from high-level waste (HLW) glass dissolution that can be integrated into performance assessment (PA) calculations conducted to support site recommendation and license application for the Yucca Mountain site. This report was developed in accordance with the ''Technical Work Plan for Waste Form Degradation Process Model Report for SR'' (CRWMS M&O 2000a). It specifically addresses the item, ''Defense High Level Waste Glass Degradation'', of the product technical work plan. The AP-3.15Q Attachment 1 screening criteria determine the importance for its intended use of the HLW glass model derived herein to be in the category ''Other Factors for the Postclosure Safety Case-Waste Form Performance'', and thus indicate that this factor does not contribute significantly to the postclosure safety strategy. Because the release of radionuclides from the glass will depend on the prior dissolution of the glass, the dissolution rate of the glass imposes an upper bound on the radionuclide release rate. The approach taken to provide a bound for the radionuclide release is to develop models that can be used to calculate the dissolution rate of waste glass when contacted by water in the disposal site. The release rate of a particular radionuclide can then be calculated by multiplying the glass dissolution rate by the mass fraction of that radionuclide in the glass and by the surface area of glass contacted by water. The scope includes consideration of the three modes by which water may contact waste glass in the disposal system: contact by humid air, dripping water, and immersion. The models for glass dissolution under these contact modes are all based on the rate expression for aqueous dissolution of borosilicate glasses.
The mechanism and rate expression for aqueous dissolution are adequately understood; the analyses in this AMR were conducted to
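
    The bounding release-rate calculation described in this record (glass dissolution rate multiplied by the nuclide's mass fraction and by the wetted glass surface area) can be sketched in Python; the function name, units, and all numbers below are illustrative assumptions, not values from the AMR.

```python
def radionuclide_release_rate(dissolution_rate, mass_fraction, wetted_area):
    """Upper-bound release rate (g/yr): glass dissolution rate (g/m^2/yr)
    times the nuclide's mass fraction in the glass times the glass surface
    area contacted by water (m^2)."""
    return dissolution_rate * mass_fraction * wetted_area

# Illustrative inputs only (not values from the AMR):
rate = radionuclide_release_rate(dissolution_rate=1e-3,  # g/m^2/yr
                                 mass_fraction=1e-4,
                                 wetted_area=50.0)       # m^2
# With these inputs the bound is 5e-6 g/yr.
```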

  7. Algorithms for computational fluid dynamics on parallel processors

    International Nuclear Information System (INIS)

    Van de Velde, E.F.

    1986-01-01

    A study of parallel algorithms for the numerical solution of partial differential equations arising in computational fluid dynamics is presented. The actual implementation on parallel processors of shared and nonshared memory design is discussed. The performance of these algorithms is analyzed in terms of machine efficiency, communication time, bottlenecks and software development costs. For elliptic equations, a parallel preconditioned conjugate gradient method is described, which has been used to solve pressure equations discretized with high order finite elements on irregular grids. A parallel full multigrid method and a parallel fast Poisson solver are also presented. Hyperbolic conservation laws were discretized with parallel versions of finite difference methods like the Lax-Wendroff scheme and with the Random Choice method. Techniques are developed for comparing the behavior of an algorithm on different architectures as a function of problem size and local computational effort. Effective use of these advanced architecture machines requires the use of machine dependent programming. It is shown that the portability problems can be minimized by introducing high level operations on vectors and matrices structured into program libraries
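
    The closing point of this record, minimizing portability problems by writing algorithms against high-level vector and matrix operations collected in a program library, can be sketched as follows. This is a toy Python illustration under assumed names, using an unpreconditioned conjugate gradient solver rather than the preconditioned, finite-element version the record describes: only the primitives dot, axpy, and the matrix-vector product would need machine-specific tuning.

```python
# The solver below is written purely in terms of high-level operations,
# so only these few primitives would be re-implemented (vectorized,
# distributed, ...) on each target machine.

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def axpy(alpha, x, y):
    """Return y + alpha * x, elementwise."""
    return [b + alpha * a for a, b in zip(x, y)]

def conjugate_gradient(matvec, b, x0, iters=50, tol=1e-20):
    """Unpreconditioned CG for a symmetric positive definite operator."""
    x = list(x0)
    r = axpy(-1.0, matvec(x), b)      # r = b - A x
    p = list(r)
    rs = dot(r, r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)
        x = axpy(alpha, p, x)         # x += alpha * p
        r = axpy(-alpha, Ap, r)       # r -= alpha * A p
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = axpy(rs_new / rs, p, r)   # p = r + beta * p
        rs = rs_new
    return x

# 2x2 SPD system: A = [[4, 1], [1, 3]], b = [1, 2]; solution is [1/11, 7/11].
A = [[4.0, 1.0], [1.0, 3.0]]
x = conjugate_gradient(lambda v: [dot(row, v) for row in A], [1.0, 2.0], [0.0, 0.0])
```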

  8. High-level radioactive waste management

    International Nuclear Information System (INIS)

    Schneider, K.J.; Liikala, R.C.

    1974-01-01

    High-level radioactive waste in the U.S. will be converted to an encapsulated solid and shipped to a Federal repository for retrievable storage for extended periods. Meanwhile the development of concepts for ultimate disposal of the waste which the Federal Government would manage is being actively pursued. A number of promising concepts have been proposed, for which there is high confidence that one or more will be suitable for long-term, ultimate disposal. Initial evaluations of technical (or theoretical) feasibility for the various waste disposal concepts show that in the broad category, (i.e., geologic, seabed, ice sheet, extraterrestrial, and transmutation) all meet the criteria for judging feasibility, though a few alternatives within these categories do not. Preliminary cost estimates show that, although many millions of dollars may be required, the cost for even the most exotic concepts is small relative to the total cost of electric power generation. For example, the cost estimates for terrestrial disposal concepts are less than 1 percent of the total generating costs. The cost for actinide transmutation is estimated at around 1 percent of generation costs, while actinide element disposal in space is less than 5 percent of generating costs. Thus neither technical feasibility nor cost seems to be a no-go factor in selecting a waste management system. The seabed, ice sheet, and space disposal concepts face international policy constraints. The information being developed currently in safety, environmental concern, and public response will be important factors in determining which concepts appear most promising for further development

  9. Parallel Simulation of Loosely Timed SystemC/TLM Programs: Challenges Raised by an Industrial Case Study

    Directory of Open Access Journals (Sweden)

    Denis Becker

    2016-05-01

    Transaction level models of systems-on-chip in SystemC are commonly used in the industry to provide an early simulation environment. The SystemC standard imposes coroutine semantics for the scheduling of simulated processes, to ensure determinism and reproducibility of simulations. However, because of this, sequential implementations have, for a long time, been the only option available, and even now the reference implementation is sequential. With the increasing size and complexity of models, and the multiplication of computation cores on recent machines, the parallelization of SystemC simulations is a major research concern. There have been several proposals for SystemC parallelization, but most of them are limited to cycle-accurate models. In this paper we focus on loosely timed models, which are commonly used in the industry. We present an industrial context and show that, unfortunately, most of the existing approaches for SystemC parallelization fundamentally cannot apply in this context. We support this claim with a set of measurements performed on a platform used in production at STMicroelectronics. This paper surveys existing techniques, presents a visualization and profiling tool, and identifies unsolved challenges in the parallelization of SystemC models at transaction level.
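
    The coroutine semantics mentioned in this record (one simulated process runs at a time and yields control explicitly, so a fixed dispatch order makes the schedule deterministic and reproducible) can be mimicked with Python generators. This is a toy sketch with hypothetical names, not the SystemC kernel.

```python
# Exactly one process runs at a time and yields control cooperatively;
# dispatching in a fixed order reproduces the same interleaving every run.

def make_process(name, steps, log):
    def proc():
        for i in range(steps):
            log.append((name, i))  # simulated side effect
            yield                  # cooperative yield back to the kernel
    return proc()

def run_kernel(processes):
    # Dispatch runnable processes round-robin in a fixed, deterministic order.
    runnable = list(processes)
    while runnable:
        still_runnable = []
        for p in runnable:
            try:
                next(p)
                still_runnable.append(p)
            except StopIteration:
                pass
        runnable = still_runnable

log = []
run_kernel([make_process("A", 2, log), make_process("B", 2, log)])
# The interleaving is always [("A", 0), ("B", 0), ("A", 1), ("B", 1)].
```

    Parallelizing such a kernel is hard precisely because any change to this dispatch order can change the observable result, which is the tension the paper examines.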

  10. Spent Fuel and High-Level Radioactive Waste Transportation Report

    International Nuclear Information System (INIS)

    1992-03-01

    This publication is intended to provide its readers with an introduction to the issues surrounding the subject of transportation of spent nuclear fuel and high-level radioactive waste, especially as those issues impact the southern region of the United States. It was originally issued by SSEB in July 1987 as the Spent Nuclear Fuel and High-Level Radioactive Waste Transportation Primer, a document patterned on work performed by the Western Interstate Energy Board and designed as a ''comprehensive overview of the issues.'' This work differs from that earlier effort in that it is designed for the educated layman with little or no background in nuclear waste issues. In addition, this document is not a comprehensive examination of nuclear waste issues but should instead serve as a general introduction to the subject. Owing to changes in the nuclear waste management system, program activities by the US Department of Energy and other federal agencies, and developing technologies, much of this information quickly becomes dated. While this report uses the most recent data available, readers should keep in mind that some of the material is subject to rapid change. SSEB plans periodic updates in the future to account for changes in the program. Replacement pages will be supplied to all parties in receipt of this publication provided they remain on the SSEB mailing list

  11. Spent fuel and high-level radioactive waste transportation report

    Energy Technology Data Exchange (ETDEWEB)

    1989-11-01

    This publication is intended to provide its readers with an introduction to the issues surrounding the subject of transportation of spent nuclear fuel and high-level radioactive waste, especially as those issues impact the southern region of the United States. It was originally issued by the Southern States Energy Board (SSEB) in July 1987 as the Spent Nuclear Fuel and High-Level Radioactive Waste Transportation Primer, a document patterned on work performed by the Western Interstate Energy Board and designed as a ``comprehensive overview of the issues.`` This work differs from that earlier effort in that it is designed for the educated layman with little or no background in nuclear waste issues. In addition, this document is not a comprehensive examination of nuclear waste issues but should instead serve as a general introduction to the subject. Owing to changes in the nuclear waste management system, program activities by the US Department of Energy and other federal agencies, and developing technologies, much of this information quickly becomes dated. While this report uses the most recent data available, readers should keep in mind that some of the material is subject to rapid change. SSEB plans periodic updates in the future to account for changes in the program. Replacement pages will be supplied to all parties in receipt of this publication provided they remain on the SSEB mailing list.

  12. Spent fuel and high-level radioactive waste transportation report

    International Nuclear Information System (INIS)

    1989-11-01

    This publication is intended to provide its readers with an introduction to the issues surrounding the subject of transportation of spent nuclear fuel and high-level radioactive waste, especially as those issues impact the southern region of the United States. It was originally issued by the Southern States Energy Board (SSEB) in July 1987 as the Spent Nuclear Fuel and High-Level Radioactive Waste Transportation Primer, a document patterned on work performed by the Western Interstate Energy Board and designed as a ''comprehensive overview of the issues.'' This work differs from that earlier effort in that it is designed for the educated layman with little or no background in nuclear waste issues. In addition, this document is not a comprehensive examination of nuclear waste issues but should instead serve as a general introduction to the subject. Owing to changes in the nuclear waste management system, program activities by the US Department of Energy and other federal agencies, and developing technologies, much of this information quickly becomes dated. While this report uses the most recent data available, readers should keep in mind that some of the material is subject to rapid change. SSEB plans periodic updates in the future to account for changes in the program. Replacement pages will be supplied to all parties in receipt of this publication provided they remain on the SSEB mailing list

  13. Spent fuel and high-level radioactive waste transportation report

    International Nuclear Information System (INIS)

    1990-11-01

    This publication is intended to provide its readers with an introduction to the issues surrounding the subject of transportation of spent nuclear fuel and high-level radioactive waste, especially as those issues impact the southern region of the United States. It was originally issued by the Southern States Energy Board (SSEB) in July 1987 as the Spent Nuclear Fuel and High-Level Radioactive Waste Transportation Primer, a document patterned on work performed by the Western Interstate Energy Board and designed as a ''comprehensive overview of the issues.'' This work differs from that earlier effort in that it is designed for the educated layman with little or no background in nuclear waste issues. In addition, this document is not a comprehensive examination of nuclear waste issues but should instead serve as a general introduction to the subject. Owing to changes in the nuclear waste management system, program activities by the US Department of Energy and other federal agencies, and developing technologies, much of this information quickly becomes dated. While this report uses the most recent data available, readers should keep in mind that some of the material is subject to rapid change. SSEB plans periodic updates in the future to account for changes in the program. Replacement pages will be supplied to all parties in receipt of this publication provided they remain on the SSEB mailing list

  14. Managing the nation's commercial high-level radioactive waste

    International Nuclear Information System (INIS)

    Anon.

    1985-01-01

    This study presents the findings and conclusions of OTA's analysis of Federal policy for the management of commercial high-level radioactive waste. Broad in scope and balanced in approach, its coverage extends from technological and organizational questions to political ramifications...the environmental impact of building repositories...and even dealing with Indian tribes affected by repository site selection and development. Emphasis is on workable strategies for implementing the National Waste Policy Act of 1982, including a mission plan for the program...a monitored retrievable storage proposal...and a report on mechanisms for financing and managing the program. Nine appendicies are included. They furnish additional data on such topics as policymaking, history, and the system issues resolved in NWPA

  15. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  16. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    Science.gov (United States)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever.
In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less
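
    The Normalize-Transpose idea can be illustrated with a toy Python wrapper that lifts a scalar function over (nested) sequences, the structure from which a compiler can extract data parallelism. This sketch is an assumption-laden analogy, not SequenceL's actual semantics or implementation.

```python
# Toy NT analogy: a function defined on scalars is applied automatically
# over nested sequences, and each elementwise application is an independent
# unit that a compiler could schedule across cores.

def nt(f):
    def apply(x):
        if isinstance(x, list):
            return [apply(e) for e in x]  # independent, parallelizable maps
        return f(x)
    return apply

square = nt(lambda v: v * v)
assert square(3) == 9                                 # scalar use
assert square([1, 2, 3]) == [1, 4, 9]                 # lifted over a sequence
assert square([[1, 2], [3, 4]]) == [[1, 4], [9, 16]]  # and nested sequences
```

    The point of the analogy is that the programmer writes only the scalar case; the lifting, and therefore the available parallelism, follows mechanically from the data's shape.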

  17. Final technical position on documentation of computer codes for high-level waste management

    International Nuclear Information System (INIS)

    Silling, S.A.

    1983-06-01

    Guidance is given for the content of documentation of computer codes which are used in support of a license application for high-level waste disposal. The guidelines cover theoretical basis, programming, and instructions for use of the code

  18. Embedding Topical Elements of Parallel Programming, Computer Graphics, and Artificial Intelligence across the Undergraduate CS Required Courses

    Directory of Open Access Journals (Sweden)

    James Wolfer

    2015-02-01

    Traditionally, topics such as parallel computing, computer graphics, and artificial intelligence have been taught as stand-alone courses in the computing curriculum. Often these are elective courses, limiting the material to the subset of students choosing to take the course. Recently there has been movement to distribute topics across the curriculum in order to ensure that all graduates have been exposed to concepts such as parallel computing. Previous work described an attempt to systematically weave a tapestry of topics into the undergraduate computing curriculum. This paper reviews that work and expands it with representative examples of assignments, demonstrations, and results, as well as describing how the tools and examples deployed for these classes have a residual effect on classes such as Computer Literacy.

  19. High-level waste characterization at West Valley: Progress report for the period 1982-1985

    International Nuclear Information System (INIS)

    Rykken, L.E.

    1986-01-01

    This is a report on the work that was carried out at West Valley under the Waste Characterization Program. This Program covered a number of tasks in support of the design of facilities for the pretreatment and final encapsulation of the high-level waste stored at West Valley. In particular, the necessary physical, chemical, and radiological characterization of high-level reprocessing waste stored in two vaulted underground tanks was carried out over the period 1982 to 1985. 21 refs., 77 figs., 28 tabs

  20. Managing commercial high-level radioactive waste

    International Nuclear Information System (INIS)

    1983-01-01

    The article is a summary of issues raised during US Congress deliberations on nuclear waste policy legislation. It is suggested that, if history is not to repeat itself, and the current stalemate on nuclear waste is not to continue, a comprehensive policy is needed that addresses the near-term problems of interim storage as part of an explicit and credible program for dealing with the longer term problem of developing a final isolation system. Such a policy must: 1) adequately address the concerns and win the support of all the major interested parties, and 2) adopt a conservative technical and institutional approach - one that places high priority on avoiding the problems that have repeatedly beset the program in the past. It is concluded that a broadly supported comprehensive policy would contain three major elements, each designed to address one of the key questions concerning Federal credibility: commitment in law to the goals of a comprehensive policy; credible institutional mechanisms for meeting goals; and credible measures for addressing the specific concerns of the states and the various publics. Such a policy is described in detail. (Auth.)

  1. Conceptual design of the Virtual Engineering System for High Level Radioactive Waste Geological Disposal

    International Nuclear Information System (INIS)

    1999-06-01

    The Virtual Engineering System for the High Level Radioactive Waste Geological Disposal (hereafter the VE) adopts computer science technologies such as advanced numerical simulation (with special emphasis on computer graphics), massively parallel computing, high-speed networking, knowledge engineering, and database technology to construct virtually, in cyberspace, the natural environment and part of the social environment of a disposal site, with realization of the disposal OS as its final target. The principle of the VE is to provide a firm business standpoint after The 2000 Report by JNC and to supply a decision support system that promotes the various evaluations needed between the year 2000 and the licensing application for disposal to the government. The VE conceptual design was performed in 1998. The functions of the VE are derived from an analysis of the work scope of the implementing organization in each step of geological waste disposal: the VE functions comprise safety performance assessment, individual process analysis, facility design, cost evaluation, site surveillance, research and development, and public acceptance. These functions are materialized by integrating individual systems such as a geology database, groundwater database, safety performance assessment system, coupled phenomena analysis system, decision support system, cost evaluation system, and public acceptance system. The method for integrating these systems was studied. The concept of integrating simulators has also been studied from the viewpoint of the CAPASA program. Parallel computing, networking, and computer graphics for high-speed massive scientific calculation were studied in detail as the element technologies needed to achieve the VE. Based on the studies stated above, the concept of the waste disposal project and the subjects arising between 1999 and the licensing application were decided. (author)

  2. The Software Architecture of the LHCb High Level Trigger

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The LHCb experiment is a spectrometer dedicated to the study of heavy flavor at the LHC. The rate of proton-proton collisions at the LHC is 15 MHz, but disk space limitations mean that only 3 kHz can be written to tape for offline processing. For this reason the LHCb data acquisition system -- trigger -- plays a key role in selecting signal events and rejecting background. In contrast to previous experiments at hadron colliders like for example CDF or D0, the bulk of the LHCb trigger is implemented in software and deployed on a farm of 20k parallel processing nodes. This system, called the High Level Trigger (HLT) is responsible for reducing the rate from the maximum at which the detector can be read out, 1.1 MHz, to the 3 kHz which can be processed offline,and has 20 ms in which to process and accept/reject each event. In order to minimize systematic uncertainties, the HLT was designed from the outset to reuse the offline reconstruction and selection code, and is based around multiple independent and redunda...

  3. Conceptual design of the virtual engineering system for high level radioactive waste geological disposal

    International Nuclear Information System (INIS)

    2000-02-01

    The role of the Virtual Engineering System for High Level Radioactive Waste Geological Disposal (hereafter the VES) is to accumulate and unify, on a computer system, the results of the research and development that JNC carried out for the completion of the second progress report. The purpose and functions of the VES were studied with consideration of the long-term plan for geological disposal in Japan. In conventional studies, the analyses of geological environment assessment, safety performance assessment, and engineering technology had not been mutually integrated. The iterative analysis performed by the VES makes it possible to analyze the natural and engineered barriers more quantitatively, in order to obtain safety margins and rationalize the design of a waste repository. We examined the system functions needed to achieve this purpose of the VES. Next, conceptual designs for the codes, databases, and utilities that constitute the VES were performed by examining their purposes and functions. Conceptual designs of the geological environment assessment system, safety performance assessment system, waste repository element database, economical assessment system, investigation support system, quality assurance system, and visualization system were performed. The whole system configuration, examination of suitable hardware and software configurations, examination of system implementation, confirmation of parallel calculation technology, conceptual design of the platform, and development of a demonstration program for the platform were also performed. Based upon the studies stated above, a VES development plan, including prototype development during the period of selection of the candidate site, was studied. The concept of the VES was built based on the examinations stated above. (author)

  4. On risk assessment of high level radioactive waste disposal

    International Nuclear Information System (INIS)

    Smith, C.F.; Kastenberg, W.E.

    1976-01-01

    One of the major concerns with the continued growth of the nuclear power industry is the production of high-level radioactive wastes. The risks associated with the disposal of these wastes derive from the potential for release of radioactive materials into the environment. The development of a methodology for risk analysis is carried out. The methodology suggested involves the probabilistic analysis of a general accident consequence distribution. In this analysis, the frequency aspect of the distribution is treated separately from the normalized probability function. In the final stage of the analysis, the frequency and probability characteristics of the distribution are recombined to provide an estimate of the risk. The characterization of the radioactive source term is accomplished using the ORIGEN computer code. Calculations are carried out for various reactor types and fuel cycles, and the overall waste hazard for a projected 35-year nuclear power program is determined. An index of relative nuclide hazard appropriate to problems involving the management of high-level radioactive wastes is developed. As an illustration of the methodology, risk analyses are made for two proposed methods of waste management: extraterrestrial disposal and interim surface storage. The results of these analyses indicate that, within the assumptions used, the risks of these management schemes are small compared with natural background radiation doses. (Auth.)
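
    The two-stage analysis described in this record, treating the event frequency separately from the normalized consequence distribution and then recombining them into a risk estimate, can be sketched numerically; the function names and all figures below are illustrative assumptions, not values from the study.

```python
# Sketch of the two-stage estimate: keep the event frequency separate from
# a normalized consequence distribution, then recombine them as
# risk = frequency * expected consequence. All numbers are illustrative.

def expected_consequence(consequences, probabilities):
    assert abs(sum(probabilities) - 1.0) < 1e-9  # normalized distribution
    return sum(c * p for c, p in zip(consequences, probabilities))

def risk(frequency_per_year, consequences, probabilities):
    return frequency_per_year * expected_consequence(consequences, probabilities)

# e.g. a hypothetical release scenario occurring at 1e-4 per year:
r = risk(1e-4,
         consequences=[0.0, 10.0, 100.0],      # consequence magnitudes
         probabilities=[0.9, 0.09, 0.01])      # normalized probabilities
# Expected consequence is 1.9, so the risk estimate is 1.9e-4 per year.
```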

  5. High level waste forms: glass marbles and thermal spray coatings

    International Nuclear Information System (INIS)

    Treat, R.L.; Oma, K.H.; Slate, S.C.

    1982-01-01

    A process that converts high-level waste to glass marbles and then coats the marbles has been developed at Pacific Northwest Laboratory (PNL) under sponsorship of the US Department of Energy. The process consists of a joule-heated glass melter, a marble-making device based on a patent issued to Corning Glass Works, and a coating system that includes a plasma spray coater and a marble tumbler. The process was developed under the Alternative Waste Forms Program, which strove to improve upon monolithic glass for immobilizing high-level wastes. Coated glass marbles were found to be more leach-resistant, and the marbles, before coating, were found to be very homogeneous, highly impact resistant, and conducive to encapsulation in a metal matrix for improved heat transfer and containment. Marbles are also ideally suited for quality assurance and recycling. However, the marble process is more complex, and marbles require a larger number of canisters for waste containment and have a higher surface area than do glass monoliths

  6. Risk assessments for the disposal of high level radioactive wastes

    International Nuclear Information System (INIS)

    Smith, C.F.

    1975-01-01

    The risks associated with the disposal of high level wastes derive from the potential for release of radioactive materials into the environment. The assessment of these risks requires a methodology for risk analysis, an identification of the radioactive sources, and a method by which to express the relative hazard of the various radionuclides that comprise the high level waste. The development of a methodology for risk analysis is carried out after a review of previous work in the area of probabilistic risk assessment. The methodology suggested involves the probabilistic analysis of a general accident consequence distribution. In this analysis, the frequency aspect of the distribution is treated separately from the normalized probability function. At the final stage of the analysis, the frequency and probability characteristics of the distribution are recombined to provide an estimate of the risk. The characterization of the radioactive source term is accomplished using the ORIGEN computer code. Calculations are carried out for various reactor types and fuel cycles, and the overall waste hazard for a projected thirty-five year nuclear power program is determined

  7. High-Level Waste Systems Plan. Revision 7 (U)

    International Nuclear Information System (INIS)

    Brooke, J.N.; Gregory, M.V.; Paul, P.; Taylor, G.; Wise, F.E.; Davis, N.R.; Wells, M.N.

    1996-10-01

    This revision of the High-Level Waste (HLW) System Plan aligns SRS HLW program planning with the DOE Savannah River (DOE-SR) Ten Year Plan (QC-96-0005, Draft 8/6), which was issued in July 1996. The objective of the Ten Year Plan is to complete cleanup at most nuclear sites within the next ten years. The two key principles of the Ten Year Plan are to accelerate the reduction of the most urgent risks to human health and the environment and to reduce mortgage costs. Accordingly, this System Plan describes the HLW program that will remove HLW from all 24 old-style tanks, and close 20 of those tanks, by 2006, with vitrification of all HLW by 2018. To achieve these goals, the DWPF canister production rate is projected to climb to 300 canisters per year starting in FY06 and remain at that rate through the end of the program in FY18. (Compare that to past System Plans, in which DWPF production peaked at 200 canisters per year and the program did not complete until 2026.) An additional $247M (FY98 dollars) must be made available as requested over the ten-year planning period, including a one-time $10M to enhance Late Wash attainment. If appropriate resources are made available, facility attainment issues are resolved, and regulatory support is sufficient, then completion of the HLW program in 2018 would achieve a $3.3 billion cost savings to DOE, versus the cost of completing the program in 2026. Facility status information is current as of October 31, 1996

  8. Separation processes for high-level radioactive waste treatment

    International Nuclear Information System (INIS)

    Sutherland, D.G.

    1992-11-01

    During World War II production of nuclear materials in the United States for national defense, high-level waste (HLW) was generated as a byproduct. Since that time, further quantities of HLW radionuclides have been generated by continued nuclear materials production, research, and the commercial nuclear power program. In this paper HLW is defined as the highly radioactive material resulting from the processing of spent nuclear fuel. The HLW is the liquid waste generated during the recovery of uranium and plutonium in a fuel processing plant and generally contains more than 99% of the nonvolatile fission products produced during reactor operation. Since this paper deals with waste separation processes, spent reactor fuel elements that have not been dissolved and further processed are excluded

  9. FADO 2.0: A high level tagging language

    Energy Technology Data Exchange (ETDEWEB)

    Werner, C.M.L. (European Organization for Nuclear Research, Geneva (Switzerland). EP-Div.); Pimenta, M.; Varela, J. (LIP, Lisbon (Portugal)); Souza, J. (Rio de Janeiro Univ. (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia)

    1989-12-01

    FADO 2.0 is a high level language, developed in the context of the 4th level trigger of the DELPHI data acquisition project at CERN, that provides a simple and concise way to define physics criteria for event tagging. Its syntax is based on mathematical logic and set theory, as this was found to be the most appropriate framework for describing the properties of single HEP events. The language is one of the components of the FADO tagging system. The system also implicitly implements a mechanism to selectively reconstruct the event data that are needed to fulfil the physics criteria, following the speed requirements of the online data-acquisition system. A complete programming environment is now under development, which will include a syntax-directed editor, an incremental compiler, a debugger and a configurer. This last tool can be used to transport the system into the context of other HEP applications, namely offline event selection and filtering. (orig.).
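    The set-theoretic view of tagging described above can be suggested with a small sketch (plain Python, not FADO syntax; the event fields and cut values are invented for illustration): each physics criterion is a predicate over event properties, and a tag is the subset of events satisfying a logical combination of criteria.

```python
# Hypothetical reconstructed events, each a dict of event quantities.
events = [
    {"id": 1, "n_tracks": 4, "total_energy": 85.0},
    {"id": 2, "n_tracks": 2, "total_energy": 30.0},
    {"id": 3, "n_tracks": 6, "total_energy": 91.0},
]

# Physics criteria expressed as predicates (mathematical-logic style).
def is_multihadronic(e):
    return e["n_tracks"] >= 4

def is_high_energy(e):
    return e["total_energy"] > 80.0

# The tag is the set of event ids satisfying the conjunction of criteria.
tagged = {e["id"] for e in events if is_multihadronic(e) and is_high_energy(e)}
print(sorted(tagged))  # → [1, 3]
```

    In a real tagging system the predicates would be compiled so that only the event data needed to evaluate them are reconstructed, as the abstract notes.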

  10. Electropolishing decontamination system for high-level waste canisters

    International Nuclear Information System (INIS)

    Larson, D.E.; Berger, D.N.; Allen, R.P.; Bryan, G.H.; Place, B.G.

    1988-10-01

    As part of a US Department of Energy (DOE) project agreement with the Federal Ministry for Research and Technology (BMFT) in the Federal Republic of Germany (FRG), the Nuclear Waste Treatment Program at the Pacific Northwest Laboratory (PNL) is preparing 30 radioactive canisters containing borosilicate glass for use in high-level waste repository related tests at the Asse Salt Mine. After filling, the canisters will be welded closed and decontaminated in preparation for shipping to the FRG. Electropolishing was selected as the primary decontamination approach, and an electropolishing system with associated canister inspection equipment has been designed and fabricated for installation in a large hot cell. This remote electropolishing system, which is currently undergoing preliminary testing, is described in this report. 3 refs., 3 figs., 1 tab

  11. A high-level language for rule-based modelling.

    Science.gov (United States)

    Pedersen, Michael; Phillips, Andrew; Plotkin, Gordon D

    2015-01-01

    Rule-based languages such as Kappa excel in their support for handling the combinatorial complexities prevalent in many biological systems, including signalling pathways. But Kappa provides little structure for organising rules, and large models can therefore be hard to read and maintain. This paper introduces a high-level, modular extension of Kappa called LBS-κ. We demonstrate the constructs of the language through examples and three case studies: a chemotaxis switch ring, a MAPK cascade, and an insulin signalling pathway. We then provide a formal definition of LBS-κ through an abstract syntax and a translation to plain Kappa. The translation is implemented in a compiler tool which is available as a web application. We finally demonstrate how to increase the expressivity of LBS-κ through embedded scripts in a general-purpose programming language, a technique which we view as generally applicable to other domain specific languages.
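    For readers unfamiliar with rule-based modelling, the core idea can be sketched in a few lines (plain Python, not Kappa or LBS-κ syntax; the species names and the rule are invented): a rule rewrites a multiset of molecular species wherever its left-hand side matches.

```python
from collections import Counter

def apply_rule(state, lhs, rhs):
    """Apply the rule lhs -> rhs once, if all reactants are present."""
    if all(state[s] >= n for s, n in lhs.items()):
        for s, n in lhs.items():
            state[s] -= n
        for s, n in rhs.items():
            state[s] += n
        return True
    return False

# Initial mixture and a single binding rule: A + B -> AB.
state = Counter({"A": 2, "B": 1, "AB": 0})
binding = ({"A": 1, "B": 1}, {"AB": 1})

# Apply the rule exhaustively (a stochastic simulator would instead
# pick among applicable rules with rate-weighted probabilities).
while apply_rule(state, *binding):
    pass
print(dict(state))  # → {'A': 1, 'B': 0, 'AB': 1}
```

    Languages like Kappa add site graphs and binding states on top of this multiset-rewriting core; LBS-κ then layers modules and parameterization over the rules.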

  12. Coupled processes in NRC high-level waste research

    International Nuclear Information System (INIS)

    Costanzi, F.A.

    1987-01-01

    The author discusses the NRC research effort in support of evaluating license applications for disposal of nuclear waste and of promulgating regulations and issuing guidance documents on nuclear waste management. To this end, NRC funds research activities at a number of laboratories, academic institutions, and commercial organizations. One of these research efforts is the coupled processes study. This paper discusses the interest in coupled processes and describes the target areas of research efforts over the next few years. The specific research activities relate to the performance objectives of NRC's high-level waste (HLW) regulation and the U.S. Environmental Protection Agency (EPA) HLW standard. The general objective of the research program is to ensure that NRC has a sufficient independent technical base to make sound regulatory decisions

  13. Managing the nation's commercial high-level radioactive waste

    International Nuclear Information System (INIS)

    Cotton, T.

    1985-01-01

    With the passage of the Nuclear Waste Policy Act of 1982 (NWPA), Congress for the first time established in law a comprehensive Federal policy for commercial high-level radioactive waste management, including interim storage and permanent disposal. NWPA provides sufficient authority for developing and operating a high-level radioactive waste management system based on disposal in mined geologic repositories. Authorization for other types of waste facilities will not be required unless major problems with geologic disposal are discovered, and studies to date have identified no insurmountable technical obstacles to developing geologic repositories. The NWPA requires the Department of Energy (DOE) to submit to Congress three key documents: (1) a Mission Plan, containing both a waste management plan with a schedule for transferring waste to Federal facilities and an implementation program for choosing sites and developing technologies to carry out that plan; (2) a monitored retrievable storage (MRS) proposal, to include a site-specific design for a long-term federal storage facility, an evaluation of whether such an MRS facility is needed and feasible, and an analysis of how an MRS facility would be integrated with the repository program if authorized by Congress; and (3) a study of alternative institutional mechanisms for financing and managing the radioactive waste system, including the option of establishing an independent waste management organization outside of DOE. The Mission Plan and the report on alternative institutional mechanisms were submitted to the 99th US Congress in 1985. The MRS proposal is to be submitted in early 1986. Each of these documents is discussed following an overview of the Nuclear Waste Policy Act of 1982

  14. High-level radioactive waste in Canada. Background paper

    International Nuclear Information System (INIS)

    Fawcett, R.

    1993-11-01

    The disposal of radioactive waste is one of the most challenging environmental problems facing Canada today. Since the Second World War, when Canadian scientists first started to investigate nuclear reactions, there has been a steady accumulation of such waste. Research reactors built in the early postwar years produced small amounts of radioactive material but the volume grew steadily as the nuclear power reactors constructed during the 1960s and 1970s began to spawn used fuel bundles. Although this radioactive refuse has been safely stored for the short term, no permanent disposal system has yet been fully developed and implemented. Canada is not alone in this regard. A large number of countries use nuclear power reactors but none has yet put in place a method for the long-term disposal of the radioactive waste. Scientists and engineers throughout the world are investigating different possibilities; however, enormous difficulties remain. In Canada, used fuel bundles from nuclear reactors are defined as high-level waste; all other waste created at different stages in the nuclear fuel cycle is classified as low-level. Although disposal of low-level waste is an important issue, it is a more tractable problem than the disposal of high-level waste, on which this paper will concentrate. The paper discusses the nuclear fuel waste management program in Canada, where a long-term disposal plan has been under development by scientists and engineers over the past 15 years, but will not be completed for some time. Also discussed are responses to the program by parliamentary committees and aboriginal and environmental groups, and the work in the area being conducted in other countries. (author). 1 tab

  15. High-level radioactive waste in Canada. Background paper

    Energy Technology Data Exchange (ETDEWEB)

    Fawcett, R [Library of Parliament, Ottawa, ON (Canada). Science and Technology Div.

    1993-11-01

    The disposal of radioactive waste is one of the most challenging environmental problems facing Canada today. Since the Second World War, when Canadian scientists first started to investigate nuclear reactions, there has been a steady accumulation of such waste. Research reactors built in the early postwar years produced small amounts of radioactive material but the volume grew steadily as the nuclear power reactors constructed during the 1960s and 1970s began to spawn used fuel bundles. Although this radioactive refuse has been safely stored for the short term, no permanent disposal system has yet been fully developed and implemented. Canada is not alone in this regard. A large number of countries use nuclear power reactors but none has yet put in place a method for the long-term disposal of the radioactive waste. Scientists and engineers throughout the world are investigating different possibilities; however, enormous difficulties remain. In Canada, used fuel bundles from nuclear reactors are defined as high-level waste; all other waste created at different stages in the nuclear fuel cycle is classified as low-level. Although disposal of low-level waste is an important issue, it is a more tractable problem than the disposal of high-level waste, on which this paper will concentrate. The paper discusses the nuclear fuel waste management program in Canada, where a long-term disposal plan has been under development by scientists and engineers over the past 15 years, but will not be completed for some time. Also discussed are responses to the program by parliamentary committees and aboriginal and environmental groups, and the work in the area being conducted in other countries. (author). 1 tab.

  16. Immobilization of defense high-level waste: an assessment of technological strategies and potential regulatory goals. Volume I

    International Nuclear Information System (INIS)

    1979-06-01

    An investigation was made of the high-level radioactive waste immobilization technology programs in the U.S. and Europe, and of the associated regulatory programs and waste management perspectives in the countries studied. The purpose was to assess the ability of those programs to satisfy DOE waste management needs and U.S. regulatory requirements. This volume includes: introduction, immobilization strategies in the context of waste isolation program needs, high-level waste management as an integrated system, regulatory goals, engineered-barrier characteristics, barrier technology, high-level waste disposal programs, analysis of HLW immobilization technology in the context of policy and regulatory requirements, and waste immobilization program options

  17. High-level radioactive waste management in the United States. Background and status: 1996

    International Nuclear Information System (INIS)

    Dyer, J.R.

    1996-01-01

    The US high-level radioactive waste disposal program is investigating a site at Yucca Mountain, Nevada, to determine whether or not it is a suitable location for the development of a deep mined geologic repository. At this time, the US program is investigating a single site, although in the past, the program involved successive screening and comparison of alternate locations. The United States civilian reactor programs do not reprocess spent fuel; the high-level waste repository will be designed for the emplacement of spent fuel and a limited amount of vitrified high-level wastes from previous reprocessing in the US. The legislation enabling the US program also contains provisions for a Monitored Retrievable Storage facility, which could provide temporary storage of spent fuel accepted for disposal, and improve the flexibility of the repository development schedule

  18. Heat transfer in high-level waste management

    International Nuclear Information System (INIS)

    Dickey, B.R.; Hogg, G.W.

    1979-01-01

    Heat transfer in the storage of high-level liquid wastes, calcining of radioactive wastes, and storage of solidified wastes are discussed. Processing and storage experience at the Idaho Chemical Processing Plant are summarized for defense high-level wastes; heat transfer in power reactor high-level waste processing and storage is also discussed

  19. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated, and the parallel algorithm and the parallelization of programs on parallel computers, both with shared memory and with distributed memory, are discussed. By analyzing the inherent laws of the mathematical and physical model of photon transport in light of the structure of parallel computers, using a divide-and-conquer strategy, adjusting the algorithm structure of the program, dissolving data dependencies, finding parallelizable components, and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various high-performance parallel computers, such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup was obtained
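    The large-grain decomposition described above can be mimicked in a short sketch (illustrative only; the slab geometry and attenuation model are invented, and a thread pool stands in for the parallel machines named in the abstract): photon histories are split into independent chunks whose results are reduced at the end.

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def transmitted(args):
    """Count photons whose sampled free path exceeds the slab depth."""
    n, mu, depth, seed = args
    rng = random.Random(seed)
    # Free path sampled from the exponential distribution with rate mu;
    # 1 - U keeps the argument of log strictly positive.
    return sum(1 for _ in range(n)
               if -math.log(1.0 - rng.random()) / mu > depth)

# Four large-grain subtasks, each an independent batch of histories.
chunks = [(25_000, 1.0, 2.0, seed) for seed in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(transmitted, chunks))

frac = total / 100_000   # should approach exp(-2) ≈ 0.135
print(frac)
```

    Because the chunks share no data, the same decomposition maps directly onto shared-memory threads, message-passing processes, or vector batches.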

  20. Design and Programming for Cable-Driven Parallel Robots in the German Pavilion at the EXPO 2015

    Directory of Open Access Journals (Sweden)

    Philipp Tempel

    2015-08-01

    Full Text Available In the German Pavilion at the EXPO 2015, two large cable-driven parallel robots are flying over the heads of the visitors representing two bees flying over Germany and displaying everyday life in Germany. Each robot consists of a mobile platform and eight cables suspended by winches and follows a desired trajectory, which needs to be computed in advance taking technical limitations, safety considerations and visual aspects into account. In this paper, a path planning software is presented, which includes the design process from developing a robot design and workspace estimation via planning complex trajectories considering technical limitations through to exporting a complete show. For a test trajectory, simulation results are given, which display the relevant trajectories and cable force distributions.
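    A first step in such path planning is the inverse kinematics of the platform, which for cable robots reduces to computing each cable length from its winch anchor and the platform position. A minimal sketch follows (the anchor coordinates and the point-mass simplification are assumptions, not the EXPO system's geometry, which also accounts for platform orientation and cable force distributions):

```python
import math

# Hypothetical winch anchor positions (metres), four cables for brevity.
anchors = [(0, 0, 10), (10, 0, 10), (0, 10, 10), (10, 10, 10)]

def cable_lengths(platform_pos):
    """Cable length = distance from each anchor to the platform point."""
    return [math.dist(platform_pos, a) for a in anchors]

lengths = cable_lengths((5, 5, 5))
print([round(L, 3) for L in lengths])  # all four equal by symmetry
```

    A trajectory planner evaluates this map along the desired path and checks that winch speed and cable tension limits are respected at every sample.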

  1. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, and network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  2. The structural integrity of high level waste containers for deep disposal

    International Nuclear Information System (INIS)

    Keer, T.J.; Martindale, N.J.; Haijtink, B.

    1990-01-01

    Most countries with a nuclear power program are developing plans to dispose of high level waste in deep geological repositories. These facilities are typically 500-1000 m below ground. Although long term safety analyses mainly rely on the isolation function of the geological barrier, for the medium term (between 500 and 1000 years) a barrier such as a container (overpack) may play an important role. This paper addresses the mechanical/structural behavior of these structures under extreme geological pressures. The work described in the paper was conducted within the COMPAS project (Container Mechanical Performance Assessment) funded by the Commission of the European Communities and the United Kingdom Department of the Environment. The work was aimed at predicting the modes of failure and failure pressures which characterize the heavy, thick walled mild steel containers which might be considered for the disposal of vitrified waste. The project involved considerable analytical effort, using 3-D non-linear finite element techniques, coupled with a large parallel program of experimental work. The experimental work consisted of a number of scale model tests in which the response of the containers was examined under external pressures as high as 120 MPa. Extensive strain-gauge instrumentation was used to record the behavior of the models as they were driven to collapse. A number of comparative computer calculations were carried out by organizations from various European countries. Correlations were established between experimental and analytical data, and guidelines regarding the choice of suitable software were developed. The work concluded with a full 3-D simulation of the behavior of a container under long-term disposal conditions. In this analysis, non-linearities due to geological effects and material/geometry effects in the container were properly accounted for. 6 refs., 9 figs., 4 tabs

  3. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.
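    The level-synchronous paradigm named in the abstract can be sketched as follows (plain Python, executed sequentially; in stapl the per-frontier loop would run in parallel across the distributed graph container):

```python
def bfs_levels(adj, source):
    """Return the BFS level of every vertex reachable from source."""
    level = {source: 0}
    frontier = {source}
    depth = 0
    while frontier:
        depth += 1
        next_frontier = set()
        for u in frontier:                 # conceptually a parallel loop
            for v in adj[u]:
                if v not in level:
                    level[v] = depth
                    next_frontier.add(v)
        frontier = next_frontier           # implicit barrier between levels
    return level

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_levels(adj, 0))  # → {0: 0, 1: 1, 2: 1, 3: 2}
```

    The asynchronous paradigm relaxes the per-level barrier, and the coarse-grained paradigm batches work per partition; the frontier structure above is the level-synchronous special case.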

  4. Final Report, Center for Programming Models for Scalable Parallel Computing: Co-Array Fortran, Grant Number DE-FC02-01ER25505

    Energy Technology Data Exchange (ETDEWEB)

    Robert W. Numrich

    2008-04-22

    The major accomplishment of this project is the production of CafLib, an 'object-oriented' parallel numerical library written in Co-Array Fortran. CafLib contains distributed objects such as block vectors and block matrices along with procedures, attached to each object, that perform basic linear algebra operations such as matrix multiplication, matrix transpose and LU decomposition. It also contains constructors and destructors for each object that hide the details of data decomposition from the programmer, and it contains collective operations that allow the programmer to calculate global reductions, such as global sums, global minima and global maxima, as well as vector and matrix norms of several kinds. CafLib is designed to be extensible in such a way that programmers can define distributed grid and field objects, based on vector and matrix objects from the library, for finite difference algorithms to solve partial differential equations. A very important extra benefit that resulted from the project is the inclusion of the co-array programming model in the next Fortran standard, called Fortran 2008. It is the first parallel programming model ever included as a standard part of the language. Co-arrays will be a supported feature in all Fortran compilers, and the portability provided by standardization will encourage a large number of programmers to adopt it for new parallel application development. The combination of object-oriented programming in Fortran 2003 with co-arrays in Fortran 2008 provides a very powerful programming model for high-performance scientific computing. Additional benefits from the project, beyond the original goal, include a program to provide access to the co-array model through the Cray compiler as a resource for teaching and research. Several academics, for the first time, included the co-array model as a topic in their courses on parallel computing. A separate collaborative project with LANL and PNNL showed how to
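    The flavour of CafLib's distributed objects and collective reductions can be suggested in a sketch (Python, not Co-Array Fortran; the class and method names are invented): each "image" owns one block of a vector, and a global sum combines the per-block partial results.

```python
class BlockVector:
    """A vector split into contiguous blocks, one per 'image'."""

    def __init__(self, data, n_images):
        # Block decomposition: the first r images get one extra element.
        k, r = divmod(len(data), n_images)
        self.blocks, start = [], 0
        for i in range(n_images):
            end = start + k + (1 if i < r else 0)
            self.blocks.append(data[start:end])  # block owned by image i
            start = end

    def global_sum(self):
        # Each image computes a local sum; the collective combines them.
        return sum(sum(block) for block in self.blocks)

v = BlockVector(list(range(10)), n_images=3)
print([len(b) for b in v.blocks])  # → [4, 3, 3]
print(v.global_sum())              # → 45
```

    The point, as in CafLib, is that the decomposition lives inside the object, so user code calls `global_sum` without knowing which image owns which elements.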

  5. Managing commercial high-level radioactive waste: summary

    International Nuclear Information System (INIS)

    1982-04-01

    This summary presents the findings and conclusions of OTA's analysis of Federal policy for the management of commercial high-level radioactive waste - an issue that has been debated over the last decade and that now appears to be moving toward major congressional action. After more than 20 years of commercial nuclear power, the Federal Government has yet to develop a broadly supported policy for fulfilling its legal responsibility for the final isolation of high-level radioactive waste. OTA's study concludes that until such a policy is adopted in law, there is a substantial risk that the false starts, shifts of policy, and fluctuating support that have plagued the final isolation program in the past will continue. The continued lack of final isolation facilities has raised two key problems that underlie debates about radioactive waste policy. First, some question the continued use of nuclear power until it is shown that safe final isolation for the resulting wastes can and will be accomplished, and argue that the failure to develop final isolation facilities is evidence that it may be an insoluble problem. Second, because there are no reprocessing facilities or federal waste isolation facilities to accept spent fuel, existing reactors are running out of spent fuel storage space, and by 1986 some may face a risk of shutting down for some period. Most of the 72,000 metric tons of spent fuel expected to be generated by the year 2000 will still be in temporary storage at that time. While it is possible that utilities could provide all necessary additional storage at reactor sites before existing basins are filled, some supplemental storage may be needed if there are delays in their efforts

  6. Immobilized high-level waste interim storage alternatives generation and analysis and decision report

    International Nuclear Information System (INIS)

    CALMUS, R.B.

    1999-01-01

    This report presents a study of alternative system architectures to provide onsite interim storage for the immobilized high-level waste produced by the Tank Waste Remediation System (TWRS) privatization vendor. It examines the contract and program changes that have occurred and evaluates their impacts on the baseline immobilized high-level waste (IHLW) interim storage strategy. In addition, this report documents the recommended initial interim storage architecture and implementation path forward

  7. Laboratory-scale vitrification and leaching of Hanford high-level waste for the purpose of simulant and glass property models validation

    International Nuclear Information System (INIS)

    Morrey, E.V.; Elliott, M.L.; Tingey, J.M.

    1993-02-01

    The Hanford Waste Vitrification Plant (HWVP) is being built to process the high-level and TRU waste into canistered glass logs for disposal in a national repository. Testing programs have been established within the Project to verify process technology using simulated waste. A parallel testing program with actual radioactive waste is being performed to confirm the validity of using simulants and glass property models for waste form qualification and process testing. The first feed type to be processed, and the first to be tested on a laboratory scale, is pretreated neutralized current acid waste (NCAW). The NCAW is a neutralized high-level waste stream generated from the reprocessing of irradiated nuclear fuel in the Plutonium and Uranium Extraction (PUREX) Plant at Hanford. As part of the fuel reprocessing, the high-level waste generated in PUREX was denitrated with sugar to form current acid waste (CAW). Sodium hydroxide and sodium nitrite were added to the CAW to minimize corrosion in the tanks, thus yielding neutralized CAW. The NCAW contains small amounts of plutonium, fission products from the irradiated fuel, stainless steel corrosion products, and iron and sulfate from the ferrous sulfamate reductant used in the PUREX process. This paper will discuss the results and status of the laboratory-scale radioactive testing

  8. Plan of deep underground construction for investigations on high-level radioactive waste storage

    International Nuclear Information System (INIS)

    Mayanovskij, M.S.

    1996-01-01

    The program of studies of the Japanese PNC corporation on construction of deep underground storage for high-level radioactive wastes is presented. The program is intended for 20 years. The total construction costs equal about 20 billion yen. The total cost of the project is equal to 60 billion yen. The underground part is planned to reach 1000 m depth

  9. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  10. Thermo-aeraulics of high level waste storage facilities

    International Nuclear Information System (INIS)

    Lagrave, Herve; Gaillard, Jean-Philippe; Laurent, Franck; Ranc, Guillaume; Duret, Bernard

    2006-01-01

    This paper discusses the research undertaken in response to axis 3 of the 1991 radioactive waste management act, and possible solutions concerning the processes under consideration for conditioning and long-term interim storage of long-lived radioactive waste. The notion of 'long-term' is evaluated with respect to the usual operating lifetime of a basic nuclear installation, about 50 years. In this context, 'long-term' is defined on a secular time scale: the lifetime of the facility could be as long as 300 years. The waste package taken into account is characterized notably by its high thermal power release. Studies were carried out in dedicated facilities for vitrified waste and for spent UOX and MOX fuel. The latter are not considered as wastes, owing to the value of the reusable material they contain. Three primary objectives have guided the design of these long-term interim storage facilities: - ensure radionuclide containment at all times; - permit retrieval of the containers at any time; - minimize surveillance and maintenance costs. The CEA has also investigated surface and subsurface facilities. It was decided to work on generic sites with a reasonable set of parameter values that should be applicable at most sites in France. All the studies and demonstrations to date lead to the conclusion that long-term interim storage is technically feasible. The paper addresses the following items: - Long-term interim storage concepts for high-level waste; - Design principles and options for the interim storage facilities; - General architecture; - Research topics, Storage facility ventilation, Dimensioning of the facility; - Thermo-aeraulics of a surface interim storage facility; - VALIDA surface loop, VALIDA single container test campaign, Continuation of the VALIDA program; - Thermo-aeraulics of a network of subsurface interim storage galleries; - SIGAL subsurface loop; - PROMETHEE subsurface loop; - Temperature behaviour of the concrete structures; - GALATEE

  11. Thermo-aeraulics of high level waste storage facilities

    Energy Technology Data Exchange (ETDEWEB)

    Lagrave, Herve; Gaillard, Jean-Philippe; Laurent, Franck; Ranc, Guillaume [CEA/Valrho, B.P. 17171, F-30207 Bagnols-sur-Ceze (France); Duret, Bernard [CEA Grenoble, 17 rue des Martyrs, 38054 Grenoble cedex 9 (France)

    2006-07-01

    This paper discusses the research undertaken in response to axis 3 of the 1991 radioactive waste management act, and possible solutions concerning the processes under consideration for conditioning and long-term interim storage of long-lived radioactive waste. The notion of 'long-term' is evaluated with respect to the usual operating lifetime of a basic nuclear installation, about 50 years. In this context, 'long-term' is defined on a secular time scale: the lifetime of the facility could be as long as 300 years. The waste package taken into account is characterized notably by its high thermal power release. Studies were carried out in dedicated facilities for vitrified waste and for spent UOX and MOX fuel. The latter are not considered as wastes, owing to the value of the reusable material they contain. Three primary objectives have guided the design of these long-term interim storage facilities: - ensure radionuclide containment at all times; - permit retrieval of the containers at any time; - minimize surveillance and maintenance costs. The CEA has also investigated surface and subsurface facilities. It was decided to work on generic sites with a reasonable set of parameter values that should be applicable at most sites in France. All the studies and demonstrations to date lead to the conclusion that long-term interim storage is technically feasible. The paper addresses the following items: - Long-term interim storage concepts for high-level waste; - Design principles and options for the interim storage facilities; - General architecture; - Research topics, Storage facility ventilation, Dimensioning of the facility; - Thermo-aeraulics of a surface interim storage facility; - VALIDA surface loop, VALIDA single container test campaign, Continuation of the VALIDA program; - Thermo-aeraulics of a network of subsurface interim storage galleries; - SIGAL subsurface loop; - PROMETHEE subsurface loop; - Temperature behaviour of the concrete

  12. Evaluation of radionuclide concentrations in high-level radioactive wastes

    International Nuclear Information System (INIS)

    Fehringer, D.J.

    1985-10-01

    This report describes a possible approach for development of a numerical definition of the term "high-level radioactive waste." Five wastes are identified which are recognized as being high-level wastes under current, non-numerical definitions. The constituents of these wastes are examined and the most hazardous component radionuclides are identified. This report suggests that other wastes with similar concentrations of these radionuclides could also be defined as high-level wastes. 15 refs., 9 figs., 4 tabs

  13. Overview of the Force Scientific Parallel Language

    Directory of Open Access Journals (Sweden)

    Gita Alaghband

    1994-01-01

    Full Text Available The Force parallel programming language designed for large-scale shared-memory multiprocessors is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared-memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that have been ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples illustrating parallel programming approaches using the Force are also presented.
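    The Force's signature construct is the self-scheduled parallel loop: iterations are handed out to whichever process is free, with no explicit process management by the programmer. As a rough illustration of that idea outside Fortran, here is a minimal Python sketch; the name `force_style_doall` and all details are ours, not part of the Force itself.

```python
# Illustrative sketch only (assumed names, not the Force itself): Force's
# self-scheduled parallel loop distributes iterations over workers with
# no explicit process management; Python's standard thread pool can mimic it.
from concurrent.futures import ThreadPoolExecutor

def force_style_doall(body, n_iters, n_workers=4):
    """Run body(i) for i in range(n_iters), handing iterations to
    whichever worker is free, like a self-scheduled Force loop."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        list(pool.map(body, range(n_iters)))  # drain to propagate errors

# usage: square each element of a shared array in parallel;
# each iteration writes a distinct index, so no synchronization is needed
data = list(range(8))

def body(i):
    data[i] = data[i] ** 2

force_style_doall(body, len(data))
# data is now [0, 1, 4, 9, 16, 25, 36, 49]
```

    In the real Force the equivalent construct is a Fortran macro expanded by the two-level preprocessor; the point of the sketch is only the scheduling model.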

  14. Comparisons of Energy Management Methods for a Parallel Plug-In Hybrid Electric Vehicle between the Convex Optimization and Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Renxin Xiao

    2018-01-01

    Full Text Available This paper proposes a comparison study of energy management methods for a parallel plug-in hybrid electric vehicle (PHEV). Based on detailed analysis of the vehicle driveline, quadratic convex functions are presented to describe the nonlinear relationship between engine fuel rate and battery charging power at different vehicle speeds and driveline power demands. The engine-on power threshold is estimated by the simulated annealing (SA) algorithm, and the battery power command is achieved by convex optimization with the target of improving fuel economy, compared with the dynamic programming (DP) based method and the charge-depleting/charge-sustaining (CD/CS) method. In addition, the proposed control methods are discussed at different initial battery state of charge (SOC) values to extend the application. Simulation results validate that the proposed strategy based on convex optimization can reduce both the fuel consumption and the computational burden considerably.
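    The DP baseline the paper compares against can be sketched in miniature: discretize SOC, enumerate battery power commands, and sum a quadratic engine fuel model backward in time. The fuel coefficients, battery size, and control grid below are invented for illustration; they are not the paper's vehicle data.

```python
# Toy dynamic-programming baseline for the battery power split described
# above. The quadratic fuel model, battery size, and control grid are all
# invented for illustration; they are not the paper's vehicle data.
def fuel_rate(p_dem, p_batt):
    """Engine fuel rate (g/s) when the battery supplies p_batt kW of a
    p_dem kW driveline demand; quadratic in engine power."""
    p_eng = max(p_dem - p_batt, 0.0)
    return 0.02 * p_eng ** 2 + 0.5 * p_eng + 0.1

def dp_energy_management(demand, soc0, capacity_kwh=5.0,
                         soc_min=0.3, soc_max=0.9, dt_h=0.01):
    """Backward DP over a discretized SOC grid; returns minimal fuel (g)."""
    n = int(round((soc_max - soc_min) / 0.01)) + 1
    grid = [round(soc_min + 0.01 * i, 2) for i in range(n)]
    controls = [-10.0, -5.0, 0.0, 5.0, 10.0]   # battery power, kW
    cost_to_go = {s: 0.0 for s in grid}        # zero terminal cost
    for p_dem in reversed(demand):
        new_cost = {}
        for s in grid:
            best = float("inf")
            for p_b in controls:
                s_next = round(s - p_b * dt_h / capacity_kwh, 2)
                if s_next in cost_to_go:       # respect SOC bounds
                    step = fuel_rate(p_dem, p_b) * dt_h * 3600.0
                    best = min(best, step + cost_to_go[s_next])
            new_cost[s] = best
        cost_to_go = new_cost
    return cost_to_go[round(soc0, 2)]

# demo: three 36-second steps of 20 kW demand, battery initially at 80% SOC
demand = [20.0, 20.0, 20.0]
optimal = dp_energy_management(demand, soc0=0.8)
engine_only = sum(fuel_rate(p, 0.0) * 0.01 * 3600.0 for p in demand)
```

    Even this toy shows the paper's computational point: DP cost grows with the product of SOC grid size, control set size, and horizon length, which is what the convex formulation avoids.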

  15. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming. Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managing...

  16. Jordan's 2002 to 2012 Fertility Stall and Parallel USAID Investments in Family Planning: Lessons From an Assessment to Guide Future Programming.

    Science.gov (United States)

    Spindler, Esther; Bitar, Nisreen; Solo, Julie; Menstell, Elizabeth; Shattuck, Dominick

    2017-12-28

    Health practitioners, researchers, and donors are stumped about Jordan's stalled fertility rate, which has stagnated between 3.7 and 3.5 children per woman from 2002 to 2012, above the national replacement level of 2.1. This stall paralleled United States Agency for International Development (USAID) funding investments in family planning in Jordan, triggering an assessment of USAID family planning programming in Jordan. This article describes the methods, results, and implications of the programmatic assessment. Methods included an extensive desk review of USAID programs in Jordan and 69 interviews with reproductive health stakeholders. We explored reasons for fertility stagnation in Jordan's total fertility rate (TFR) and assessed the effects of USAID programming on family planning outcomes over the same time period. The assessment results suggest that the increased use of less effective methods, in particular withdrawal and condoms, are contributing to Jordan's TFR stall. Jordan's limited method mix, combined with strong sociocultural determinants around reproduction and fertility desires, have contributed to low contraceptive effectiveness in Jordan. Over the same time period, USAID contributions toward increasing family planning access and use, largely focused on service delivery programs, were extensive. Examples of effective initiatives, among others, include task shifting of IUD insertion services to midwives due to a shortage of female physicians. However, key challenges to improved use of family planning services include limited government investments in family planning programs, influential service provider behaviors and biases that limit informed counseling and choice, pervasive strong social norms of family size and fertility, and limited availability of different contraceptive methods. 
In contexts where sociocultural norms and a limited method mix are the dominant barriers toward improved family planning use, increased national government investments

  17. The parallel adult education system

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne

    2015-01-01

    for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized on a part-time basis. The total program takes half the time...

  18. High-level PC-based laser system modeling

    Science.gov (United States)

    Taylor, Michael S.

    1991-05-01

    Since the inception of the Strategic Defense Initiative (SDI), there have been a multitude of comparison studies in an attempt to evaluate the effectiveness and relative sizes of complementary, and sometimes competitive, laser weapon systems. It became more and more apparent that what the systems analyst needed was not only a fast but also a cost-effective way to perform high-level trade studies. In the present investigation, a general procedure is presented for the development of PC-based algorithmic systems models for laser systems. This procedure points out all of the major issues that should be addressed in the design and development of such a model. Issues addressed include defining the problem to be modeled, defining a strategy for development, and finally, effective use of the model once developed. Being a general procedure, it will allow a systems analyst to develop a model to meet specific needs. To illustrate this method of model development, a description of the Strategic Defense Simulation - Design To (SDS-DT) model developed and used by Science Applications International Corporation (SAIC) is presented. SDS-DT is a menu-driven, fast-executing, PC-based program that can be used either to calculate performance, weight, volume, and cost values for a particular design or, alternatively, to run parametrics on particular system parameters to perhaps optimize a design.

  19. High-Level Performance Modeling of SAR Systems

    Science.gov (United States)

    Chen, Curtis

    2006-01-01

    SAUSAGE (Still Another Utility for SAR Analysis that's General and Extensible) is a computer program for modeling (see figure) the performance of synthetic-aperture radar (SAR) or interferometric synthetic-aperture radar (InSAR or IFSAR) systems. The user is assumed to be familiar with the basic principles of SAR imaging and interferometry. Given design parameters (e.g., altitude, power, and bandwidth) that characterize a radar system, the software predicts various performance metrics (e.g., signal-to-noise ratio and resolution). SAUSAGE is intended to be a general software tool for quick, high-level evaluation of radar designs; it is not meant to capture all the subtleties, nuances, and particulars of specific systems. SAUSAGE was written to facilitate the exploration of engineering tradeoffs within the multidimensional space of design parameters. Typically, this space is examined through an iterative process of adjusting the values of the design parameters and examining the effects of the adjustments on the overall performance of the system at each iteration. The software is designed to be modular and extensible to enable consideration of a variety of operating modes and antenna beam patterns, including, for example, strip-map and spotlight SAR acquisitions, polarimetry, burst modes, and squinted geometries.
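    SAUSAGE's internals are not given in this abstract, but the kind of high-level metric it reports can be illustrated with the textbook point-target radar equation. The function below and its example numbers are assumptions of ours, not SAUSAGE's actual model.

```python
# Generic point-target radar-equation sketch of the kind of high-level
# SNR estimate a tool like SAUSAGE reports. The formula is the textbook
# radar equation; the example numbers are assumptions, not a real design.
import math

def radar_snr_db(p_tx_w, gain_db, wavelength_m, sigma_m2, range_m,
                 bandwidth_hz, noise_temp_k=290.0, losses_db=3.0):
    """Single-pulse SNR in dB for a point target of RCS sigma_m2."""
    k_boltzmann = 1.380649e-23              # J/K
    g = 10.0 ** (gain_db / 10.0)
    loss = 10.0 ** (losses_db / 10.0)
    snr = (p_tx_w * g ** 2 * wavelength_m ** 2 * sigma_m2) / (
        (4.0 * math.pi) ** 3 * range_m ** 4
        * k_boltzmann * noise_temp_k * bandwidth_hz * loss)
    return 10.0 * math.log10(snr)

# parametric trade of the kind described above: doubling the slant range
# costs 40*log10(2), about 12 dB, of single-pulse SNR
snr_near = radar_snr_db(5e3, 40.0, 0.24, 10.0, 400e3, 20e6)
snr_far = radar_snr_db(5e3, 40.0, 0.24, 10.0, 800e3, 20e6)
```

    A real SAR model would add processing gain from pulse compression and azimuth integration on top of this single-pulse figure; the sketch only shows the iterative parameter-sweep workflow the abstract describes.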

  20. NOx AND HETEROGENEITY EFFECTS IN HIGH LEVEL WASTE (HLW)

    International Nuclear Information System (INIS)

    Meisel, Dan; Camaioni, Donald M.; Orlando, Thom

    2000-01-01

    We summarize contributions from our EMSP supported research to several field operations of the Office of Environmental Management (EM). In particular we emphasize its impact on safety programs at the Hanford and other EM sites where storage, maintenance and handling of HLW is a major mission. In recent years we were engaged in coordinated efforts to understand the chemistry initiated by radiation in HLW. Three projects of the EMSP ("The NOx System in Nuclear Waste," "Mechanisms and Kinetics of Organic Aging in High Level Nuclear Wastes" (D. Camaioni, PI), and "Interfacial Radiolysis Effects in Tanks Waste" (T. Orlando, PI)) were involved in that effort, which included a team at Argonne, later moved to the University of Notre Dame, and two teams at the Pacific Northwest National Laboratory. Much effort was invested in integrating the results of the scientific studies into the engineering operations via coordination meetings and participation in various stages of the resolution of some of the outstanding safety issues at the sites. However, in this Abstract we summarize the effort at Notre Dame

  1. The disposal of high-level radioactive waste. Vol. 1

    International Nuclear Information System (INIS)

    Parker, F.L.; Broshears, R.E.; Pasztor, J.

    1984-01-01

    The Beijer Institute received a request from the Swedish Board for Spent Nuclear Fuel (Naemnden for Anvaent Kaernbraensle - NAK) to undertake an international review of the major programmes then making arrangements for the future disposal of high-level radioactive wastes and spent nuclear fuel. The request was accepted, and a detailed proposal was worked out and agreed to by NAK for a critical technical review concentrating on the following three main tasks: 1. a 'state-of-the-art' review of selected ongoing disposal programmes, both national and international; 2. an assessment of the scientific and technical controversies involved; and 3. recommendations for further research in this field. This review work was to be built on a survey of the available technical literature, which was to serve as a basis for a series of detailed interviews, consultations and discussions with scientific and technical experts in Japan, Canada, USA, Belgium, Federal Republic of Germany, France, Switzerland and the United Kingdom. This first volume contains: disposal options; review of the state-of-the-art (international activities, national programs); analysis of waste disposal systems. (orig./HP)

  2. Synroc - a multiphase ceramic for high level nuclear waste immobilisation

    International Nuclear Information System (INIS)

    Reeve, K.D.; Vance, E.R.; Hart, K.P.; Smith, K.L.; Lumpkin, G.R.; Mercer, D.J.

    1992-01-01

    Many natural minerals - particularly titanates - are very durable geochemically, having survived for millions of years with very little alteration. Moreover, some of these minerals have quantitatively retained radioactive elements and their daughter products over this time. The Synroc concept mimics nature by providing an all-titanate synthetic mineral phase assemblage to immobilise high level waste (HLW) from nuclear fuel reprocessing operations for safe geological disposal. In principle, many chemically hazardous inorganic wastes arising from industry could also be immobilised in highly durable ceramics and disposed of geologically, but in practice the cost structure of most industries is such that lower cost waste management solutions - for example, the development of reusable by-products or the use of cements rather than ceramics - have to be devised. In many thousands of aqueous leach tests at ANSTO, mostly at 70-90 deg C, Synroc has been shown to be exceptionally durable. The emphases of the recent ANSTO program have been on tailoring of the Synroc composition to varying HLW compositions, leach testing of Synroc containing radioactive transuranic actinides, study of leaching mechanisms by SEM and TEM, and the development and costing of a conceptual fully active Synroc fabrication plant design. A summary of recent results on these topics will be presented. 29 refs., 4 figs

  3. Concentration of High Level Radioactive Liquid Waste. Basic data acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Juvenelle, A.; Masson, M.; Garrido, M.H. [DEN/VRH/DRCP/SCPS/LPCP, BP 17171 - 30207 Bagnols sur Ceze Cedex (France)

    2008-07-01

    Full text of publication follows: In order to enhance its knowledge of the concentration of high level liquid waste (HLLW) from the nuclear fuel reprocessing process, a program of studies was defined by the CEA. Over a large range of acidity, it proposes to characterize the concentrated solution and the resulting precipitates versus the concentration factor. Four steps are considered: quantification of the salting-out effect on the concentrate acidity, acquisition of solubility data, characterisation of the precipitates versus the concentration factor through aging tests, and concentration experiments starting from simulated fission product solutions. The first results, reported here, relate the acidity of the concentrated solution to the concentration factor and allow us to specify the acidity range (4 to 12 N) for the subsequent experiments. In this range, solubility data for various elements (Ba, Sr, Zr...) are measured separately at room temperature, first in nitric acid and then in the presence of various species present in the medium (TBP, PO{sub 4}{sup 3-}). The reactions between these various elements are then investigated (formation of insoluble mixed compounds) by following the cation concentrations in solution and characterising the precipitates. (authors)

  4. Salt removal from tanks containing high-level radioactive waste

    International Nuclear Information System (INIS)

    Kiser, D.L.

    1981-01-01

    At the Savannah River Plant (SRP), there are 23 waste storage tanks containing high-level radioactive wastes that are to be retired. These tanks contain about 23 million liters of salt and about 10 million liters of sludge, that are to be relocated to new Type III, fully stress-relieved tanks with complete secondary containment. About 19 million liters of salt cake are to be dissolved. Steam jet circulators were originally proposed for the salt dissolution program. However, use of steam jet circulators raised the temperature of the tank contents and caused operating problems. These included increased corrosion risk and required long cooldown periods prior to transfer. Alternative dissolution concepts were investigated. Examination of mechanisms affecting salt dissolution showed that the ability of fresh water to contact the cake surface was the most significant factor influencing dissolution rate. Density driven and mechanical agitation techniques were developed on a bench scale and then were demonstrated in an actual waste tank. Actual waste tank demonstrations were in good agreement with bench-scale experiments at 1/85 scale. The density driven method utilizes simple equipment, but leaves a cake heel in the tank and is hindered by the presence of sludge or Zeolite in the salt cake. Mechanical agitation overcomes the problems found with both steam jet circulators and the density driven technique and is the best method for future waste tank salt removal

  5. High level natural radiation areas with special regard to Ramsar

    International Nuclear Information System (INIS)

    Sohrabi, M.

    1993-01-01

    The studies of high level natural radiation areas (HLNRAs) around the world are of great importance for determining the risks due to long-term low-level whole-body exposure of the public. Many areas of the world possess HLNRAs, the number of which depends on the criteria defined. Detailed radiological studies have been carried out in some HLNRAs, the results of which have been reported in at least three international conferences. Among the HLNRAs, Ramsar has so far the highest level of natural radiation in some areas where radiological studies have been of concern. A program was established for Ramsar and its HLNRAs to study indoor and outdoor gamma exposures and the external and internal doses of the inhabitants; the 226 Ra content of public water supplies and hot springs, of foodstuffs, etc.; 222 Rn levels measured in 473 rooms of nearly 350 houses, 16 schools, and 89 rooms and many locations of the old and new Ramsar Hotels in different seasons; cytogenetic effects on inhabitants of Talesh Mahalleh, the highest radiation area, compared with those of a control area; and the radiological parameters of a house with a high potential for internal and external exposure of the inhabitants. It was concluded that the epidemiological studies in a number of countries did not show any evidence of increased health detriment in HLNRAs compared to control groups. In this paper, the conclusions drawn from studies in some HLNRAs around the world, in particular Ramsar, are discussed. (author). 20 refs, 2 figs, 1 tab

  6. Potential host media for a high-level waste repository

    Energy Technology Data Exchange (ETDEWEB)

    Hustrulid, W

    1982-01-01

    Earlier studies of burial of radioactive wastes in geologic repositories had concentrated on salt formations for well-publicized reasons. However, under the Carter administration, significant changes were made in the US nuclear waste management program. Changes which were made were: (1) expansion of the number of rock types under consideration; (2) adoption of the multiple-barrier approach to waste containment; (3) additional requirements for waste retrieval; and (4) new criteria proposed by the Nuclear Regulatory Commission for the isolation of high-level waste in geologic repositories. Results of the studies of different types of rocks as repository sites are summarized herein. It is concluded that each generic rock type has certain advantages and disadvantages when considered from various aspects of the waste disposal problem and that characteristics of rocks are so varied that a most favorable or least favorable rock type cannot be easily identified. This lack of definitive characteristics of rocks makes site selection and good engineering barriers very important for containment of the wastes. (BLM)

  7. 40 CFR 227.30 - High-level radioactive waste.

    Science.gov (United States)

    2010-07-01

    High-level radioactive waste means the aqueous waste resulting from the operation of the first cycle solvent extraction system, or equivalent, and the concentrated waste from...

  8. Discovery of high-level tasks in the operating room

    NARCIS (Netherlands)

    Bouarfa, L.; Jonker, P.P.; Dankelman, J.

    2010-01-01

    Recognizing and understanding surgical high-level tasks from sensor readings is important for surgical workflow analysis. Surgical high-level task recognition is also a challenging task in ubiquitous computing because of the inherent uncertainty of sensor data and the complexity of the operating room

  9. Characteristics of solidified high-level waste products

    International Nuclear Information System (INIS)

    1979-01-01

    The object of the report is to contribute to the establishment of a data bank for future preparation of codes of practice and standards for the management of high-level wastes. The work currently in progress on measuring the properties of solidified high-level wastes is being studied

  10. Process for solidifying high-level nuclear waste

    Science.gov (United States)

    Ross, Wayne A.

    1978-01-01

    The addition of a small amount of reducing agent to a mixture of a high-level radioactive waste calcine and glass frit before the mixture is melted will produce a more homogeneous glass which is leach-resistant and suitable for long-term storage of high-level radioactive waste products.

  11. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  12. Spent nuclear fuel project high-level information management plan

    Energy Technology Data Exchange (ETDEWEB)

    Main, G.C.

    1996-09-13

    This document presents the results of the Spent Nuclear Fuel Project (SNFP) Information Management Planning Project (IMPP), a short-term project that identified information management (IM) issues and opportunities within the SNFP and outlined a high-level plan to address them. This high-level plan for SNFP IM focuses on specific examples from within the SNFP. The plan's recommendations can be characterized in several ways. Some recommendations address specific challenges that the SNFP faces. Others form the basis for making smooth transitions in several important IM areas. Still others identify areas where further study and planning are indicated. The team's knowledge of developments in the IM industry and at the Hanford Site was crucial in deciding where to recommend that the SNFP act and where they should wait for Site plans to be made. Because of the fast pace of the SNFP and demands on SNFP staff, input and interaction were primarily between the IMPP team and members of the SNFP Information Management Steering Committee (IMSC). Key input to the IMPP came from a workshop where IMSC members and their delegates developed a set of draft IM principles. These principles, described in Section 2, became the foundation for the recommendations found in the transition plan outlined in Section 5. Availability of SNFP staff was limited, so project documents were used as a basis for much of the work. The team, realizing that the status of the project and the environment are continually changing, tried to keep abreast of major developments since those documents were generated. To the extent possible, the information contained in this document is current as of the end of fiscal year (FY) 1995. Programs and organizations on the Hanford Site as a whole are trying to maximize their return on IM investments. They are coordinating IM activities and trying to leverage existing capabilities.
However, the SNFP cannot just rely on Sitewide activities to meet its IM requirements

  13. Mental models of adherence: parallels in perceptions, values, and expectations in adherence to prescribed home exercise programs and other personal regimens.

    Science.gov (United States)

    Rizzo, Jon; Bell, Alexandra

    2018-05-09

    A mental model is the collection of an individual's perceptions, values, and expectations about a particular aspect of their life, which strongly influences behaviors. This study explored orthopedic outpatients' mental models of adherence to prescribed home exercise programs and how they related to mental models of adherence to other types of personal regimens. The study followed an interpretive description qualitative design. Data were collected via two semi-structured interviews. Interview One focused on participants' prior experiences adhering to personal regimens. Interview Two focused on experiences adhering to their current prescribed home exercise program. Data analysis followed a constant comparative method. Findings revealed similarity in the perceptions, values, and expectations that informed individuals' mental models of adherence to personal regimens and prescribed home exercise programs. Perceived realized results, expected results, perceived social supports, and the value of convenience characterized mental models of adherence. Parallels between mental models of adherence for prescribed home exercise and other personal regimens suggest that patients' adherence behavior to prescribed routines may be influenced by adherence experiences in other aspects of their lives. By gaining insight into patients' adherence experiences, values, and expectations across life domains, clinicians may tailor supports that enhance home exercise adherence. Implications for Rehabilitation A mental model is the collection of an individual's perceptions, values, and expectations about a particular aspect of their life, which is based on prior experiences and strongly influences behaviors. This study demonstrated similarity in orthopedic outpatients' mental models of adherence to prescribed home exercise programs and adherence to personal regimens in other aspects of their lives. Physical therapists should inquire about patients' non-medical adherence experiences, as strategies patients

  14. Internal combustion engine control for series hybrid electric vehicles by parallel and distributed genetic programming/multiobjective genetic algorithms

    Science.gov (United States)

    Gladwin, D.; Stewart, P.; Stewart, J.

    2011-02-01

    This article addresses the problem of maintaining a stable rectified DC output from the three-phase AC generator in a series-hybrid vehicle powertrain. The series-hybrid prime power source generally comprises an internal combustion (IC) engine driving a three-phase permanent magnet generator whose output is rectified to DC. A recent development has been to control the engine/generator combination by an electronically actuated throttle. This system can be represented as a nonlinear system with significant time delay. Previously, voltage control of the generator output has been achieved by model predictive methods such as the Smith Predictor. These methods rely on the incorporation of an accurate system model and time delay into the control algorithm, with a consequent increase in computational complexity in the real-time controller, and necessarily rely to some extent on the accuracy of the models. Two complementary performance objectives exist for the control system: firstly, to maintain the IC engine at its optimal operating point, and secondly, to supply a stable DC supply to the traction drive inverters. Achievement of these goals minimises the transient energy storage requirements at the DC link, with a consequent reduction in both weight and cost. These objectives imply constant-velocity operation of the IC engine under external load disturbances and changes in both operating conditions and vehicle speed set-points. In order to achieve these objectives, and to reduce the complexity of implementation, in this article a controller is designed using Genetic Programming methods in the Simulink modelling environment, with the aim of obtaining a relatively simple controller for the time-delay system which does not rely on the implementation of real-time system models or time-delay approximations in the controller. A methodology is presented to utilise the myriad of existing control blocks in the Simulink libraries to automatically evolve optimal control
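    The article evolves controller structures from Simulink blocks; as a much-reduced sketch of the same evolutionary idea, the code below uses a simple elitist genetic search over just two PI gains for an assumed first-order-plus-delay plant. The plant model, cost function (integral of absolute error), and all parameters are hypothetical, not the article's powertrain.

```python
# Much-reduced sketch of evolving a controller by search: instead of
# evolving Simulink block structures as in the article, we evolve two
# PI gains for an assumed first-order-plus-delay plant. The plant model,
# cost (integral of absolute error), and GA settings are all hypothetical.
import random

def simulate(kp, ki, steps=200, dt=0.05, delay_steps=4):
    """Closed-loop unit-step response of tau*y' = -y + u(t - delay) under
    PI control; returns the integral of absolute tracking error (IAE)."""
    tau, y, integ, iae = 1.0, 0.0, 0.0, 0.0
    u_hist = [0.0] * delay_steps            # actuator transport delay
    for _ in range(steps):
        err = 1.0 - y
        integ += err * dt
        u_hist.append(kp * err + ki * integ)
        u_delayed = u_hist.pop(0)
        y += dt * (-y + u_delayed) / tau
        iae += abs(err) * dt
    return iae

def evolve(pop_size=30, gens=25, seed=1):
    """Elitist genetic search over (kp, ki) minimizing the IAE."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.0, 3.0), rng.uniform(0.0, 3.0))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: simulate(*g))
        elite = pop[:pop_size // 3]          # keep the best third
        children = [(max(0.0, kp + rng.gauss(0.0, 0.2)),
                     max(0.0, ki + rng.gauss(0.0, 0.2)))
                    for kp, ki in (rng.choice(elite)
                                   for _ in range(pop_size - len(elite)))]
        pop = elite + children               # elitism: the best is never lost
    return min(pop, key=lambda g: simulate(*g))

best_kp, best_ki = evolve()
```

    The appeal noted in the abstract carries over even at this scale: the search needs only the ability to run the closed-loop simulation, not an explicit model or delay approximation inside the controller.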

  15. Key scientific challenges in geological disposal of high level radioactive waste

    International Nuclear Information System (INIS)

    Wang Ju

    2007-01-01

    The geological disposal of high-level radioactive waste is a challenging task facing the scientific and technical world. This paper introduces the latest progress of high-level radioactive waste disposal programs in the world, and discusses the following key scientific challenges: (1) precise prediction of the evolution of a repository site; (2) characteristics of the deep geological environment; (3) behaviour of deep rock mass, groundwater and engineering material under coupled conditions (intermediate to high temperature, geostress, hydraulic, chemical, biological and radiation processes, etc.); (4) geochemical behaviour of transuranic radionuclides at low concentration and their migration with groundwater; and (5) safety assessment of the disposal system. Several large-scale research projects and several hot topics related to high-level waste disposal are also introduced. (authors)

  16. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    Full Text Available There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with equational programming language (EPL. Our approach is based on a program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  17. National high-level waste systems analysis report

    Energy Technology Data Exchange (ETDEWEB)

    Kristofferson, K.; Oholleran, T.P.; Powell, R.H.

    1995-09-01

    This report documents the assessment of budgetary impacts, constraints, and repository availability on the storage and treatment of high-level waste and on both existing and pending negotiated milestones. The impacts of the availabilities of various treatment systems on schedule and throughput at four Department of Energy sites are compared to repository readiness in order to determine the prudent application of resources. The information modeled for each of these sites is integrated with a single national model. The report suggests a high-level-waste model that offers a national perspective on all high-level waste treatment and storage systems managed by the Department of Energy.

  18. National high-level waste systems analysis report

    International Nuclear Information System (INIS)

    Kristofferson, K.; Oholleran, T.P.; Powell, R.H.

    1995-09-01

    This report documents the assessment of budgetary impacts, constraints, and repository availability on the storage and treatment of high-level waste and on both existing and pending negotiated milestones. The impacts of the availabilities of various treatment systems on schedule and throughput at four Department of Energy sites are compared to repository readiness in order to determine the prudent application of resources. The information modeled for each of these sites is integrated with a single national model. The report suggests a high-level-waste model that offers a national perspective on all high-level waste treatment and storage systems managed by the Department of Energy

  19. Both the caspase CSP-1 and a caspase-independent pathway promote programmed cell death in parallel to the canonical pathway for apoptosis in Caenorhabditis elegans.

    Directory of Open Access Journals (Sweden)

    Daniel P Denning

    Full Text Available Caspases are cysteine proteases that can drive apoptosis in metazoans and have critical functions in the elimination of cells during development, the maintenance of tissue homeostasis, and responses to cellular damage. Although a growing body of research suggests that programmed cell death can occur in the absence of caspases, mammalian studies of caspase-independent apoptosis are confounded by the existence of at least seven caspase homologs that can function redundantly to promote cell death. Caspase-independent programmed cell death is also thought to occur in the invertebrate nematode Caenorhabditis elegans. The C. elegans genome contains four caspase genes (ced-3, csp-1, csp-2, and csp-3), of which only ced-3 has been demonstrated to promote apoptosis. Here, we show that CSP-1 is a pro-apoptotic caspase that promotes programmed cell death in a subset of cells fated to die during C. elegans embryogenesis. csp-1 is expressed robustly in late pachytene nuclei of the germline and is required maternally for its role in embryonic programmed cell deaths. Unlike CED-3, CSP-1 is not regulated by the APAF-1 homolog CED-4 or the BCL-2 homolog CED-9, revealing that csp-1 functions independently of the canonical genetic pathway for apoptosis. Previously we demonstrated that embryos lacking all four caspases can eliminate cells through an extrusion mechanism and that these cells are apoptotic. Extruded cells differ from cells that normally undergo programmed cell death not only by being extruded but also by not being engulfed by neighboring cells. In this study, we identify in csp-3; csp-1; csp-2 ced-3 quadruple mutants apoptotic cell corpses that fully resemble wild-type cell corpses: these caspase-deficient cell corpses are morphologically apoptotic, are not extruded, and are internalized by engulfing cells. We conclude that both caspase-dependent and caspase-independent pathways promote apoptotic programmed cell death and the phagocytosis of cell

  20. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
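
    The multi-process map offered by ROOT's MultiProc framework is compared in the abstract to Python's multiprocessing module. A rough analogue of that programming model can be sketched as follows (names here are illustrative, not ROOT's API; threads are used instead of processes to keep the sketch self-contained):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(func, items, workers=4):
    """Apply func to each independent work item concurrently and
    return the results in input order.  ROOT's MultiProc distributes
    work across separate processes; a thread pool is used here only
    so the sketch runs anywhere."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, items))

# e.g. "process" four data chunks independently
print(parallel_map(lambda chunk: sum(chunk), [[1, 2], [3, 4], [5, 6], [7, 8]]))
```

    The essential property shared with MultiProc is that the user expresses only the per-item task; scheduling and result collection are handled by the framework.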

  1. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  2. Electric Grid Expansion Planning with High Levels of Variable Generation

    Energy Technology Data Exchange (ETDEWEB)

    Hadley, Stanton W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); You, Shutang [Univ. of Tennessee, Knoxville, TN (United States); Shankar, Mallikarjun [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-02-01

    Renewables make up a growing proportion of generation capacity in U.S. power grids. As their variability has an increasing influence on power system operation, it is necessary to consider their impact on system expansion planning. To this end, this project studies the generation and transmission expansion co-optimization problem of the U.S. Eastern Interconnection (EI) power grid with a high wind power penetration rate. In this project, the generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. This study analyzed a time series creation method to capture the diversity of load and wind power across balancing regions in the EI system. The obtained time series can be easily introduced into the MIP co-optimization problem and then solved robustly through available MIP solvers. Simulation results show that the proposed time series generation method and the expansion co-optimization model can improve the expansion result significantly after considering the diversity of wind and load across EI regions. The improved expansion plan that combines generation and transmission will aid system planners and policy makers in maximizing social welfare. This study shows that modelling load and wind variations and diversities across balancing regions will produce significantly different expansion results compared with earlier studies. For example, if wind is modeled in more detail (by increasing the number of wind output levels) so that more wind blocks are considered in expansion planning, transmission expansion will be larger and the expansion timing will be earlier. Regarding generation expansion, more wind scenarios will slightly reduce wind generation expansion in the EI system and increase the expansion of other generation such as gas. Also, adopting detailed wind scenarios will reveal that it may be uneconomic to expand transmission networks for transmitting a large amount of wind power over a long distance.
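
    The co-optimization idea can be illustrated with a toy version (illustrative only, and vastly simpler than the EI-scale MIP in the study): enumerate the 0/1 build decisions and pick the cheapest plan whose total capacity covers the net load in every scenario.

```python
from itertools import product

def cheapest_plan(candidates, scenarios):
    """candidates: list of (name, capacity_MW, cost) build options.
    scenarios: net-load values (MW) that must all be covered.
    Brute-force the binary build decisions (a stand-in for a MIP
    solver) and return (cost, chosen names) of the cheapest
    feasible combination, or None if none is feasible."""
    best = None
    for picks in product([0, 1], repeat=len(candidates)):
        cap = sum(c[1] for c, x in zip(candidates, picks) if x)
        cost = sum(c[2] for c, x in zip(candidates, picks) if x)
        if all(cap >= load for load in scenarios):
            if best is None or cost < best[0]:
                best = (cost, [c[0] for c, x in zip(candidates, picks) if x])
    return best

# hypothetical candidates: a gas plant, a wind farm, a transmission line
plans = [("gas", 400, 30), ("wind", 300, 18), ("line", 500, 45)]
print(cheapest_plan(plans, scenarios=[550, 620, 480]))
```

    Adding more wind scenarios tightens the feasibility constraint, which is the mechanism behind the study's observation that detailed wind modelling changes the expansion result.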

  3. Material chemistry challenges in vitrification of high level radioactive waste

    International Nuclear Information System (INIS)

    Kaushik, C.P.

    2008-01-01

    Full text: Nuclear technology, with an effective environmental management plan and focused attention on safety measures, is a much cleaner source of electricity generation than other sources. With this perspective, India has undertaken a nuclear energy program to meet a substantial part of its future need for power. Safe containment and isolation of nuclear waste from the human environment is an indispensable part of this programme. The majority of the radioactivity in the entire nuclear fuel cycle resides in high level radioactive liquid waste (HLW), which is generated during reprocessing of spent nuclear fuels. A three stage strategy for management of HLW has been adopted in India. This involves (i) immobilization of waste oxides in stable and inert solid matrices, (ii) interim retrievable storage of the conditioned waste product under continuous cooling and (iii) disposal in deep geological formations. A borosilicate glass matrix has been adopted in India for immobilization of HLW. Material issues are very important during the entire process of waste immobilization. The performance of the materials used in nuclear waste management determines its safety/hazards. Material chemistry therefore has a significant bearing on immobilization science and its technological development for management of HLW. The choice of a suitable waste form for nuclear waste immobilization is a difficult decision, and the durability of the conditioned product is not the sole criterion. In any immobilization process where radioactive materials are involved, the process and operational conditions play an important role in the final selection of a suitable glass formulation. In a remotely operated vitrification process, study of the chemistry of materials such as glass, the melter, and the materials of construction of other equipment under high temperature and hostile corrosive conditions assumes significance for safe and uninterrupted vitrification of radioactive waste to ensure its isolation from the human environment. 
The present

  4. PLUTONIUM/HIGH-LEVEL VITRIFIED WASTE BDBE DOSE CALCULATION

    International Nuclear Information System (INIS)

    D.C. Richardson

    2003-01-01

    In accordance with the Nuclear Waste Policy Amendments Act of 1987, Yucca Mountain was designated as the site to be investigated as a potential repository for the disposal of high-level radioactive waste. The Yucca Mountain site is an undeveloped area located on the southwestern edge of the Nevada Test Site (NTS), about 100 miles northwest of Las Vegas. The site currently lacks rail service or an existing right-of-way. If the Yucca Mountain site is found suitable for the repository, rail service is desirable to the Office of Civilian Waste Management (OCRWM) Program because of the potential of rail transportation to reduce costs and to reduce the number of shipments relative to highway transportation. A Preliminary Rail Access Study evaluated 13 potential rail spur options. Alternative routes within the major options were also developed. Each of these options was then evaluated for potential land use conflicts and access to regional rail carriers. Three potential routes having few land use conflicts and having access to regional carriers were recommended for further investigation. Figure 1-1 shows these three routes. The Jean route is estimated to be about 120 miles long, the Carlin route to be about 365 miles long, and Caliente route to be about 365 miles long. The remaining ten routes continue to be monitored and should any of the present conflicts change, a re-evaluation of that route will be made. Complete details of the evaluation of the 13 routes can be found in the previous study. The DOE has not identified any preferred route and recognizes that the transportation issues need a full and open treatment under the National Environmental Policy Act. The issue of transportation will be included in public hearings to support development of the Environmental Impact Statement (EIS) proceedings for either the Monitored Retrievable Storage Facility or the Yucca Mountain Project or both

  5. PLUTONIUM/HIGH-LEVEL VITRIFIED WASTE BDBE DOSE CALCULATION

    Energy Technology Data Exchange (ETDEWEB)

    D.C. Richardson

    2003-03-19

    In accordance with the Nuclear Waste Policy Amendments Act of 1987, Yucca Mountain was designated as the site to be investigated as a potential repository for the disposal of high-level radioactive waste. The Yucca Mountain site is an undeveloped area located on the southwestern edge of the Nevada Test Site (NTS), about 100 miles northwest of Las Vegas. The site currently lacks rail service or an existing right-of-way. If the Yucca Mountain site is found suitable for the repository, rail service is desirable to the Office of Civilian Waste Management (OCRWM) Program because of the potential of rail transportation to reduce costs and to reduce the number of shipments relative to highway transportation. A Preliminary Rail Access Study evaluated 13 potential rail spur options. Alternative routes within the major options were also developed. Each of these options was then evaluated for potential land use conflicts and access to regional rail carriers. Three potential routes having few land use conflicts and having access to regional carriers were recommended for further investigation. Figure 1-1 shows these three routes. The Jean route is estimated to be about 120 miles long, the Carlin route to be about 365 miles long, and Caliente route to be about 365 miles long. The remaining ten routes continue to be monitored and should any of the present conflicts change, a re-evaluation of that route will be made. Complete details of the evaluation of the 13 routes can be found in the previous study. The DOE has not identified any preferred route and recognizes that the transportation issues need a full and open treatment under the National Environmental Policy Act. The issue of transportation will be included in public hearings to support development of the Environmental Impact Statement (EIS) proceedings for either the Monitored Retrievable Storage Facility or the Yucca Mountain Project or both.

  6. Handling and storage of conditioned high-level wastes

    International Nuclear Information System (INIS)

    1983-01-01

    This report deals with certain aspects of the management of one of the most important wastes, i.e. the handling and storage of conditioned (immobilized and packaged) high-level waste from the reprocessing of spent nuclear fuel and, although much of the material presented here is based on information concerning high-level waste from reprocessing LWR fuel, the principles, as well as many of the details involved, are applicable to all fuel types. The report provides illustrative background material on the arising and characteristics of high-level wastes and, qualitatively, their requirements for conditioning. The report introduces the principles important in conditioned high-level waste storage and describes the types of equipment and facilities, used or studied, for handling and storage of such waste. Finally, it discusses the safety and economic aspects that are considered in the design and operation of handling and storage facilities

  7. Technical career opportunities in high-level radioactive waste management

    International Nuclear Information System (INIS)

    1993-01-01

    Technical career opportunities in high-level radioactive waste management are briefly described in the areas of: Hydrology; geology; biological sciences; mathematics; engineering; heavy equipment operation; and skilled labor and crafts

  8. Glasses used for the high level radioactive wastes storage

    International Nuclear Information System (INIS)

    Sombret, C.

    1983-06-01

    High level radioactive wastes generated by the reprocessing of spent fuels is an important concern in the conditioning of radioactive wastes. This paper deals with the status of the knowledge about glasses used for the treatment of these liquids [fr]

  9. Handling and storage of conditioned high-level wastes

    International Nuclear Information System (INIS)

    Heafield, W.

    1984-01-01

    This paper deals with certain aspects of the management of one of the most important radioactive wastes arising from the nuclear fuel cycle, i.e. the handling and storage of conditioned high-level wastes. The paper is based on an IAEA report of the same title published during 1983 in the Technical Reports Series. The paper provides illustrative background material on the characteristics of high-level wastes and, qualitatively, their requirements for conditioning. The principles important in the storage of high-level wastes are reviewed in conjunction with the radiological and socio-political considerations involved. Four fundamentally different storage concepts are described with reference to published information and the safety aspects of particular storage concepts are discussed. Finally, overall conclusions are presented which confirm the availability of technology for constructing and operating conditioned high-level waste storage facilities for periods of at least several decades. (author)

  10. Development of melt compositions for sulphate bearing high level waste

    International Nuclear Information System (INIS)

    Jahagirdar, P.B.; Wattal, P.K.

    1997-09-01

    The report deals with the development and characterization of vitreous matrices for sulphate bearing high level waste. Studies were conducted in sodium borosilicate and lead borosilicate systems with the introduction of CaO, BaO, MgO etc. Lead borosilicate system was found to be compatible with sulphate bearing high level wastes. Detailed product evaluation carried on selected formulations is also described. (author)

  11. Properties and characteristics of high-level waste glass

    International Nuclear Information System (INIS)

    Ross, W.A.

    1977-01-01

    This paper has briefly reviewed many of the characteristics and properties of high-level waste glasses. From this review, it can be noted that glass has many desirable properties for solidification of high-level wastes. The most important of these include: (1) its low leach rate; (2) the ability to tolerate large changes in waste composition; (3) the tolerance of anticipated storage temperatures; (4) its low surface area even after thermal shock or impact

  12. High-Level Waste System Process Interface Description

    International Nuclear Information System (INIS)

    D'Entremont, P.D.

    1999-01-01

    The High-Level Waste System is a set of six different processes interconnected by pipelines. These processes function as one large treatment plant that receives, stores, and treats high-level wastes from various generators at SRS and converts them into forms suitable for final disposal. The three major forms are borosilicate glass, which will be eventually disposed of in a Federal Repository, Saltstone to be buried on site, and treated water effluent that is released to the environment

  13. Optimisation of a parallel ocean general circulation model

    OpenAIRE

    M. I. Beare; D. P. Stevens

    1997-01-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by...
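
    The idea of hiding message-passing behind high-level routines can be sketched as follows (a pure-Python stand-in: a real model would call MPI underneath, but the user-facing routine would look the same):

```python
def exchange_halos(subdomains):
    """Swap boundary values between neighbouring 1-D subdomains.
    Each subdomain is a list with one ghost cell at each end; this
    routine fills the ghosts from the neighbours' edge values, the
    way an MPI halo exchange would, without exposing any
    message-passing details to the caller."""
    for left, right in zip(subdomains, subdomains[1:]):
        left[-1] = right[1]    # left's right ghost <- right's first real cell
        right[0] = left[-2]    # right's left ghost <- left's last real cell
    return subdomains

# two subdomains of a decomposed ocean row; ghost cells start at 0
a = [0, 1.0, 2.0, 0]
b = [0, 3.0, 4.0, 0]
print(exchange_halos([a, b]))
```

    After the exchange, each subdomain can advance its interior cells independently, which is what makes the decomposition parallel.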

  14. An Introduction to Parallel Computation R

    Indian Academy of Sciences (India)

    How are they programmed? This article provides an introduction. A parallel computer is a network of processors built for ... and have been used to solve problems much faster than a single ... in parallel computer design is to select an organization which ... The most ambitious approach to parallel computing is to develop.

  15. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs and discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.
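
    The orbit-decomposition idea can be sketched independently of the thesis's group-equivariant machinery (the group and board here are hypothetical toy choices): partition a problem domain into orbits under a symmetry group, compute once per orbit, and reuse the result across each orbit.

```python
def orbits(points, group):
    """Partition points into orbits under a group of transformations,
    so that any group-invariant computation need only be done once
    per orbit rather than once per point."""
    seen, result = set(), []
    for p in points:
        if p in seen:
            continue
        orbit = {g(p) for g in group}
        seen |= orbit
        result.append(orbit)
    return result

# a small symmetry group of a 3x3 board: identity, horizontal and
# vertical mirrors, and 180-degree rotation
group = [
    lambda p: p,
    lambda p: (2 - p[0], p[1]),
    lambda p: (p[0], 2 - p[1]),
    lambda p: (2 - p[0], 2 - p[1]),
]
cells = [(i, j) for i in range(3) for j in range(3)]
print(len(orbits(cells, group)))  # 9 cells collapse to 4 orbits
```

    On a real problem such as chess endgames, the savings scale with the orbit sizes, and the orbit structure also guides how work is mapped onto the processor network.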

  16. Cascade Boosting-Based Object Detection from High-Level Description to Hardware Implementation

    Directory of Open Access Journals (Sweden)

    K. Khattab

    2009-01-01

    Full Text Available Object detection forms the first step of a larger setup for a wide variety of computer vision applications. The focus of this paper is the implementation of a real-time embedded object detection system while relying on a high-level description language such as SystemC. Boosting-based object detection algorithms are considered the fastest accurate object detection algorithms today. However, the implementation of a real-time solution for such algorithms is still a challenge. A new parallel implementation, which exploits the parallelism and the pipelining in these algorithms, is proposed. We show that using a SystemC description model paired with a mainstream automatic synthesis tool can lead to an efficient embedded implementation. We also display some of the tradeoffs and considerations for this implementation to be effective. This implementation proves capable of achieving 42 fps for 320×240 images as well as bringing regularity to its time consumption.
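
    The cascade principle such detectors build on can be sketched generically (the stages and thresholds below are made up for illustration): cheap early stages reject most candidate windows so that later, costlier stages run on only a few.

```python
def cascade_detect(window, stages):
    """Run a candidate window through boosting-style stages; reject
    on the first stage whose score falls below its threshold."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False    # early rejection: later stages never run
    return True             # survived every stage -> detection

# toy stages: mean brightness first, then contrast (max - min)
stages = [
    (lambda w: sum(w) / len(w), 0.3),
    (lambda w: max(w) - min(w), 0.5),
]
print(cascade_detect([0.2, 0.9, 0.7], stages))  # passes both stages
print(cascade_detect([0.1, 0.1, 0.2], stages))  # rejected by stage 1
```

    The early-exit structure is also what makes the hardware pipelining discussed in the paper attractive: each stage can be a pipeline element that forwards only surviving windows.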

  17. Site selection and characterization processes for deep geologic disposal of high level nuclear waste

    International Nuclear Information System (INIS)

    Costin, L.S.

    1997-10-01

    In this paper, the major elements of the site selection and characterization processes used in the US high level waste program are discussed. While much of the evolution of the site selection and characterization processes has been driven by the unique nature of the US program, these processes, which are well defined and documented, could be used as an initial basis for developing site screening, selection, and characterization programs in other countries. Thus, this paper focuses more on the process elements than the specific details of the US program

  18. Site selection and characterization processes for deep geologic disposal of high level nuclear waste

    International Nuclear Information System (INIS)

    Costin, L.S.

    1997-01-01

    In this paper, the major elements of the site selection and characterization processes used in the U. S. high level waste program are discussed. While much of the evolution of the site selection and characterization processes has been driven by the unique nature of the U. S. program, these processes, which are well-defined and documented, could be used as an initial basis for developing site screening, selection, and characterization programs in other countries. Thus, this paper focuses more on the process elements than the specific details of the U. S. program. (author). 3 refs., 2 tabs., 5 figs

  19. Site selection and characterization processes for deep geologic disposal of high level nuclear waste

    Energy Technology Data Exchange (ETDEWEB)

    Costin, L.S. [Sandia National Labs., Albuquerque, NM (United States)

    1997-12-31

    In this paper, the major elements of the site selection and characterization processes used in the U. S. high level waste program are discussed. While much of the evolution of the site selection and characterization processes has been driven by the unique nature of the U. S. program, these processes, which are well-defined and documented, could be used as an initial basis for developing site screening, selection, and characterization programs in other countries. Thus, this paper focuses more on the process elements than the specific details of the U. S. program. (author). 3 refs., 2 tabs., 5 figs.

  20. Parallelizing AT with MatlabMPI

    International Nuclear Information System (INIS)

    2011-01-01

    The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems dealing with computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations, rather than programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to upgrade its performance to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, which were developed by MIT Lincoln Laboratory, to set up a message-passing environment that could be called within MATLAB, establishing the necessary prerequisites for multithread processing capabilities. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we were able to demonstrate highly efficient per-processor speed increases in AT's beam-tracking functions. Extrapolating from these results, we can expect to reduce week-long computation runtimes to less than 15 minutes. This is a huge performance improvement and has enormous implications for the future computing power of the accelerator physics group at SSRL. However, one of the downfalls of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while
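
    The quoted figures are mutually consistent if the "nearly 380%" figure is read as a roughly 3.8x speedup on four cores; speedup and parallel efficiency are related by a simple formula, sketched below.

```python
def parallel_metrics(t_serial, t_parallel, n_procs):
    """Return (speedup, efficiency) for a parallel run:
    speedup = t_serial / t_parallel, efficiency = speedup / n_procs."""
    speedup = t_serial / t_parallel
    return speedup, speedup / n_procs

# reading the reported figure as a 3.8x speedup on a quad-core CPU:
s, e = parallel_metrics(t_serial=3.8, t_parallel=1.0, n_procs=4)
print(f"speedup {s:.1f}x, efficiency {e:.0%}")  # speedup 3.8x, efficiency 95%
```

    The same relation explains the week-to-minutes extrapolation: sustaining near-ideal efficiency on many more processors would divide the serial runtime almost linearly by the processor count.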

  1. Advanced High-Level Waste Glass Research and Development Plan

    Energy Technology Data Exchange (ETDEWEB)

    Peeler, David K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vienna, John D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Schweiger, Michael J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fox, Kevin M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-07-01

    The U.S. Department of Energy Office of River Protection (ORP) has implemented an integrated program to increase the loading of Hanford tank wastes in glass while meeting melter lifetime expectancies and process, regulatory, and product quality requirements. The integrated ORP program is focused on providing a technical, science-based foundation from which key decisions can be made regarding the successful operation of the Hanford Tank Waste Treatment and Immobilization Plant (WTP) facilities. The fundamental data stemming from this program will support development of advanced glass formulations, key process control models, and tactical processing strategies to ensure safe and successful operations for both the low-activity waste (LAW) and high-level waste (HLW) vitrification facilities with an appreciation toward reducing overall mission life. The purpose of this advanced HLW glass research and development plan is to identify the near-, mid-, and longer-term research and development activities required to develop and validate advanced HLW glasses and their associated models to support facility operations at WTP, including both direct feed and full pretreatment flowsheets. This plan also integrates technical support of facility operations and waste qualification activities to show the interdependence of these activities with the advanced waste glass (AWG) program to support the full WTP mission. Figure ES-1 shows these key ORP programmatic activities and their interfaces with both WTP facility operations and qualification needs. The plan is a living document that will be updated to reflect key advancements and mission strategy changes. The research outlined here is motivated by the potential for substantial economic benefits (e.g., significant increases in waste throughput and reductions in glass volumes) that will be realized when advancements in glass formulation continue and models supporting facility operations are implemented. 
Developing and applying advanced

  2. EDDY RESOLVING NUTRIENT ECODYNAMICS IN THE GLOBAL PARALLEL OCEAN PROGRAM AND CONNECTIONS WITH TRACE GASES IN THE SULFUR, HALOGEN AND NMHC CYCLES

    Energy Technology Data Exchange (ETDEWEB)

    S. CHU; S. ELLIOTT

    2000-08-01

    Ecodynamics and the sea-air transfer of climate relevant trace gases are intimately coupled in the oceanic mixed layer. Ventilation of species such as dimethyl sulfide and methyl bromide constitutes a key linkage within the earth system. We are creating a research tool for the study of marine trace gas distributions by implementing coupled ecology-gas chemistry in the high resolution Parallel Ocean Program (POP). The fundamental circulation model is eddy resolving, with cell sizes averaging 0.15 degree (lat/long). Here we describe ecochemistry integration. Density dependent mortality and iron geochemistry have enhanced agreement with chlorophyll measurements. Indications are that dimethyl sulfide production rates must be adjusted for latitude dependence to match recent compilations. This may reflect the need for phytoplankton to conserve nitrogen by favoring sulfurous osmolytes. Global simulations are also available for carbonyl sulfide, the methyl halides and for nonmethane hydrocarbons. We discuss future applications including interaction with atmospheric chemistry models, high resolution biogeochemical snapshots and the study of open ocean fertilization.

  3. Process Design Concepts for Stabilization of High Level Waste Calcine

    Energy Technology Data Exchange (ETDEWEB)

    T. R. Thomas; A. K. Herbst

    2005-06-01

    The current baseline assumption is that packaging "as is" and direct disposal of high-level waste (HLW) calcine in a Monitored Geologic Repository will be allowed. The fallback position, in case regulatory initiatives are unsuccessful, is to develop a stabilized waste form for the HLW calcine that will meet the repository waste acceptance criteria currently in place. A decision between direct disposal and a stabilization alternative is anticipated by June 2006. The purposes of this Engineering Design File (EDF) are to provide a pre-conceptual design of three low-temperature processes under development for stabilization of HLW calcine (i.e., the grout, hydroceramic grout, and iron phosphate ceramic processes) and to support a down-selection among the three candidates. The key assumptions for the pre-conceptual design assessment are that (a) a waste treatment plant would operate over eight years for 200 days a year, (b) a design processing rate of 3.67 m3/day or 4670 kg/day of HLW calcine would be needed, (c) the performance of the waste form would remove the HLW calcine from the hazardous waste category, and (d) the waste form loadings would range from about 21-25 wt% calcine. The conclusions of this EDF study are that: (a) To date, the grout formulation appears to be the best candidate stabilizer among the three being tested for HLW calcine and appears to be the easiest to mix, pour, and cure. (b) Only minor differences would exist between the process steps of the grout and hydroceramic grout stabilization processes. If temperature control of the mixer at about 80 °C is required, it would add a major level of complexity to the iron phosphate stabilization process. (c) It is too early in the development program to determine which stabilizer will produce the minimum amount of stabilized waste form for the entire HLW inventory, but the volume is assumed to be within the range of 12,250 to 14,470 m3. (d) The stacked vessel height of the hot process vessels
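The design-basis figures quoted above (4670 kg/day of calcine, 200 operating days per year, eight years, 21-25 wt% calcine loading) imply a total calcine mass and a waste-form mass range that can be checked with simple arithmetic, as sketched below. The density needed to convert waste-form mass into the quoted 12,250-14,470 m3 volume range is not stated in the abstract, so no density is assumed here.

```python
# Back-of-envelope check of the design-basis throughput figures quoted above.

CALCINE_RATE_KG_PER_DAY = 4670
DAYS_PER_YEAR = 200
YEARS = 8

# Total calcine processed over the campaign: 4670 * 200 * 8 = 7,472,000 kg.
total_calcine_kg = CALCINE_RATE_KG_PER_DAY * DAYS_PER_YEAR * YEARS

def waste_form_mass_kg(calcine_kg, loading_wt_frac):
    """Total stabilized waste-form mass implied by a given calcine loading."""
    return calcine_kg / loading_wt_frac

# Higher loading means less total waste form: the 21-25 wt% range brackets
# the stabilized mass between roughly 29.9 and 35.6 million kg.
mass_at_25_wt = waste_form_mass_kg(total_calcine_kg, 0.25)
mass_at_21_wt = waste_form_mass_kg(total_calcine_kg, 0.21)
```

Dividing either mass by the quoted volume range would give the implied waste-form density, but since the abstract does not supply one, that step is left to the reader.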

  4. Immobilization of defense high-level waste: an assessment of technological strategies and potential regulatory goals. Volume II

    International Nuclear Information System (INIS)

    1979-06-01

    This volume contains the following appendices: selected immobilization processes, directory of selected European organizations involved in HLW management, U.S. high-level waste inventories, and selected European HLW program

  5. The scope and nature of the problem of high level nuclear waste disposal

    International Nuclear Information System (INIS)

    Jennekens, J.

    1981-09-01

    The disposal of high level nuclear waste poses a challenge to the Canadian technical and scientific communities, but a much greater challenge to government and industry leaders who must convince the public that the so-called 'problem' can be resolved by a pragmatic approach utilizing existing skills and knowledge. This paper outlines the objectives of radioactive waste management, the quantities of high level waste expected to be produced by the Canadian nuclear power program, the regulatory process which will apply and the government initiatives which have been and will be taken to ensure that the health, safety, security, and environmental interests of the public will be protected. (author)

  6. Evaluation of the FIR Example using Xilinx Vivado High-Level Synthesis Compiler

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Zheming [Argonne National Lab. (ANL), Argonne, IL (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Yoshii, Kazutomo [Argonne National Lab. (ANL), Argonne, IL (United States); Cappello, Franck [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-07-28

    Compared to central processing units (CPUs) and graphics processing units (GPUs), field programmable gate arrays (FPGAs) have major advantages in reconfigurability and performance achieved per watt. The traditional FPGA development flow has been augmented with a high-level synthesis (HLS) flow that can convert programs written in a high-level programming language to a hardware description language (HDL). Using high-level programming languages such as C, C++, and OpenCL for FPGA-based development allows software developers with little FPGA knowledge to take advantage of FPGA-based application acceleration. This improves developer productivity and makes FPGA-based acceleration accessible to both hardware and software developers. The Xilinx Vivado HLS compiler is a high-level synthesis tool that enables C, C++, and SystemC specifications to be targeted directly to Xilinx FPGAs without the need to create RTL manually. The white paper [1] published recently by Xilinx uses a finite impulse response (FIR) example to demonstrate the variable-precision features of the Vivado HLS compiler and the resource and power benefits of converting a design from floating point to fixed point. To gain a better understanding of the variable-precision features in terms of resource usage and performance, this report presents the experimental results of evaluating the FIR example using Vivado HLS 2017.1 and a Kintex UltraScale FPGA. In addition, we evaluated the half-precision floating-point data type against the double-precision and single-precision data types and present the detailed results.
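The float-to-fixed tradeoff at the heart of the FIR study can be illustrated without any FPGA tooling. Vivado HLS expresses variable precision through `ap_fixed<>` C++ types; the plain-Python sketch below emulates a Q1.15-style fixed-point format with integers to show that a fixed-point FIR tracks the floating-point reference to within a small quantization error. The filter taps and samples are arbitrary illustrative values.

```python
# Plain-Python sketch of the floating- vs fixed-point tradeoff examined in
# the FIR study. A Q1.15-style format is emulated with integers; Vivado HLS
# would express the same idea with ap_fixed<> types in C++.

def to_fixed(x, frac_bits=15):
    """Quantize a float in [-1, 1) to a signed fixed-point integer."""
    return int(round(x * (1 << frac_bits)))

def fir_float(samples, coeffs):
    """Reference FIR: y[n] = sum_k c[k] * x[n-k]."""
    return [sum(c * samples[n - k] for k, c in enumerate(coeffs) if n - k >= 0)
            for n in range(len(samples))]

def fir_fixed(samples, coeffs, frac_bits=15):
    """Same FIR computed entirely in integer (fixed-point) arithmetic."""
    qs = [to_fixed(s, frac_bits) for s in samples]
    qc = [to_fixed(c, frac_bits) for c in coeffs]
    out = []
    for n in range(len(qs)):
        acc = 0
        for k, c in enumerate(qc):
            if n - k >= 0:
                acc += c * qs[n - k]          # product has 2*frac_bits fraction bits
        out.append(acc / float(1 << (2 * frac_bits)))  # rescale for comparison
    return out

coeffs = [0.25, 0.5, 0.25]            # simple low-pass taps (illustrative)
samples = [0.0, 0.5, 0.9, 0.5, 0.0]
exact = fir_float(samples, coeffs)
approx = fir_fixed(samples, coeffs)
max_err = max(abs(a - b) for a, b in zip(exact, approx))
```

On hardware, the payoff of the fixed-point version is that the multiply-accumulate maps onto narrow integer DSP resources instead of floating-point operators, which is the resource and power benefit the Xilinx white paper quantifies.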

  7. Argentine project for the final disposal of high-level radioactive wastes

    International Nuclear Information System (INIS)

    Palacios, E.; Ciallella, N.R.; Petraitis, E.J.

    1989-01-01

    Since 1980, Argentina has been carrying out a research program on the final disposal of high-level radioactive wastes. The quantity of wastes produced will be significant in the next century; however, it was decided to start the studies well in advance in order to demonstrate that high-level wastes can be disposed of safely. The option of direct disposal of irradiated fuel elements was discarded, not only because of the energy value of the plutonium, but also for ecological reasons. In fact, the presence of the total inventory of actinides in unprocessed fuel would imply a more important radiological impact than that caused if the plutonium is recycled to produce energy. The decision to resolve the technological aspects connected with the elimination of high-level radioactive wastes well in advance was made to avoid transferring the problem to future generations. This decision is based not only on technical evaluations but also on ethical premises. (Author)

  8. High-level waste processing at the Savannah River Site: An update

    International Nuclear Information System (INIS)

    Marra, J.E.; Bennett, W.M.; Elder, H.H.; Lee, E.D.; Marra, S.L.; Rutland, P.L.

    1997-01-01

    The Defense Waste Processing Facility (DWPF) at the Savannah River Site (SRS) in Aiken, SC began immobilizing high-level radioactive waste in borosilicate glass in 1996. Currently, the radioactive glass is being produced as a ''sludge-only'' composition by combining washed high-level waste sludge with glass frit. The glass is poured into stainless steel canisters which will eventually be disposed of in a permanent geological repository. To date, DWPF has produced about 100 canisters of vitrified waste. Future processing operations will be based on a ''coupled'' feed of washed high-level waste sludge, precipitated cesium, and glass frit. This paper provides an update of the processing activities completed to date, operational/flowsheet problems encountered, and programs underway to increase production rates

  9. Towards a streaming model for nested data parallelism

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2013-01-01

    The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability, obtained through a uniform, parallelism-flattening [...] processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work in progress.
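The flattening strategy mentioned in this abstract can be illustrated with a minimal sketch: a ragged nested sequence is represented as a flat data vector plus a segment descriptor, so a nested map becomes a single flat map that a data-parallel backend could execute uniformly. This is a generic illustration of the NESL-style representation, not the authors' streaming semantics.

```python
# Minimal illustration of the flattening idea behind NESL-style nested data
# parallelism: a nested sequence becomes (segment lengths, flat data), and a
# nested map becomes one flat, uniformly parallelizable map.

def flatten(nested):
    """[[1, 2], [3]] -> ([2, 1], [1, 2, 3])."""
    segs = [len(s) for s in nested]
    data = [x for s in nested for x in s]
    return segs, data

def unflatten(segs, data):
    """Inverse of flatten: rebuild the nested structure from segments."""
    out, i = [], 0
    for n in segs:
        out.append(data[i:i + n])
        i += n
    return out

def nested_map(f, nested):
    """Apply f to every element via the flat representation."""
    segs, data = flatten(nested)
    return unflatten(segs, [f(x) for x in data])  # one flat map

result = nested_map(lambda x: x * x, [[1, 2], [], [3, 4, 5]])
```

The space issue the paper addresses arises because the flat data vector materializes *all* available parallelism at once; a streaming model instead processes such flat vectors piecewise, bounding memory by the number of actual processors.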

  10. Techniques for the solidification of high-level wastes

    International Nuclear Information System (INIS)

    1977-01-01

    The problem of the long-term management of the high-level wastes from the reprocessing of irradiated nuclear fuel is receiving world-wide attention. While the majority of the waste solutions from the reprocessing of commercial fuels are currently being stored in stainless-steel tanks, increasing effort is being devoted to developing technology for the conversion of these wastes into solids. A number of full-scale solidification facilities are expected to come into operation in the next decade. The object of this report is to survey and compare all the work currently in progress on the techniques available for the solidification of high-level wastes. It will examine the high-level liquid wastes arising from the various processes currently under development or in operation, the advantages and disadvantages of each process for different types and quantities of waste solutions, the stages of development, the scale-up potential and flexibility of the processes

  11. Spent fuel and high-level radioactive waste storage

    International Nuclear Information System (INIS)

    Trigerman, S.

    1988-06-01

    The subject of spent fuel and high-level radioactive waste storage is bibliographically reviewed. The review shows that in the majority of countries, spent fuel and high-level radioactive wastes are planned to be stored for tens of years. Sites for final disposal of high-level radioactive wastes have not yet been found. A first final disposal facility is expected to come into operation in the United States of America by the year 2010. Other final disposal facilities are expected to come into operation in Germany, Sweden, Switzerland and Japan by the year 2020. Meanwhile, stress is placed upon the 'dry storage' method, which is carried out successfully in a number of countries (Britain and France). In the United States of America spent fuels are stored in water pools while the 'dry storage' method is still being investigated. (Author)

  12. The ATLAS High-Level Calorimeter Trigger in Run-2

    CERN Document Server

    Wiglesworth, Craig; The ATLAS collaboration

    2018-01-01

    The ATLAS Experiment uses a two-level triggering system to identify and record collision events containing a wide variety of physics signatures. It reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of 1 kHz, whilst maintaining high efficiency for interesting collision events. It is composed of an initial hardware-based level-1 trigger followed by a software-based high-level trigger. A central component of the high-level trigger is the calorimeter trigger. This is responsible for processing data from the electromagnetic and hadronic calorimeters in order to identify electrons, photons, taus, jets and missing transverse energy. In this talk I will present the performance of the high-level calorimeter trigger in Run-2, noting the improvements that have been made in response to the challenges of operating at high luminosity.

  13. Production and properties of solidified high-level waste

    International Nuclear Information System (INIS)

    Brodersen, K.

    1980-08-01

    Available information on the production and properties of solidified high-level waste is presented. The review includes literature up to the end of 1979. The feasibility of production of various types of solidified high-level waste is investigated. The main emphasis is on borosilicate glass, but other options are also mentioned. The expected long-term behaviour of the materials is discussed on the basis of available results from laboratory experiments. Examples of the use of the information in safety analysis of disposal in salt formations are given. The work has been done on behalf of the Danish utilities' investigation of the possibilities of disposal of high-level waste in salt domes in Jutland. (author)

  14. DOUBLE SHELL TANK INTEGRITY PROJECT HIGH LEVEL WASTE CHEMISTRY OPTIMIZATION

    International Nuclear Information System (INIS)

    WASHENFELDER DJ

    2008-01-01

    The U.S. Department of Energy (DOE) Office of River Protection (ORP) has a continuing program of chemical optimization to better characterize the corrosion behavior of High-Level Waste (HLW). The DOE controls the chemistry of its HLW to minimize the propensity for localized corrosion, such as pitting, and stress corrosion cracking (SCC) in nitrate-containing solutions. By improving the control of localized corrosion and SCC, the ORP can increase the life of the Double-Shell Tank (DST) carbon steel structural components and reduce overall mission costs. The carbon steel tanks at the Hanford Site are critical to the mission of safely managing stored HLW until it can be treated for disposal. The DOE has historically used additions of sodium hydroxide to retard corrosion processes in HLW tanks. This also increases the amount of waste to be treated. Reactions with carbon dioxide from the air and with solid chemical species in the tank continually deplete the hydroxide ion concentration, which then requires continued additions. The DOE can reduce overall costs for caustic addition and treatment of waste, and more effectively utilize waste storage capacity, by minimizing these chemical additions. Hydroxide addition is a means to control localized corrosion and stress corrosion cracking in carbon steel by providing a passive environment. The exact mechanism by which nitrate drives the corrosion process is not yet clear. SCC is less of a concern in the newer stress-relieved double-shell tanks due to reduced residual stress. The optimization of waste chemistry will further reduce the propensity for SCC. The corrosion testing performed to optimize waste chemistry included cyclic potentiodynamic polarization studies, slow strain rate tests, and stress intensity factor/crack growth rate determinations. Laboratory experimental evidence suggests that nitrite is a highly effective inhibitor for pitting and SCC in alkaline nitrate environments. Revision of the corrosion control
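The hydroxide-depletion bookkeeping described above follows standard carbonation stoichiometry: absorbed carbon dioxide consumes hydroxide via CO2 + 2 OH- -> CO3(2-) + H2O, so holding a target free-hydroxide concentration requires periodic NaOH additions. The sketch below shows the 2:1 molar accounting; the tank volume and absorption amount are hypothetical values for illustration, not Hanford data.

```python
# Sketch of hydroxide depletion by absorbed CO2 (CO2 + 2 OH- -> CO3^2- + H2O).
# All numerical inputs are hypothetical; only the 2:1 stoichiometry is standard.

MW_NAOH = 40.0  # g/mol

def naoh_consumed_kg(co2_absorbed_mol):
    """NaOH mass equivalent to the hydroxide consumed (2 mol OH- per mol CO2)."""
    return 2 * co2_absorbed_mol * MW_NAOH / 1000.0

def hydroxide_after_absorption(conc_oh_molar, volume_L, co2_absorbed_mol):
    """Remaining free-hydroxide concentration after CO2 absorption."""
    oh_mol = conc_oh_molar * volume_L - 2 * co2_absorbed_mol
    return max(oh_mol, 0.0) / volume_L

# Hypothetical example: 1000 mol CO2 absorbed into 4,000,000 L at 0.01 M OH-.
remaining = hydroxide_after_absorption(0.01, 4_000_000, 1000)
```

The economic tension in the abstract is visible in this accounting: every mole of hydroxide restored by caustic addition is also a mole of sodium that must eventually be treated as waste, which is what motivates optimizing (rather than simply maximizing) the hydroxide level.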

  15. A Skeleton Based Programming Paradigm for Mobile Multi-Agents on Distributed Systems and Its Realization within the MAGDA Mobile Agents Platform

    OpenAIRE

    R. Aversa; B. Di Martino; N. Mazzocca; S. Venticinque

    2008-01-01

    Parallel programming effort can be reduced by using high-level constructs such as algorithmic skeletons. Within the MAGDA toolset, which supports programming and execution of mobile-agent-based distributed applications, we provide a skeleton-based parallel programming environment based on specialization of algorithmic skeleton Java interfaces and classes. Their implementation includes mobile agent features for execution on heterogeneous systems, such as clusters of WSs and PCs, and support reliab...
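The algorithmic-skeleton idea in this abstract can be shown with a minimal sketch. MAGDA itself specializes Java skeleton interfaces backed by mobile agents; the Python analogue below implements only the essential contract of a "farm" skeleton: the application programmer supplies a worker function, and the skeleton hides all parallel coordination.

```python
# Minimal Python analogue of a "farm" algorithmic skeleton (MAGDA realizes
# this pattern with Java interfaces and mobile agents; this is a sketch of
# the pattern itself, not the MAGDA API).

from concurrent.futures import ThreadPoolExecutor

def farm(worker, tasks, max_workers=4):
    """Farm skeleton: apply `worker` to each task in parallel, preserving order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(worker, tasks))

# The application programmer writes only the worker; the coordination
# (scheduling, collection, ordering) lives entirely inside the skeleton.
result = farm(lambda x: x * x, range(6))
```

The design point worth noting is that because the coordination code is isolated in `farm`, the same application logic can be retargeted, as MAGDA does with mobile agents, to a heterogeneous cluster without touching the worker.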

  16. High level radioactive waste management facility design criteria

    International Nuclear Information System (INIS)

    Sheikh, N.A.; Salaymeh, S.R.

    1993-01-01

    This paper discusses the engineering systems for the structural design of the Defense Waste Processing Facility (DWPF) at the Savannah River Site (SRS). At the DWPF, high-level radioactive liquids will be mixed with glass particles and heated in a melter. This molten glass will then be poured into stainless steel canisters, where it will harden. This process will transform the high-level waste into a more stable, manageable substance. This paper discusses the structural design requirements for this unique, one-of-a-kind facility, with special emphasis on the design criteria pertaining to earthquakes, wind and tornadoes, and flooding

  17. Development of technical information database for high level waste disposal

    International Nuclear Information System (INIS)

    Kudo, Koji; Takada, Susumu; Kawanishi, Motoi

    2005-01-01

    A conceptual design of the high-level waste disposal information database and the disposal technologies information database is explained. The high-level waste disposal information database contains information on technologies, waste, management and rules, R and D, each step of disposal site selection, characteristics of sites, demonstration of disposal technology, design of disposal sites, application for disposal permits, construction of disposal sites, operation, and closing. The construction of the disposal technologies information system and the geological disposal technologies information system is described, and a screen image of the geological disposal technologies information system is shown. Through this interface, users can perform both full-text and attribute-based retrieval. (S.Y.)

  18. High-Level Waste (HLW) Feed Process Control Strategy

    International Nuclear Information System (INIS)

    STAEHR, T.W.

    2000-01-01

    The primary purpose of this document is to describe the overall process control strategy for monitoring and controlling the functions associated with the Phase 1B high-level waste feed delivery. This document provides the basis for the process monitoring and control functions and requirements needed throughout the double-shell tank system during Phase 1 high-level waste feed delivery. This document is intended to be used by (1) the developers of the future Process Control Plan and (2) the developers of the monitoring and control system

  19. Final report on cermet high-level waste forms

    International Nuclear Information System (INIS)

    Kobisk, E.H.; Quinby, T.C.; Aaron, W.S.

    1981-08-01

    Cermets are being developed as an alternate method for the fixation of defense and commercial high level radioactive waste in a terminal disposal form. Following initial feasibility assessments of this waste form, consisting of ceramic particles dispersed in an iron-nickel base alloy, significantly improved processing methods were developed. The characterization of cermets has continued through property determinations on samples prepared by various methods from a variety of simulated and actual high-level wastes. This report describes the status of development of the cermet waste form as it has evolved since 1977. 6 tables, 18 figures

  20. Managing the high level waste nuclear regulatory commission licensing process

    International Nuclear Information System (INIS)

    Baskin, K.P.

    1992-01-01

    This paper reports that the process for obtaining Nuclear Regulatory Commission permits for a high-level waste storage facility is basically the same process commercial nuclear power plants followed to obtain construction permits and operating licenses for their facilities. Therefore, experience from licensing commercial reactors can be applied to a high-level waste facility, and proper management of the licensing process will be the key to a successful project. The management of the licensing process was categorized into four areas: responsibility, organization, communication, and documentation. Drawing on experience from nuclear power plant licensing and basic management principles, the management requirements for successfully accomplishing the project goals are discussed