WorldWideScience

Sample records for advanced scientific computing

  1. Advanced Scientific Computing Research Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles; Bell, Greg; Canon, Shane; Dart, Eli; Dattoria, Vince; Goodwin, Dave; Lee, Jason; Hicks, Susan; Holohan, Ed; Klasky, Scott; Lauzon, Carolyn; Rogers, Jim; Shipman, Galen; Skinner, David; Tierney, Brian

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  2. OPENING REMARKS: Scientific Discovery through Advanced Computing

    Science.gov (United States)

    Strayer, Michael

    2006-01-01

    Good morning. Welcome to SciDAC 2006 and Denver. I share greetings from the new Undersecretary for Energy, Ray Orbach. Five years ago SciDAC was launched as an experiment in computational science. The goal was to form partnerships among science applications, computer scientists, and applied mathematicians to take advantage of the potential of emerging terascale computers. This experiment has been a resounding success. SciDAC has emerged as a powerful concept for addressing some of the biggest challenges facing our world. As significant as these successes were, I believe there is also significance in the teams that achieved them. In addition to their scientific aims these teams have advanced the overall field of computational science and set the stage for even larger accomplishments as we look ahead to SciDAC-2. I am sure that many of you are expecting to hear about the results of our current solicitation for SciDAC-2. I’m afraid we are not quite ready to make that announcement. Decisions are still being made and we will announce the results later this summer. Nearly 250 unique proposals were received and evaluated, involving literally thousands of researchers, postdocs, and students. These collectively requested more than five times our expected budget. This response is a testament to the success of SciDAC in the community. In SciDAC-2 our budget has been increased to about 70 million for FY 2007 and our partnerships have expanded to include the Environment and National Security missions of the Department. The National Science Foundation has also joined as a partner. These new partnerships are expected to expand the application space of SciDAC, and broaden the impact and visibility of the program. We have, with our recent solicitation, expanded to turbulence, computational biology, and groundwater reactive modeling and simulation. We are currently talking with the Department’s applied energy programs about risk assessment, optimization of complex systems - such

  3. Computational Biology, Advanced Scientific Computing, and Emerging Computational Architectures

    Energy Technology Data Exchange (ETDEWEB)

    None

    2007-06-27

    This CRADA was established at the start of FY02 with $200 K from IBM and matching funds from DOE to support post-doctoral fellows in collaborative research between International Business Machines and Oak Ridge National Laboratory to explore effective use of emerging petascale computational architectures for the solution of computational biology problems. 'No cost' extensions of the CRADA were negotiated with IBM for FY03 and FY04.

  4. Advances in computing, and their impact on scientific computing.

    Science.gov (United States)

    Giles, Mike

    2002-01-01

    This paper begins by discussing the developments and trends in computer hardware, starting with the basic components (microprocessors, memory, disks, system interconnect, networking and visualization) before looking at complete systems (death of vector supercomputing, slow demise of large shared-memory systems, rapid growth in very large clusters of PCs). It then considers the software side, the relative maturity of shared-memory (OpenMP) and distributed-memory (MPI) programming environments, and new developments in 'grid computing'. Finally, it touches on the increasing importance of software packages in scientific computing, and the increased importance and difficulty of introducing good software engineering practices into very large academic software development projects. PMID:12539947
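
    The abstract contrasts shared-memory (OpenMP) and distributed-memory (MPI) programming environments. As a minimal sketch of the distributed-memory side, here is a parallel dot product written against the mpi4py binding; the binding, names, and problem size are illustrative assumptions, not code from the paper.

    ```python
    # Distributed-memory dot product: each MPI rank works on its own slice and
    # the partial results are combined with one collective reduction.
    # Illustrative sketch only; mpi4py is an assumption, not the paper's code.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 1_000_000                 # hypothetical global problem size
    chunk = n // size             # each rank owns a contiguous slice
    x = np.ones(chunk)
    y = np.full(chunk, 2.0)

    local_dot = float(x @ y)                            # purely local work
    global_dot = comm.allreduce(local_dot, op=MPI.SUM)  # one communication step

    if rank == 0:
        print(f"dot product across {size} ranks: {global_dot}")
    ```

    Run with, e.g., `mpiexec -n 4 python dot.py`; the same pattern underlies the very large PC clusters the paper describes.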

  5. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Subcommittee Report on Scientific and Technical Information

    Energy Technology Data Exchange (ETDEWEB)

    Hey, Tony [eScience Institute, University of Washington]; Agarwal, Deborah [Lawrence Berkeley National Laboratory]; Borgman, Christine [University of California, Los Angeles]; Cartaro, Concetta [SLAC National Accelerator Laboratory]; Crivelli, Silvia [Lawrence Berkeley National Laboratory]; Van Dam, Kerstin Kleese [Pacific Northwest National Laboratory]; Luce, Richard [University of Oklahoma]; Arjun, Shankar [CADES, Oak Ridge National Laboratory]; Trefethen, Anne [University of Oxford]; Wade, Alex [Microsoft Research, Microsoft Corporation]; Williams, Dean [Lawrence Livermore National Laboratory]

    2015-09-04

    The Advanced Scientific Computing Advisory Committee (ASCAC) was charged to form a standing subcommittee to review the Department of Energy’s Office of Scientific and Technical Information (OSTI), beginning with an assessment of the quality and effectiveness of OSTI’s recent and current products and services and commentary on its mission and future directions in the rapidly changing environment for scientific publication and data. The committee met with OSTI staff and reviewed available products, services, and other materials. This report summarizes their initial findings and recommendations.

  6. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Report: Exascale Computing Initiative Review

    Energy Technology Data Exchange (ETDEWEB)

    Reed, Daniel [University of Iowa]; Berzins, Martin [University of Utah]; Pennington, Robert; Sarkar, Vivek [Rice University]; Taylor, Valerie [Texas A&M University]

    2015-08-01

    On November 19, 2014, the Advanced Scientific Computing Advisory Committee (ASCAC) was charged with reviewing the Department of Energy’s conceptual design for the Exascale Computing Initiative (ECI). In particular, this included assessing whether there are significant gaps in the ECI plan or areas that need to be given priority or extra management attention. Given the breadth and depth of previous reviews of the technical challenges inherent in exascale system design and deployment, the subcommittee focused its assessment on organizational and management issues, considering technical issues only as they informed organizational or management priorities and structures. This report presents the observations and recommendations of the subcommittee.

  7. 77 FR 45345 - DOE/Advanced Scientific Computing Advisory Committee

    Science.gov (United States)

    2012-07-31

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY DOE... at (301) 903-7486 or email at: Melea.Baker@science.doe.gov . You must make your request for an oral... Computing Web site ( www.sc.doe.gov/ascr ) for viewing. Issued at Washington, DC, on July 25, 2012....

  8. 77 FR 62231 - DOE/Advanced Scientific Computing Advisory Committee

    Science.gov (United States)

    2012-10-12

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY DOE.... Computational Science Graduate Fellowship (CSGF) Longitudinal Study. Update on Exascale. Update from DOE data... contact Melea Baker, (301) 903-7486 or by email at: Melea.Baker@science.doe.gov . You must make...

  9. UNEDF: Advanced Scientific Computing Collaboration Transforms the Low-Energy Nuclear Many-Body Problem

    CERN Document Server

    Nam, H; Nazarewicz, W; Bulgac, A; Hagen, G; Kortelainen, M; Maris, P; Pei, J C; Roche, K J; Schunck, N; Thompson, I; Vary, J P; Wild, S M

    2012-01-01

    The demands of cutting-edge science are driving the need for larger and faster computing resources. With the rapidly growing scale of computing systems and the prospect of technologically disruptive architectures to meet these needs, scientists face the challenge of effectively using complex computational resources to advance scientific discovery. Multidisciplinary collaborating networks of researchers with diverse scientific backgrounds are needed to address these complex challenges. The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper describes UNEDF and identifies attributes that classify it as a successful computational collaboration. We illustrate significant milestones accomplished by UNEDF through integrative solutions using the most reliable theoretical approaches, most advanced algorithms, and leadershi...

  10. National facility for advanced computational science: A sustainable path to scientific discovery

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  11. DOE Advanced Scientific Computing Advisory Subcommittee (ASCAC) Report: Top Ten Exascale Research Challenges

    Energy Technology Data Exchange (ETDEWEB)

    Lucas, Robert [University of Southern California, Information Sciences Institute]; Ang, James [Sandia National Laboratories]; Bergman, Keren [Columbia University]; Borkar, Shekhar [Intel]; Carlson, William [Institute for Defense Analyses]; Carrington, Laura [University of California, San Diego]; Chiu, George [IBM]; Colwell, Robert [DARPA]; Dally, William [NVIDIA]; Dongarra, Jack [University of Tennessee]; Geist, Al [Oak Ridge National Laboratory]; Haring, Rud [IBM]; Hittinger, Jeffrey [Lawrence Livermore National Laboratory]; Hoisie, Adolfy [Pacific Northwest National Laboratory]; Klein, Dean [Micron]; Kogge, Peter [University of Notre Dame]; Lethin, Richard [Reservoir Labs]; Sarkar, Vivek [Rice University]; Schreiber, Robert [Hewlett Packard]; Shalf, John [Lawrence Berkeley National Laboratory]; Sterling, Thomas [Indiana University]; Stevens, Rick [Argonne National Laboratory]; Bashor, Jon [Lawrence Berkeley National Laboratory]; Brightwell, Ron [Sandia National Laboratories]; Coteus, Paul [IBM]; Debenedictus, Erik [Sandia National Laboratories]; Hiller, Jon [Science and Technology Associates]; Kim, K. H. [IBM]; Langston, Harper [Reservoir Labs]; Murphy, Richard [Micron]; Webster, Clayton [Oak Ridge National Laboratory]; Wild, Stefan [Argonne National Laboratory]; Grider, Gary [Los Alamos National Laboratory]; Ross, Rob [Argonne National Laboratory]; Leyffer, Sven [Argonne National Laboratory]; Laros III, James [Sandia National Laboratories]

    2014-02-10

    Exascale computing systems are essential for the scientific fields that will transform the 21st century global economy, including energy, biotechnology, nanotechnology, and materials science. Progress in these fields is predicated on the ability to perform advanced scientific and engineering simulations, and analyze the deluge of data. On July 29, 2013, ASCAC was charged by Patricia Dehmer, the Acting Director of the Office of Science, to assemble a subcommittee to provide advice on exascale computing. This subcommittee was directed to return a list of no more than ten technical approaches (hardware and software) that will enable the development of a system that achieves the Department's goals for exascale computing. Numerous reports over the past few years have documented the technical challenges and the non-viability of simply scaling existing computer designs to reach exascale. The technical challenges revolve around energy consumption, memory performance, resilience, extreme concurrency, and big data. Drawing from these reports and more recent experience, this ASCAC subcommittee has identified the top ten computing technology advancements that are critical to making a capable, economically viable, exascale system.

  12. Practical scientific computing

    CERN Document Server

    Muhammad, A

    2011-01-01

    Scientific computing is about developing mathematical models, numerical methods and computer implementations to study and solve real problems in science, engineering, business and even social sciences. Mathematical modelling requires deep understanding of classical numerical methods. This essential guide provides the reader with sufficient foundations in these areas to venture into more advanced texts. The first section of the book presents numEclipse, an open source tool for numerical computing based on the notion of MATLAB®. numEclipse is implemented as a plug-in for Eclipse, a leading integ

  13. Data management, code deployment, and scientific visualization to enhance scientific discovery in fusion research through advanced computing

    International Nuclear Information System (INIS)

    The long-term vision of the Fusion Collaboratory described in this paper is to transform fusion research and accelerate scientific understanding and innovation so as to revolutionize the design of a fusion energy source. The Collaboratory will create and deploy collaborative software tools that will enable more efficient utilization of existing experimental facilities and more effective integration of experiment, theory, and modeling. The computer science research necessary to create the Collaboratory is centered on three activities: security, remote and distributed computing, and scientific visualization. It is anticipated that the presently envisioned Fusion Collaboratory software tools will require 3 years to complete

  14. UNEDF: Advanced Scientific Computing Transforms the Low-Energy Nuclear Many-Body Problem

    CERN Document Server

    Stoitsov, M; Nazarewicz, W; Bulgac, A; Hagen, G; Kortelainen, M; Pei, J C; Roche, K J; Schunck, N; Thompson, I; Vary, J P; Wild, S M

    2011-01-01

    The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper illustrates significant milestones accomplished by UNEDF through integration of the theoretical approaches, advanced numerical algorithms, and leadership class computational resources.

  15. Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, Forest M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Bochev, Pavel B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Cameron-Smith, Philip J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Easter, Richard C [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]; Elliott, Scott M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]; Ghan, Steven J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]; Liu, Xiaohong [Univ. of Wyoming, Laramie, WY (United States)]; Lowrie, Robert B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]; Lucas, Donald D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Ma, Po-lun [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]; Sacks, William J. [National Center for Atmospheric Research (NCAR), Boulder, CO (United States)]; Shrivastava, Manish [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]; Singh, Balwinder [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]; Tautges, Timothy J. [Argonne National Lab. (ANL), Argonne, IL (United States)]; Taylor, Mark A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)]; Vertenstein, Mariana [National Center for Atmospheric Research (NCAR), Boulder, CO (United States)]; Worley, Patrick H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]

    2014-01-15

    The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of uncertainty, aerosols and biospheric feedbacks, with a highly efficient computational approach. In particular, this project is implementing and optimizing new computationally efficient tracer advection algorithms for large numbers of tracer species; adding important biogeochemical interactions between the atmosphere, land, and ocean models; and applying uncertainty quantification (UQ) techniques to constrain process parameters and evaluate uncertainties in feedbacks between biogeochemical cycles and the climate system.

  16. Scientific computer simulation review

    International Nuclear Information System (INIS)

    Before the results of a scientific computer simulation are used for any purpose, it should be determined if those results can be trusted. Answering that question of trust is the domain of scientific computer simulation review. There is limited literature that focuses on simulation review, and most is specific to the review of a particular type of simulation. This work is intended to provide a foundation for a common understanding of simulation review. This is accomplished through three contributions. First, scientific computer simulation review is formally defined. This definition identifies the scope of simulation review and provides the boundaries of the review process. Second, maturity assessment theory is developed. This development clarifies the concepts of maturity criteria, maturity assessment sets, and maturity assessment frameworks, which are essential for performing simulation review. Finally, simulation review is described as the application of a maturity assessment framework. This is illustrated through evaluating a simulation review performed by the U.S. Nuclear Regulatory Commission. In making these contributions, this work provides a means for a more objective assessment of a simulation’s trustworthiness and takes the next step in establishing scientific computer simulation review as its own field. - Highlights: • We define scientific computer simulation review. • We develop maturity assessment theory. • We formally define a maturity assessment framework. • We describe simulation review as the application of a maturity framework. • We provide an example of a simulation review using a maturity framework

  17. US Scientific Discovery through Advanced Computing (SciDAC) Program & Fusion Energy Science

    Institute of Scientific and Technical Information of China (English)

    W. Tang

    2007-01-01

    The development of a secure and reliable energy system that is environmentally and economically sustainable is a truly formidable scientific and technological challenge facing the world in the twenty-first century. This demands basic scientific understanding that can enable the innovations to make fusion energy practical.

  18. Combinatorial scientific computing

    CERN Document Server

    Naumann, Uwe

    2012-01-01

    Combinatorial Scientific Computing explores the latest research on creating algorithms and software tools to solve key combinatorial problems on large-scale high-performance computing architectures. It includes contributions from international researchers who are pioneers in designing software and applications for high-performance computing systems. The book offers a state-of-the-art overview of the latest research, tool development, and applications. It focuses on load balancing and parallelization on high-performance computers, large-scale optimization, algorithmic differentiation of numeric

  19. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D L

    2009-05-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex

  20. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    International Nuclear Information System (INIS)

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems

  1. Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation.

    Energy Technology Data Exchange (ETDEWEB)

    Saffer, Shelley (Sam) I.

    2014-12-01

    This is the final report of DOE award DE-SC0001132, Advanced Artificial Science: the development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation. This document describes the achievement of the project goals and the resulting research made possible by this award.

  2. Wavelets in scientific computing

    DEFF Research Database (Denmark)

    Nielsen, Ole Møller

    1998-01-01

    Wavelet analysis is a relatively new mathematical discipline which has generated much interest in both theoretical and applied mathematics over the past decade. Crucial to wavelets are their ability to analyze different parts of a function at different scales and the fact that they can represent polynomials up to a certain order exactly. As a consequence, functions with fast oscillations, or even discontinuities, in localized regions may be approximated well by a linear combination of relatively few wavelets. In comparison, a Fourier expansion must use many basis functions to approximate such a function well. These properties of wavelets have led to some very successful applications within the field of signal processing. This dissertation revolves around the role of wavelets in scientific computing and it falls into three parts: Part I gives an exposition of the theory of orthogonal, compactly...
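
    The sparsity claim above is easy to check numerically. A small sketch using the PyWavelets package (an assumption beyond the dissertation itself): a signal with a jump is reconstructed well from only the largest 5% of its wavelet coefficients.

    ```python
    # Compress a discontinuous signal by keeping only the largest 5% of its
    # wavelet coefficients (PyWavelets is an assumption, not the thesis code).
    import numpy as np
    import pywt

    t = np.linspace(0.0, 1.0, 1024)
    signal = np.sin(2 * np.pi * t) + np.where(t < 0.5, 0.0, 1.0)  # jump at t=0.5

    coeffs = pywt.wavedec(signal, 'db4', level=6)   # orthogonal Daubechies-4 basis
    flat, slices = pywt.coeffs_to_array(coeffs)

    thresh = np.quantile(np.abs(flat), 0.95)        # drop the smallest 95%
    flat[np.abs(flat) < thresh] = 0.0

    kept = pywt.array_to_coeffs(flat, slices, output_format='wavedec')
    approx = pywt.waverec(kept, 'db4')[: signal.size]
    print("relative L2 error:", np.linalg.norm(approx - signal) / np.linalg.norm(signal))
    ```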

  3. Parallel processing for scientific computations

    Science.gov (United States)

    Alkhatib, Hasan S.

    1995-01-01

    This project investigated the requirements for supporting distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations generally do not have the computing power to tackle complex scientific applications on their own, making them primarily useful for visualization, data reduction, and filtering. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.
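
    A single-machine analogue of the shared-memory view that a system like DICE presents to programs can be sketched with Python's multiprocessing.shared_memory (a hypothetical illustration, not DICE code; DICE itself spanned workstations on a LAN): separate processes write disjoint slices of one logical array.

    ```python
    # Two worker processes fill disjoint slices of one shared array -- the
    # programming view a distributed shared memory system exposes. Hypothetical
    # sketch confined to a single machine; not code from the project.
    from multiprocessing import Process
    from multiprocessing.shared_memory import SharedMemory
    import numpy as np

    N = 8

    def worker(name, lo, hi):
        shm = SharedMemory(name=name)                       # attach to segment
        arr = np.ndarray((N,), dtype=np.float64, buffer=shm.buf)
        arr[lo:hi] = np.arange(lo, hi) ** 2                 # write own slice
        shm.close()

    if __name__ == "__main__":
        shm = SharedMemory(create=True, size=N * 8)         # 8 float64 values
        arr = np.ndarray((N,), dtype=np.float64, buffer=shm.buf)
        arr[:] = 0.0
        ps = [Process(target=worker, args=(shm.name, i * N // 2, (i + 1) * N // 2))
              for i in range(2)]
        for p in ps: p.start()
        for p in ps: p.join()
        print(arr)                  # both halves written by separate processes
        shm.close(); shm.unlink()
    ```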

  4. Advanced Excel for scientific data analysis

    CERN Document Server

    De Levie, Robert

    2004-01-01

    Excel is by far the most widely distributed data analysis software but few users are aware of its full powers. Advanced Excel For Scientific Data Analysis takes off from where most books dealing with scientific applications of Excel end. It focuses on three areas-least squares, Fourier transformation, and digital simulation-and illustrates these with extensive examples, often taken from the literature. It also includes and describes a number of sample macros and functions to facilitate common data analysis tasks. These macros and functions are provided in uncompiled, computer-readable, easily
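
    The least-squares material the book builds in Excel reduces to a few lines in any numerical language; here is a sketch of the same straight-line fit in Python (illustrative data, not an example from the book).

    ```python
    # Fit y = a*x + b by minimizing the sum of squared residuals -- the core of
    # the book's least-squares chapters, sketched outside Excel for comparison.
    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])      # hypothetical measurements

    A = np.vstack([x, np.ones_like(x)]).T        # design matrix [x, 1]
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    print(f"slope = {a:.3f}, intercept = {b:.3f}")
    ```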

  5. Programming Languages for Scientific Computing

    OpenAIRE

    Knepley, Matthew G.

    2012-01-01

    Scientific computation is a discipline that combines numerical analysis, physical understanding, algorithm development, and structured programming. Several yottacycles per year on the world's largest computers are spent simulating problems as diverse as weather prediction, the properties of material composites, the behavior of biomolecules in solution, and the quantum nature of chemical compounds. This article is intended to review specific language features and their use in computational sci...

  6. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Damevski, Kostadin [Virginia State Univ., Petersburg, VA (United States)]

    2009-03-30

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  7. Advanced computations in plasma physics

    International Nuclear Information System (INIS)

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to

  8. Accelerating Scientific Computations using FPGAs

    Science.gov (United States)

    Pell, O.; Atasu, K.; Mencer, O.

    Field Programmable Gate Arrays (FPGAs) are semiconductor devices that contain a grid of programmable cells, which the user configures to implement any digital circuit of up to a few million gates. Modern FPGAs allow the user to reconfigure these circuits many times each second, making FPGAs fully programmable and general purpose. Recent FPGA technology provides sufficient resources to tackle scientific applications on large-scale parallel systems. As a case study, we implement the Fast Fourier Transform [1] in a flexible floating point implementation. We utilize A Stream Compiler [2] (ASC), which combines C++ syntax with flexible floating point support by providing a 'HWfloat' data-type. The resulting FFT can be targeted to a variety of FPGA platforms in FFTW-style, though not yet completely automatically. The resulting FFT circuit can be adapted to the particular resources available on the system. The optimal implementation of an FFT accelerator depends on the length and dimensionality of the FFT, the available FPGA area, the available hard DSP blocks, the FPGA board architecture, and the precision and range of the application [3]. Software-style object-orientated abstractions allow us to pursue an accelerated pace of development by maximizing re-use of design patterns. ASC allows a few core hardware descriptions to generate hundreds of different circuit variants to meet particular speed, area and precision goals. The key to achieving maximum acceleration of FFT computation is to match memory and compute bandwidths so that maximum use is made of computational resources. Modern FPGAs contain up to hundreds of independent SRAM banks to store intermediate results, providing ample scope for optimizing memory parallelism. At 175 MHz, one of Maxeler's Radix-4 FFT cores computes 4x as many 1024pt FFTs per second as a dual Pentium-IV Xeon machine running FFTW. Eight such parallel cores fit onto the largest FPGA in the Xilinx Virtex-4 family, providing a 32x speed-up over
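
    The butterfly structure that such an FPGA core pipelines in hardware is easiest to see in software. Below is a radix-2 decimation-in-time FFT, sketched in Python for illustration; the paper's cores are radix-4 and generated through ASC, so this is not that code.

    ```python
    # Recursive radix-2 decimation-in-time FFT for power-of-two lengths.
    # Software illustration of the butterfly datapath an FPGA core pipelines.
    import numpy as np

    def fft(x):
        n = len(x)
        if n == 1:
            return x
        even, odd = fft(x[0::2]), fft(x[1::2])           # split by parity
        w = np.exp(-2j * np.pi * np.arange(n // 2) / n)  # twiddle factors
        return np.concatenate([even + w * odd, even - w * odd])

    x = np.random.rand(1024).astype(complex)
    assert np.allclose(fft(x), np.fft.fft(x))            # matches reference FFT
    ```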

  9. Advances in computers

    CERN Document Server

    Memon, Atif

    2012-01-01

    Since its first volume in 1960, Advances in Computers has presented detailed coverage of innovations in computer hardware, software, theory, design, and applications. It has also provided contributors with a medium in which they can explore their subjects in greater depth and breadth than journal articles usually allow. As a result, many articles have become standard references that continue to be of significant, lasting value in this rapidly expanding field. In-depth surveys and tutorials on new computer technology; well-known authors and researchers in the field; extensive bibliographies with m

  10. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  11. Advances in Computers

    CERN Document Server

    Zelkowitz, Marvin

    2010-01-01

    This is volume 79 of Advances in Computers. This series, which began publication in 1960, is the oldest continuously published anthology that chronicles the ever- changing information technology field. In these volumes we publish from 5 to 7 chapters, three times per year, that cover the latest changes to the design, development, use and implications of computer technology on society today. Covers the full breadth of innovations in hardware, software, theory, design, and applications.Many of the in-depth reviews have become standard references that co

  12. Mastering scientific computing with R

    CERN Document Server

    Gerrard, Paul

    2015-01-01

    If you want to learn how to quantitatively answer scientific questions for practical purposes using the powerful R language and the open source R tool ecosystem, this book is ideal for you. It is ideally suited for scientists who understand scientific concepts, know a little R, and want to start applying R to answer empirical scientific questions. Some R exposure is helpful, but not compulsory.

  13. A Desktop Grid Computing Approach for Scientific Computing and Visualization

    OpenAIRE

    Constantinescu-Fuløp, Zoran

    2008-01-01

    Scientific Computing is the collection of tools, techniques, and theories required to solve on a computer, mathematical models of problems from science and engineering, and its main goal is to gain insight in such problems. Generally, it is difficult to understand or communicate information from complex or large datasets generated by Scientific Computing methods and techniques (computational simulations, complex experiments, observational instruments etc.). Therefore, support of Scientific Vi...

  14. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference was organized by the Hanoi Institute of Mathematics, the Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program "Complex Processes: Modeling, Simulation and Optimization", and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  15. Visual computing scientific visualization and imaging systems

    CERN Document Server

    2014-01-01

    This volume aims to stimulate discussions on research involving the use of data and digital images as an understanding approach for analysis and visualization of phenomena and experiments. The emphasis is put not only on graphically representing data as a way of increasing its visual analysis, but also on the imaging systems which contribute greatly to the comprehension of real cases. Scientific Visualization and Imaging Systems encompass multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology, Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented in the International Conference on Advanced Computational Engineering and Experimenting -ACE-X conferences 2010 (Paris), 2011 (Algarve), 2012 (Istanbul) and 2013 (Madrid). The examples were particularly chosen from materials research, medical applications, general concepts applied in simulations and image analysis and ot...

  16. International Conference on Advanced Computing for Innovation

    CERN Document Server

    Angelova, Galia; Agre, Gennady

    2016-01-01

    This volume is a selected collection of papers presented and discussed at the International Conference “Advanced Computing for Innovation (AComIn 2015)”. The conference was held on 10-11 November 2015 in Sofia, Bulgaria, and aimed at providing a forum for international scientific exchange between Central/Eastern Europe and the rest of the world on several fundamental topics of computational intelligence. The papers report innovative approaches and solutions in hot topics of computational intelligence – advanced computing, language and semantic technologies, signal and image processing, as well as optimization and intelligent control.

  17. Compute Canada: Advancing Computational Research

    International Nuclear Information System (INIS)

    High Performance Computing (HPC) is redefining the way that research is done. Compute Canada's HPC infrastructure provides a national platform that enables Canadian researchers to compete on an international scale, attracts top talent to Canadian universities and broadens the scope of research.

  18. FPS scientific and supercomputers computers in chemistry

    International Nuclear Information System (INIS)

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown

  19. Berkeley Lab Computing Sciences: Accelerating Scientific Discovery

    International Nuclear Information System (INIS)

    Scientists today rely on advances in computer science, mathematics, and computational science, as well as large-scale computing and networking facilities, to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab's Computing Sciences organization researches, develops, and deploys new tools and technologies to meet these needs and to advance research in such areas as global climate change, combustion, fusion energy, nanotechnology, biology, and astrophysics

  20. Berkeley Lab Computing Sciences: Accelerating Scientific Discovery

    OpenAIRE

    Hules, John A

    2009-01-01

    Scientists today rely on advances in computer science, mathematics, and computational science, as well as large-scale computing and networking facilities, to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab's Computing Sciences organization researches, develops, and deploys new tools and technologies to meet these needs and to advance research in such areas as global climate change, combustion, fusion energy, nanotechnology, biology, and astrophysics.

  1. Scientific Research in Computer Sciences

    Directory of Open Access Journals (Sweden)

    Arwa al-Yasiry

    2007-09-01

    This paper discusses the importance of selecting the research objective and the supervisor, and suggests research methods that help the researcher reach sound results efficiently. It also describes thesis writing style and organization so that the reader can readily grasp the nature and scale of the work. A key conclusion is that successful scientific research depends on many elements working together, and that omitting any one of these methods leaves the research unsuccessful.

  2. Computer-Assisted Scientific Workflow Design

    OpenAIRE

    Cerezo N.; Montagnat J.; Blay-Fornarino M.

    2013-01-01

    Workflows are increasingly adopted to describe large-scale data- and compute-intensive processes that can take advantage of today's Distributed Computing Infrastructures. Still, most Scientific Workflow formalisms are notoriously difficult to fully exploit, as they entangle the description of scientific processes and their implementation, blurring the lines between what is done and how it is done as well as between what is and what is not infrastructure-dependent. This work addresses the prob...

  3. Metadata Management in Scientific Computing

    CERN Document Server

    Seidel, Eric L

    2012-01-01

    Complex scientific codes and the datasets they generate are in need of a sophisticated categorization environment that allows the community to store, search, and enhance metadata in an open, dynamic system. Currently, data is often presented in a read-only format, distilled and curated by a select group of researchers. We envision a more open and dynamic system, where authors can publish their data in a writeable format, allowing users to annotate the datasets with their own comments and data. This would enable the scientific community to collaborate on a higher level than before, where researchers could for example annotate a published dataset with their citations. Such a system would require a complete set of permissions to ensure that any individual's data cannot be altered by others unless they specifically allow it. For this reason datasets and codes are generally presented read-only, to protect the author's data; however, this also prevents the type of social revolutions that the private sector has seen...

  4. InSAR Scientific Computing Environment

    Science.gov (United States)

    Gurrola, E. M.; Rosen, P. A.; Sacco, G.; Zebker, H. A.; Simons, M.; Sandwell, D. T.

    2010-12-01

    The InSAR Scientific Computing Environment (ISCE) is a software development effort in its second year within the NASA Advanced Information Systems and Technology program. The ISCE will provide a new computing environment for geodetic image processing for InSAR sensors that will enable scientists to reduce measurements directly from radar satellites and aircraft to new geophysical products without first requiring them to develop detailed expertise in radar processing methods. The environment can serve as the core of a centralized processing center to bring Level-0 raw radar data up to Level-3 data products, but is adaptable to alternative processing approaches for science users interested in new and different ways to exploit mission data. The NRC Decadal Survey-recommended DESDynI mission will deliver data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystem. The InSAR Scientific Computing Environment is planned to become a key element in processing DESDynI data into higher-level data products, and it is expected to enable a new class of analyses that take greater advantage of the long time and large spatial scales of these new data than current approaches do. At the core of ISCE are both legacy processing software from the JPL/Caltech ROI_PAC repeat-pass interferometry package and a new InSAR processing package containing more efficient and more accurate processing algorithms being developed at Stanford for this project, based on experience gained in developing processors for missions such as SRTM and UAVSAR. Around the core InSAR processing programs we are building object-oriented wrappers to enable their incorporation into a more modern, flexible, extensible software package that is informed by modern programming methods, including rigorous componentization of processing codes, abstraction and generalization of data models, and a robust, intuitive user interface with

  5. Numerical and symbolic scientific computing

    CERN Document Server

    Langer, Ulrich

    2011-01-01

    The book presents the state of the art and results and also includes articles pointing to future developments. Most of the articles center around the theme of linear partial differential equations. Major aspects are fast solvers in elastoplasticity, symbolic analysis for boundary problems, symbolic treatment of operators, computer algebra, and finite element methods, a symbolic approach to finite difference schemes, cylindrical algebraic decomposition and local Fourier analysis, and white noise analysis for stochastic partial differential equations. Further numerical-symbolic topics range from

  6. High-performance scientific computing

    CERN Document Server

    Berry, Michael W; Gallopoulos, Efstratios

    2012-01-01

    This book presents the state of the art in parallel numerical algorithms, applications, architectures, and system software. The book examines various solutions for issues of concurrency, scale, energy efficiency, and programmability, which are discussed in the context of a diverse range of applications. Features: includes contributions from an international selection of world-class authorities; examines parallel algorithm-architecture interaction through issues of computational capacity-based codesign and automatic restructuring of programs using compilation techniques; reviews emerging applic

  7. Advances and Challenges in Computational Plasma Science

    Energy Technology Data Exchange (ETDEWEB)

    W.M. Tang; V.S. Chan

    2005-01-03

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behavior. Recent advances in simulations of magnetically-confined plasmas are reviewed in this paper with illustrative examples chosen from associated research areas such as microturbulence, magnetohydrodynamics, and other topics. Progress has been stimulated in particular by the exponential growth of computer speed along with significant improvements in computer technology.

  8. Advances and Challenges in Computational Plasma Science

    International Nuclear Information System (INIS)

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behavior. Recent advances in simulations of magnetically-confined plasmas are reviewed in this paper with illustrative examples chosen from associated research areas such as microturbulence, magnetohydrodynamics, and other topics. Progress has been stimulated in particular by the exponential growth of computer speed along with significant improvements in computer technology

  9. Learning SciPy for numerical and scientific computing

    CERN Document Server

    Silva

    2013-01-01

    A step-by-step practical tutorial with plenty of examples on research-based problems from various areas of science that show how simple, yet effective, it is to provide solutions based on SciPy. This book is targeted at anyone with basic knowledge of Python, a somewhat advanced command of mathematics/physics, and an interest in engineering or scientific applications; this is broadly what we refer to as scientific computing. This book will be of critical importance to programmers and scientists who have basic Python knowledge and would like to be able to do scientific and numerical computatio
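
    The kind of task the book has in mind fits in a few lines. For instance, a sketch (not an example from the book) that integrates a Gaussian and minimizes a test function with SciPy:

    ```python
    # Two small pieces of "scientific computing" with SciPy: numerical
    # quadrature and scalar minimization. Illustrative only; not from the book.
    import numpy as np
    from scipy import integrate, optimize

    area, _ = integrate.quad(lambda t: np.exp(-t ** 2), -np.inf, np.inf)
    res = optimize.minimize_scalar(lambda t: (t - 2.0) ** 2 + 1.0)

    print(f"integral = {area:.6f}  (exact sqrt(pi) = {np.sqrt(np.pi):.6f})")
    print(f"minimum at t = {res.x:.6f}")
    ```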

  10. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly

  11. Advances in physiological computing

    CERN Document Server

    Fairclough, Stephen H

    2014-01-01

    This edited collection will provide an overview of the field of physiological computing, i.e. the use of physiological signals as input for computer control. It will cover a breadth of current research, from brain-computer interfaces to telemedicine.

  12. Pascal-SC a computer language for scientific computation

    CERN Document Server

    Bohlender, Gerd; von Gudenberg, Jürgen Wolff; Rheinboldt, Werner; Siewiorek, Daniel

    1987-01-01

    Perspectives in Computing, Vol. 17: Pascal-SC: A Computer Language for Scientific Computation focuses on the application of Pascal-SC, a programming language developed as an extension of standard Pascal, in scientific computation. The publication first elaborates on the introduction to Pascal-SC, a review of standard Pascal, and real floating-point arithmetic. Discussions focus on optimal scalar product, standard functions, real expressions, program structure, simple extensions, real floating-point arithmetic, vector and matrix arithmetic, and dynamic arrays. The text then examines functions a
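
    Pascal-SC's signature feature is the "optimal scalar product": a dot product accumulated exactly and rounded only once. A Python analogue of the idea (illustration only, not Pascal-SC code) uses math.fsum, which sums the already-rounded products exactly; Pascal-SC goes further and keeps the products themselves exact as well.

    ```python
    # Naive floating-point accumulation loses a term to cancellation; exact
    # summation (math.fsum) recovers it. Analogue of Pascal-SC's optimal
    # scalar product; illustrative, not Pascal-SC code.
    import math

    x = [1e16, 1.0, -1e16, 1.0]
    y = [1.0, 1.0, 1.0, 1.0]

    naive = sum(a * b for a, b in zip(x, y))        # 1.0: one term is absorbed
    exact = math.fsum(a * b for a, b in zip(x, y))  # 2.0: correctly rounded
    print(naive, exact)
    ```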

  13. Software Defects, Scientific Computation and the Scientific Method

    CERN Document Server

    CERN. Geneva

    2011-01-01

    Computation has rapidly grown in the last 50 years so that in many scientific areas it is the dominant partner in the practice of science. Unfortunately, unlike the experimental sciences, it does not adhere well to the principles of the scientific method as espoused by, for example, the philosopher Karl Popper. Such principles are built around the notions of deniability and reproducibility. Although much research effort has been spent on measuring the density of software defects, much less has been spent on the more difficult problem of measuring their effect on the output of a program. This talk explores these issues with numerous examples suggesting how this situation might be improved to match the demands of modern science. Finally it develops a theoretical model based on an amalgam of statistical mechanics and Hartley/Shannon information theory which suggests that software systems have strong implementation independent behaviour and supports the widely observed phenomenon that defects clust...

  14. Advances and challenges in computational plasma science

    International Nuclear Information System (INIS)

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This

  15. Computation as a Scientific Weltanschauung (Invited Talk)

    OpenAIRE

    Papadimitriou, Christos H.

    2016-01-01

    Computation as a mechanical reality is young - almost exactly seventy years of age - and yet the spirit of computation can be traced several millennia back. Any moderately advanced civilization depends on calculation (for inventory, taxation, navigation, land partition, among many others) - our civilization is the first one that is conscious of this reliance. Computation has also been central to science for centuries. This is most immediately apparent in the case of mathematics: the id...

  16. The 9-th INS scientific computational programs

    International Nuclear Information System (INIS)

    Twelve groups were accepted into the 9-th INS Scientific Computational Programs (ISCP). The ISCP groups could use the full resources of the INS central computer without operational limitations. The 9-th ISCP started on June 1, 1992 and ended on March 30, 1993. Some results have already been published in journals or been presented at physics meetings. This report summarizes the usage statistics of the 9-th ISCP together with the reports submitted by the groups that carried it out. (author)

  17. Exploring HPCS languages in scientific computing

    International Nuclear Information System (INIS)

    As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and become increasingly heterogeneous, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else.

  18. CRISP (Computer Retrieval of Information on Scientific Projects)

    Science.gov (United States)

    CRISP (Computer Retrieval of Information on Scientific Projects) is a biomedical database system containing information on research projects and programs supported by the Department of Health and Human Services. Most of the research falls within the broad category of extramural p...

  19. PARA'04, State-of-the-art in scientific computing

    DEFF Research Database (Denmark)

    Madsen, Kaj; Wasniewski, Jerzy

    This meeting in the series, the PARA'04 Workshop with the title "State of the Art in Scientific Computing", was held in Lyngby, Denmark, June 20-23, 2004. The PARA'04 Workshop was organized by Jack Dongarra from the University of Tennessee and Oak Ridge National Laboratory, and Kaj Madsen and Jerzy Waśniewski from the Technical University of Denmark. The emphasis here was shifted to High-Performance Computing (HPC). The ongoing development of ever more advanced computers provides the potential for solving increasingly difficult computational problems. However, given the complexity of modern computer architectures, the task of realizing this potential needs careful attention. For example, the failure to exploit a computer's memory hierarchy can degrade performance badly. A main concern of HPC is the development of software that optimizes the performance of a given computer...
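
    To make the memory-hierarchy remark concrete, here is a small illustration of our own (not from the workshop): traversing the same NumPy array by contiguous rows versus strided columns typically differs noticeably in runtime because of cache behavior.

        import time

        import numpy as np

        a = np.random.rand(4000, 4000)  # row-major (C order) by default

        def timed_sum(view):
            # Sum the array one vector at a time, in the order the view exposes.
            t0 = time.perf_counter()
            s = 0.0
            for vec in view:
                s += vec.sum()
            return s, time.perf_counter() - t0

        _, t_rows = timed_sum(a)    # contiguous rows: cache friendly
        _, t_cols = timed_sum(a.T)  # strided columns: cache unfriendly
        print(f"rows: {t_rows:.3f}s  columns: {t_cols:.3f}s")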

  20. Scientific Computing Using Consumer Video-Gaming Hardware Devices

    CERN Document Server

    Volkema, Glenn

    2016-01-01

    The performance of commodity video-gaming hardware (consoles, graphics cards, tablets, etc.) has been advancing at a rapid pace owing to strong consumer demand and stiff market competition. Gaming hardware devices are currently amongst the most powerful and cost-effective computational technologies available in quantity. In this article, we evaluate a sample of current-generation video-gaming hardware devices for scientific computing and compare their performance with specialized supercomputing general-purpose graphics processing units (GPGPUs). For this evaluation we use the OpenCL SHOC benchmark suite, which measures the performance of compute hardware on various scientific application kernels, as well as a popular public distributed-computing application, Einstein@Home, from the field of gravitational physics.
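
    As a hedged illustration of the kind of OpenCL code such benchmarks exercise (this is not part of the SHOC suite itself), the sketch below runs a simple vector-add kernel through the pyopencl bindings; it assumes pyopencl is installed and an OpenCL device is available.

        import numpy as np
        import pyopencl as cl

        src = """
        __kernel void vadd(__global const float *a,
                           __global const float *b,
                           __global float *c) {
            int i = get_global_id(0);
            c[i] = a[i] + b[i];
        }
        """

        ctx = cl.create_some_context()
        queue = cl.CommandQueue(ctx)
        prg = cl.Program(ctx, src).build()

        n = 1 << 20
        a = np.random.rand(n).astype(np.float32)
        b = np.random.rand(n).astype(np.float32)
        c = np.empty_like(a)

        mf = cl.mem_flags
        a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
        b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
        c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, c.nbytes)

        prg.vadd(queue, (n,), None, a_buf, b_buf, c_buf)  # one work-item per element
        cl.enqueue_copy(queue, c, c_buf)
        assert np.allclose(c, a + b)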

  1. Advanced Test Reactor National Scientific User Facility

    Energy Technology Data Exchange (ETDEWEB)

    Frances M. Marshall; Jeff Benson; Mary Catherine Thelen

    2011-08-01

    The Advanced Test Reactor (ATR), at the Idaho National Laboratory (INL), is a large test reactor providing the capability to study the effects of intense neutron and gamma radiation on reactor materials and fuels. The ATR is a pressurized, light-water, high-flux test reactor with a maximum operating power of 250 MWth. The INL also has several hot cells and other laboratories in which irradiated material can be examined to study material irradiation effects. In 2007 the US Department of Energy (DOE) designated the ATR as a National Scientific User Facility (NSUF) to facilitate greater access to the ATR and the associated INL laboratories for material testing research by a broader user community. This paper highlights the ATR NSUF research program and the associated educational initiatives.

  2. Advanced pixel architectures for scientific image sensors

    CERN Document Server

    Coath, R; Godbeer, A; Wilson, M; Turchetta, R

    2009-01-01

    We present recent developments from two projects targeting advanced pixel architectures for scientific applications. Results are reported from FORTIS, a sensor demonstrating variants on a 4T pixel architecture. The variants include differences in pixel and diode size, the in-pixel source follower transistor size and the capacitance of the readout node to optimise for low noise and sensitivity to small amounts of charge. Results are also reported from TPAC, a complex pixel architecture with ~160 transistors per pixel. Both sensors were manufactured in the 0.18μm INMAPS process, which includes a special deep p-well layer and fabrication on a high resistivity epitaxial layer for improved charge collection efficiency.

  3. Advanced Test Reactor National Scientific User Facility

    International Nuclear Information System (INIS)

    The Advanced Test Reactor (ATR), at the Idaho National Laboratory (INL), is a large test reactor providing the capability to study the effects of intense neutron and gamma radiation on reactor materials and fuels. The ATR is a pressurized, light-water, high-flux test reactor with a maximum operating power of 250 MWth. The INL also has several hot cells and other laboratories in which irradiated material can be examined to study material irradiation effects. In 2007 the US Department of Energy (DOE) designated the ATR as a National Scientific User Facility (NSUF) to facilitate greater access to the ATR and the associated INL laboratories for material testing research by a broader user community. This paper highlights the ATR NSUF research program and the associated educational initiatives.

  4. Institute for Scientific Computing Research Annual Report: Fiscal Year 2004

    Energy Technology Data Exchange (ETDEWEB)

    Keyes, D E

    2005-02-07

    Large-scale scientific computation and all of the disciplines that support and help to validate it have been placed at the focus of Lawrence Livermore National Laboratory (LLNL) by the Advanced Simulation and Computing (ASC) program of the National Nuclear Security Administration (NNSA) and the Scientific Discovery through Advanced Computing (SciDAC) initiative of the Office of Science of the Department of Energy (DOE). The maturation of computational simulation as a tool of scientific and engineering research is underscored in the November 2004 statement of the Secretary of Energy that "high performance computing is the backbone of the nation's science and technology enterprise". LLNL operates several of the world's most powerful computers, including today's single most powerful, and has undertaken some of the largest and most compute-intensive simulations ever performed. Ultrascale simulation has been identified as one of the highest priorities in DOE's facilities planning for the next two decades. However, computers at architectural extremes are notoriously difficult to use efficiently. Furthermore, each successful terascale simulation only points out the need for much better ways of interacting with the resulting avalanche of data. Advances in scientific computing research have, therefore, never been more vital to LLNL's core missions than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, LLNL must engage researchers at many academic centers of excellence. In Fiscal Year 2004, the Institute for Scientific Computing Research (ISCR) served as one of LLNL's main bridges to the academic community with a program of collaborative subcontracts, visiting faculty, student internships, workshops, and an active seminar series. The ISCR identifies researchers from the academic community for computer science and computational science...

  5. Scientific Computing Kernels on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
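
    For readers unfamiliar with these kernels, the stencil is the simplest to show. The sketch below is ours (NumPy, rather than Cell intrinsics) and performs one Jacobi sweep of the 2D 5-point stencil that such studies typically benchmark.

        import numpy as np

        def jacobi_step(u):
            """One sweep of the 2D 5-point stencil: each interior point becomes
            the average of its four neighbors; boundary values are held fixed."""
            v = u.copy()
            v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:])
            return v

        u = np.zeros((256, 256))
        u[0, :] = 1.0          # a "hot" top boundary
        for _ in range(100):   # relax toward the steady-state field
            u = jacobi_step(u)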

  6. National Laboratory for Advanced Scientific Visualization at UNAM - Mexico

    Science.gov (United States)

    Manea, Marina; Constantin Manea, Vlad; Varela, Alfredo

    2016-04-01

    In 2015, the National Autonomous University of Mexico (UNAM) joined the family of universities and research centers where advanced visualization and computing play a key role in promoting and advancing missions in research, education, community outreach, and business-oriented consulting. This initiative provides access to a great variety of advanced hardware and software resources and offers a range of consulting services spanning areas related to scientific visualization, among which are: neuroanatomy, embryonic development, genome-related studies, geosciences, geography, and physics- and mathematics-related disciplines. The National Laboratory for Advanced Scientific Visualization delivers services through three main infrastructure environments: the fully immersive 3D display system Cave, the high-resolution parallel visualization system Powerwall, and the high-resolution spherical display Earth Simulator. The entire visualization infrastructure is interconnected to a high-performance computing cluster (HPCC) called ADA, in honor of Ada Lovelace, considered to be the first computer programmer. The Cave is an extra-large room, 3.6 m wide, with images projected on the front, left, and right walls, as well as on the floor. Specialized crystal-eyes LCD-shutter glasses provide strong stereo depth perception, and a variety of tracking devices allow software to track the position of a user's hand, head, and wand. The Powerwall is designed to bring large amounts of complex data together through parallel computing for team interaction and collaboration. This system is composed of 24 (6x4) high-resolution, ultra-thin-bezel (2 mm) monitors connected to a high-performance GPU cluster. The Earth Simulator is a large (60") high-resolution spherical display used for global-scale data visualization of geophysical, meteorological, climate, and ecology data. The HPCC ADA is a system of more than 1000 computing cores, which offers parallel computing resources to applications that require...

  7. Center for Technology for Advanced Scientific Component Software (TASCS) Consolidated Progress Report July 2006 - March 2009

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; McInnes, L C; Govindaraju, M; Bramley, R; Epperly, T; Kohl, J A; Nieplocha, J; Armstrong, R; Shasharina, S; Sussman, A L; Sottile, M; Damevski, K

    2009-04-14

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  8. Recent Advances in Evolutionary Computation

    Institute of Scientific and Technical Information of China (English)

    Xin Yao; Yong Xu

    2006-01-01

    Evolutionary computation has experienced a tremendous growth in the last decade in both theoretical analyses and industrial applications. Its scope has evolved beyond its original meaning of "biological evolution" toward a wide variety of nature inspired computational algorithms and techniques, including evolutionary, neural, ecological, social and economical computation, etc., in a unified framework. Many research topics in evolutionary computation nowadays are not necessarily "evolutionary". This paper provides an overview of some recent advances in evolutionary computation that have been made in CERCIA at the University of Birmingham, UK. It covers a wide range of topics in optimization, learning and design using evolutionary approaches and techniques, and theoretical results in the computational time complexity of evolutionary algorithms. Some issues related to future development of evolutionary computation are also discussed.
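
    As a generic illustration of the paradigm (our sketch, not a CERCIA result): the (1+1) evolutionary algorithm on the OneMax problem is the setting in which many of the runtime-complexity results alluded to above are proved.

        import random

        def one_plus_one_ea(n=64, generations=5000):
            """(1+1)-EA on OneMax: flip each bit with probability 1/n and keep
            the child if it is at least as fit; the expected time to reach the
            optimum is O(n log n) generations."""
            parent = [random.randint(0, 1) for _ in range(n)]
            for _ in range(generations):
                child = [1 - b if random.random() < 1.0 / n else b for b in parent]
                if sum(child) >= sum(parent):  # fitness = number of ones
                    parent = child
            return sum(parent)

        print(one_plus_one_ea(), "/ 64")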

  9. Scientific Visualization and Computational Science: Natural Partners

    Science.gov (United States)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization is developing rapidly, stimulated by computational science, which is gaining acceptance as a third mode of inquiry alongside theory and experiment. Computational science is based on numerical simulations of mathematical models derived from theory. But each individual simulation is like a hypothetical experiment; initial conditions are specified, and the result is a record of the observed conditions. Experiments can be simulated for situations that cannot really be created or controlled. Results impossible to measure can be computed. Even for observable values, computed samples are typically much denser. Numerical simulations also extend scientific exploration where the mathematics is analytically intractable. Numerical simulations are used to study phenomena from subatomic to intergalactic scales and from abstract mathematical structures to pragmatic engineering of everyday objects. But computational science methods would be almost useless without visualization. The obvious reason is that the huge amounts of data produced require the high bandwidth of the human visual system, and interactivity adds to the power. Visualization systems also provide a single context for all the activities involved, from debugging the simulations, to exploring the data, to communicating the results. Most of the presentations today have their roots in image processing, where the fundamental task is: given an image, extract information about the scene. Visualization has developed from computer graphics, and the inverse task: given a scene description, make an image. Visualization extends the graphics paradigm by expanding the possible input. The goal is still to produce images; the difficulty is that the input is not a scene description displayable by standard graphics methods. Visualization techniques must either transform the data into a scene description or extend graphics techniques to display this odd input. Computational science is a fertile field for visualization...

  10. Scientifically advanced solutions for chestnut ink disease.

    Science.gov (United States)

    Choupina, Altino Branco; Estevinho, Letícia; Martins, Ivone M

    2014-05-01

    In the northern regions of Portugal and Spain, the Castanea sativa Mill. culture is extremely important. The biggest break in productivity and yield occurs due to ink disease, whose causal agent is the oomycete Phytophthora cinnamomi. This oomycete is also responsible for the decline of many other plant species in Europe and worldwide. P. cinnamomi and Phytophthora cambivora are generally considered the causal agents of C. sativa ink disease. Most Phytophthora species secrete large amounts of elicitins, a group of unique, highly conserved proteins that are able to induce a hypersensitive response (HR) and enhance plant defense responses, in a systemic acquired resistance (SAR) manner, against infection by different pathogens. Some other proteins involved in mechanisms of infection by P. cinnamomi were identified by our group: endo-1,3-beta-glucanase (complete cds) and exo-glucanase (partial cds), responsible for adhesion, penetration, and colonization of host tissues; glucanase inhibitor protein (GIP) (complete cds), responsible for the suppression of host defense responses; necrosis-inducing Phytophthora protein 1 (NPP1) (partial cds); and transglutaminase (partial cds), which induces defense responses and disease-like symptoms. In this mini-review, we present some scientifically advanced solutions that can contribute to the resolution of ink disease. PMID:24622889

  11. Enabling Computational Technologies for Terascale Scientific Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, S.F.

    2000-08-24

    We develop scalable algorithms and object-oriented code frameworks for terascale scientific simulations on massively parallel processors (MPPs). Our research in multigrid-based linear solvers and adaptive mesh refinement enables Laboratory programs to use MPPs to explore important physical phenomena. For example, our research aids stockpile stewardship by making practical detailed 3D simulations of radiation transport. The need to solve large linear systems arises in many applications, including radiation transport, structural dynamics, combustion, and flow in porous media. These systems result from discretizations of partial differential equations on computational meshes. Our first research objective is to develop multigrid preconditioned iterative methods for such problems and to demonstrate their scalability on MPPs. Scalability describes how total computational work grows with problem size; it measures how effectively additional resources can help solve increasingly larger problems. Many factors contribute to scalability: computer architecture, parallel implementation, and choice of algorithm. Scalable algorithms have been shown to decrease simulation times by several orders of magnitude.
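
    A toy version of the idea, hedged: the SciPy sketch below solves a sparse symmetric positive-definite system with conjugate gradients, using a simple Jacobi (diagonal) preconditioner as a stand-in for the multigrid preconditioners the report develops.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 10_000
        # A 1D finite-difference Laplacian plus a varying positive diagonal.
        lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
        A = (lap + sp.diags(np.linspace(1.0, 100.0, n))).tocsr()
        b = np.ones(n)

        # Jacobi preconditioner: multiply by the inverse of diag(A).
        inv_diag = 1.0 / A.diagonal()
        M = spla.LinearOperator((n, n), matvec=lambda v: inv_diag * v)

        x, info = spla.cg(A, b, M=M)
        print(info, np.linalg.norm(A @ x - b))  # info == 0 signals convergence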

  12. Scientific computing with MATLAB and Octave

    CERN Document Server

    Quarteroni, Alfio; Gervasio, Paola

    2014-01-01

    This textbook is an introduction to Scientific Computing, in which several numerical methods for the computer-based solution of certain classes of mathematical problems are illustrated. The authors show how to compute the zeros, the extrema, and the integrals of continuous functions, solve linear systems, approximate functions using polynomials and construct accurate approximations for the solution of ordinary and partial differential equations. To make the format concrete and appealing, the programming environments Matlab and Octave are adopted as faithful companions. The book contains the solutions to several problems posed in exercises and examples, often originating from important applications. At the end of each chapter, a specific section is devoted to subjects which were not addressed in the book and contains bibliographical references for a more comprehensive treatment of the material. From the review: ".... This carefully written textbook, the third English edition, contains substantial new developme...

  13. Computational photography: advances and challenges

    OpenAIRE

    Lam, EYM

    2011-01-01

    In the mid-1990s when digital photography began to enter the consumer market, Professor Joseph Goodman and I set out to explore how computation would impact the imaging system design. The field of study has since grown to be known as computational photography. In this paper I'll describe some of its recent advances and challenges, and discuss what the future holds. © 2011 Copyright Society of Photo-Optical Instrumentation Engineers (SPIE).

  14. Designing Scientific Software for Heterogeneous Computing

    DEFF Research Database (Denmark)

    Glimberg, Stefan Lemvig

    ... algorithms and data structures must be designed to utilize the underlying parallel architecture. The architectural changes in hardware design within the last decade, from single-core to multi- and many-core architectures, require software developers to identify and properly implement methods that both exploit concurrency and maintain numerical efficiency. Graphics Processing Units (GPUs) have proven to be very effective units for computing the solution of scientific problems described by partial differential equations (PDEs). GPUs have today become standard devices in portable, desktop, and supercomputers, which makes parallel software design applicable, but also a challenge, for scientific software developers at all levels. We have developed a generic C++ library for fast prototyping of large-scale PDE solvers based on flexible-order finite difference approximations on structured regular grids. The library...

  15. Scientific Computing in the CH Programming Language

    Directory of Open Access Journals (Sweden)

    Harry H. Cheng

    1993-01-01

    We have developed a general-purpose block-structured interpretive programming language. The syntax and semantics of this language, called CH, are similar to C. CH retains most features of C from the scientific computing point of view. In this paper, the extension of C to CH for numerical computation of real numbers is described. Metanumbers −0.0, 0.0, Inf, −Inf, and NaN are introduced in CH. Through these metanumbers, the power of the IEEE 754 arithmetic standard is easily available to the programmer. These metanumbers are extended to commonly used mathematical functions in the spirit of the IEEE 754 standard and ANSI C. The rules for manipulating these metanumbers in I/O; in arithmetic, relational, and logic operations; and in built-in polymorphic mathematical functions are defined. The capabilities of bitwise, assignment, address and indirection, increment and decrement, and type-conversion operations in ANSI C are extended in CH. This paper mainly describes the new linguistic features of CH in comparison to C. Example programs written in CH with metanumbers and polymorphic mathematical functions demonstrate the capabilities of CH in scientific computing.
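
    The IEEE 754 "metanumbers" CH exposes behave the same way in any compliant environment; a quick demonstration in Python (ours, for illustration):

        import math

        inf, nan = float("inf"), float("nan")

        print(1.0 / inf)                 # 0.0: finite / Inf gives zero
        print(inf - inf)                 # nan: Inf - Inf has no defined value
        print(nan == nan)                # False: NaN compares unequal to everything
        print(-0.0 == 0.0)               # True, although the sign bits differ
        print(math.copysign(1.0, -0.0))  # -1.0: copysign reveals the hidden sign
        print(math.atan2(0.0, -1.0),     # pi ...
              math.atan2(-0.0, -1.0))    # ... versus -pi: -0.0 selects the branch
        # Unlike CH or C, Python raises ZeroDivisionError for 1.0 / 0.0
        # rather than returning Inf.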

  16. Educational NASA Computational and Scientific Studies (enCOMPASS)

    Science.gov (United States)

    Memarsadeghi, Nargess

    2013-01-01

    Educational NASA Computational and Scientific Studies (enCOMPASS) is an educational project of NASA Goddard Space Flight Center aimed at bridging the gap between the computational objectives and needs of NASA's scientific research, missions, and projects, and academia's latest advances in applied mathematics and computer science. enCOMPASS achieves this goal via bidirectional collaboration and communication between NASA and academia. Using developed NASA Computational Case Studies in university computer science/engineering and applied mathematics classes is a way of addressing NASA's goal of contributing to the Science, Technology, Engineering, and Math (STEM) National Objective. The enCOMPASS Web site at http://encompass.gsfc.nasa.gov provides additional information. There are currently nine enCOMPASS case studies developed in the areas of earth sciences, planetary sciences, and astrophysics. Some of these case studies have been published in AIP and IEEE's Computing in Science and Engineering magazines. A few university professors have used enCOMPASS case studies in their computational classes and contributed their findings to NASA scientists. In these case studies, after introducing the science area, the specific problem, and related NASA missions, students are first asked to solve a known problem using NASA data and past approaches, often published in a scientific/research paper. Then, after learning about the NASA application and related computational tools and approaches for solving the proposed problem, students are given a harder problem as a challenge to research and develop solutions for. This project provides a model for NASA scientists and engineers on one side, and university students, faculty, and researchers in computer science and applied mathematics on the other side, to learn from each other's areas of work, computational needs and solutions, and the latest advances in research and development. This innovation takes NASA science and...

  17. PISCES: An environment for parallel scientific computation

    Science.gov (United States)

    Pratt, T. W.

    1985-01-01

    The parallel implementation of scientific computing environment (PISCES) is a project to provide high-level programming environments for parallel MIMD computers. Pisces 1, the first of these environments, is a FORTRAN 77 based environment which runs under the UNIX operating system. The Pisces 1 user programs in Pisces FORTRAN, an extension of FORTRAN 77 for parallel processing. The major emphasis in the Pisces 1 design is in providing a carefully specified virtual machine that defines the run-time environment within which Pisces FORTRAN programs are executed. Each implementation then provides the same virtual machine, regardless of differences in the underlying architecture. The design is intended to be portable to a variety of architectures. Currently Pisces 1 is implemented on a network of Apollo workstations and on a DEC VAX uniprocessor via simulation of the task level parallelism. An implementation for the Flexible Computing Corp. FLEX/32 is under construction. An introduction to the Pisces 1 virtual computer and the FORTRAN 77 extensions is presented. An example of an algorithm for the iterative solution of a system of equations is given. The most notable features of the design are the provision for several granularities of parallelism in programs and the provision of a window mechanism for distributed access to large arrays of data.
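
    The abstract mentions an iterative linear-solver example; as a rough, language-shifted sketch (Python rather than Pisces FORTRAN), Jacobi iteration is the classic such algorithm, and every component update is independent, which is exactly the kind of parallelism such environments target.

        import numpy as np

        def jacobi_solve(A, b, tol=1e-10, max_iter=10_000):
            """Jacobi iteration for Ax = b; converges when A is strictly
            diagonally dominant. Each component update is independent of the
            others, so one sweep parallelizes naturally."""
            d = np.diag(A)
            R = A - np.diagflat(d)
            x = np.zeros_like(b)
            for _ in range(max_iter):
                x_new = (b - R @ x) / d
                if np.max(np.abs(x_new - x)) < tol:
                    return x_new
                x = x_new
            return x

        A = np.array([[4.0, 1.0, 0.0],
                      [1.0, 4.0, 1.0],
                      [0.0, 1.0, 4.0]])
        print(jacobi_solve(A, np.array([1.0, 2.0, 3.0])))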

  18. Recent advances in computational optimization

    CERN Document Server

    2013-01-01

    Optimization is part of our everyday life. We try to organize our work in a better way, and optimization occurs in minimizing time and cost or in maximizing profit, quality, and efficiency. Many real-world problems arising in engineering, economics, medicine, and other domains can also be formulated as optimization tasks. This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization, presenting recent advances in the field. The volume includes important real-world problems such as parameter settings for controlling processes in a bioreactor, robot skin wiring, strip packing, project scheduling, and tuning of a PID controller. Some of them can be solved by applying traditional numerical methods, but others require a huge amount of computational resources. For these, it is shown that it is appropriate to develop algorithms based on metaheuristic methods such as evolutionary computation, ant colony optimization, constraint programming, etc...

  19. Blueprint and First Experiences Bridging Hardware Virtualization and Global Grids for Advanced Scientific Computing: Designing and Building a Global Edge Services Framework (ESF) for OSG, EGEE, and LCG

    CERN Document Server

    Rana, A S; Vaniachine, A; Wurthwein, F; Foster, I; Sotomayor, B; Freeman, T

    2006-01-01

    We report on first experiences with building and operating an edge services framework (ESF) based on Xen virtual machines instantiated via the workspace service in Globus toolkit, and developed as a joint project between EGEE, LCG, and OSG. Many computing facilities are architected with their compute and storage clusters behind firewalls. Edge services (ES) are instantiated on a small set of gateways to provide access to these clusters via standard grid interfaces. Experience on EGEE, LCG, and OSG has shown that at least two issues are of critical importance when designing an infrastructure in support of ES. The first concerns ES configuration. It is impractical to assume that each virtual organization (VO) using a facility will employ the same ES configuration, or that different configurations will coexist easily. Even within a VO, it should be possible to run different versions of the same ES simultaneously. The second issue concerns resource allocation: it is essential that an ESF be able to effectively gu...

  20. International Conference on Advanced Computing

    CERN Document Server

    Patnaik, Srikanta

    2014-01-01

    This book is composed of the Proceedings of the International Conference on Advanced Computing, Networking, and Informatics (ICACNI 2013), held at Central Institute of Technology, Raipur, Chhattisgarh, India during June 14–16, 2013. The book records current research articles in the domain of computing, networking, and informatics. The book presents original research articles, case-studies, as well as review articles in the said field of study with emphasis on their implementation and practical application. Researchers, academicians, practitioners, and industry policy makers around the globe have contributed towards formation of this book with their valuable research submissions.

  1. Final Report for "Center for Technology for Advanced Scientific Component Software"

    Energy Technology Data Exchange (ETDEWEB)

    Svetlana Shasharina

    2010-12-01

    The goal of the Center for Technology for Advanced Scientific Component Software is to fundamentally change the way scientific software is developed and used by bringing component-based software development technologies to high-performance scientific and engineering computing. The role of Tech-X's work in the TASCS project is to provide outreach to accelerator physics and fusion applications by introducing TASCS tools into applications, testing the tools in the applications, and modifying the tools to be more usable.

  2. Final Report for 'Center for Technology for Advanced Scientific Component Software'

    International Nuclear Information System (INIS)

    The goal of the Center for Technology for Advanced Scientific Component Software is to fundamentally change the way scientific software is developed and used by bringing component-based software development technologies to high-performance scientific and engineering computing. The role of Tech-X's work in the TASCS project is to provide outreach to accelerator physics and fusion applications by introducing TASCS tools into applications, testing the tools in the applications, and modifying the tools to be more usable.

  3. Institute for Scientific Computing Research Fiscal Year 2002 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Keyes, D E; McGraw, J R; Bodtker, L K

    2003-03-11

    The Institute for Scientific Computing Research (ISCR) at Lawrence Livermore National Laboratory is jointly administered by the Computing Applications and Research Department (CAR) and the University Relations Program (URP), and this joint relationship expresses its mission. An extensively externally networked ISCR cost-effectively expands the level and scope of national computational science expertise available to the Laboratory through CAR. The URP, with its infrastructure for managing six institutes and numerous educational programs at LLNL, assumes much of the logistical burden that is unavoidable in bridging the Laboratory's internal computational research environment with that of the academic community. As large-scale simulations on the parallel platforms of DOE's Advanced Simulation and Computing (ASCI) program become increasingly important to the overall mission of LLNL, the role of the ISCR expands in importance accordingly. Relying primarily on non-permanent staffing, the ISCR complements Laboratory research in areas of the computer and information sciences that are needed at the frontier of Laboratory missions. The ISCR strives to be the "eyes and ears" of the Laboratory in the computer and information sciences, keeping the Laboratory aware of and connected to important external advances. It also attempts to be the "feet and hands," carrying those advances into the Laboratory and incorporating them into practice. In addition to conducting research, the ISCR provides continuing education opportunities to Laboratory personnel, in the form of on-site workshops taught by experts on novel software or hardware technologies. The ISCR also seeks to influence the research community external to the Laboratory to pursue Laboratory-related interests and to train the workforce that will be required by the Laboratory. Part of the performance of this function is interpreting to the external community appropriate (unclassified...

  4. Computational intelligence for big data analysis frontier advances and applications

    CERN Document Server

    Dehuri, Satchidananda; Sanyal, Sugata

    2015-01-01

    The work presented in this book is a combination of theoretical advancements in big data analysis and cloud computing and their potential applications in scientific computing. The theoretical advancements are supported with illustrative examples and applications to real-life problems, mostly drawn from real situations. The book discusses major issues pertaining to big data analysis using computational intelligence techniques, along with some issues of cloud computing. An elaborate bibliography is provided at the end of each chapter. The material in this book includes concepts, figures, graphs, and tables to guide researchers in the areas of big data analysis and cloud computing.

  5. International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics

    CERN Document Server

    DEVELOPMENTS IN RELIABLE COMPUTING

    1999-01-01

    The SCAN conference, the International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, takes place biannually under the joint auspices of GAMM (Gesellschaft für Angewandte Mathematik und Mechanik) and IMACS (International Association for Mathematics and Computers in Simulation). SCAN-98 attracted more than 100 participants from 21 countries all over the world. During the four days from September 22 to 25, nine highlighted plenary lectures and over 70 contributed talks were given. These figures indicate a large participation, which was partly caused by the attraction of the organizing country, Hungary, but the effective support system also contributed to the success. The conference was substantially supported by the Hungarian Research Fund OTKA, GAMM, the National Technology Development Board OMFB, and by the József Attila University. Thanks to this funding, it was possible to subsidize the participation of over 20 scientists, mainly from Eastern European countries. I...

  6. Introduction to Bayesian scientific computing ten lectures on subjective computing

    CERN Document Server

    Calvetti, Daniela

    2007-01-01

    A combination of the concepts of subjective – or Bayesian – statistics and scientific computing, the book provides an integrated view across numerical linear algebra and computational statistics. Inverse problems act as the bridge between these two fields: the goal is to estimate an unknown parameter that is not directly observable, by using measured data and a mathematical model linking the observed and the unknown. Inverse problems are closely related to statistical inference problems, where the observations are used to infer an underlying probability distribution. This connection between statistical inference and inverse problems is a central topic of the book. Inverse problems are typically ill-posed: small uncertainties in the data may propagate into huge uncertainties in the estimates of the unknowns. To cope with such problems, efficient regularization techniques are developed in the framework of numerical analysis. The counterpart of regularization in the framework of statistical inference is the us...
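
    To make the regularization point concrete (our sketch, not from the book): Tikhonov regularization trades a little bias for stability on an ill-conditioned inverse problem.

        import numpy as np

        def tikhonov(A, b, lam):
            """Minimize ||Ax - b||^2 + lam^2 ||x||^2 via the regularized
            normal equations (A^T A + lam^2 I) x = A^T b."""
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

        # Ill-conditioned (Hilbert-matrix) forward model with slightly noisy data.
        n = 12
        A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
        x_true = np.ones(n)
        rng = np.random.default_rng(0)
        b = A @ x_true + 1e-6 * rng.standard_normal(n)

        x_naive = np.linalg.solve(A, b)   # noise wildly amplified
        x_reg = tikhonov(A, b, lam=1e-4)  # stabilized estimate
        print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))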

  7. ASCR Cybersecurity for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Piesert, Sean

    2015-02-27

    The Department of Energy (DOE) has the responsibility to address the energy, environmental, and nuclear security challenges that face our nation. Much of DOE's enterprise involves distributed, collaborative teams; a significant fraction involves "open science," which depends on multi-institutional, often international collaborations that must access or share significant amounts of information between institutions and over networks around the world. The mission of the Office of Science is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security of the United States. The ability of DOE to execute its responsibilities depends critically on its ability to assure the integrity and availability of scientific facilities and computer systems, and of the scientific, engineering, and operational software and data that support its mission.

  8. Advances in embedded computer vision

    CERN Document Server

    Kisacanin, Branislav

    2014-01-01

    This illuminating collection offers a fresh look at the very latest advances in the field of embedded computer vision. Emerging areas covered by this comprehensive text/reference include the embedded realization of 3D vision technologies for a variety of applications, such as stereo cameras on mobile devices. Recent trends towards the development of small unmanned aerial vehicles (UAVs) with embedded image and video processing algorithms are also examined. The authoritative insights range from historical perspectives to future developments, reviewing embedded implementation, tools, technolog...

  9. A Scientific Cloud Computing Platform for Condensed Matter Physics

    Science.gov (United States)

    Jorissen, K.; Johnson, W.; Vila, F. D.; Rehr, J. J.

    2013-03-01

    Scientific Cloud Computing (SCC) makes possible calculations with high performance computational tools, without the need to purchase or maintain sophisticated hardware and software. We have recently developed an interface dubbed SC2IT that controls on-demand virtual Linux clusters within the Amazon EC2 cloud platform. Using this interface we have developed a more advanced, user-friendly SCC Platform configured especially for condensed matter calculations. This platform contains a GUI, based on a new Java version of SC2IT, that permits calculations of various materials properties. The cloud platform includes Virtual Machines preconfigured for parallel calculations and several precompiled and optimized materials science codes for electronic structure and x-ray and electron spectroscopy. Consequently this SCC makes state-of-the-art condensed matter calculations easy to access for general users. Proof-of-principle performance benchmarks show excellent parallelization and communication performance. Supported by NSF grant OCI-1048052
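
    The paper's own SC2IT interface is Java-based; purely as a hedged illustration of the underlying idea of programmatically starting cloud compute nodes, a few lines with the boto3 AWS SDK might look like the sketch below. The AMI ID and instance type are placeholders, and valid AWS credentials are assumed.

        import boto3

        ec2 = boto3.resource("ec2")

        # Launch a small virtual cluster of identical nodes (placeholder values).
        instances = ec2.create_instances(
            ImageId="ami-0123456789abcdef0",  # hypothetical preconfigured VM image
            InstanceType="c5.xlarge",
            MinCount=4,
            MaxCount=4,
        )
        for inst in instances:
            inst.wait_until_running()
            inst.reload()
        print([inst.public_dns_name for inst in instances])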

  10. Writing and Publishing Scientific Articles in Computer Science

    OpenAIRE

    Brandão, Wladmir Cardoso

    2015-01-01

    Over 15 years of teaching, advising students and coordinating scientific research activities and projects in computer science, we have observed the difficulties of students to write scientific papers to present the results of their research practices. In addition, they repeatedly have doubts about the publishing process. In this article we propose a conceptual framework to support the writing and publishing of scientific papers in computer science, providing a kind of guide for computer scien...

  11. A Review and Prospect for Scientific and Engineering Computing in China

    Institute of Scientific and Technical Information of China (English)

    Yu Dehao

    2002-01-01

    The rise of scientific computing was one of the most important advances in S&T progress during the second half of the 20th century. In parallel with theoretical exploration and scientific experiment, scientific computing has become the "third means" of scientific activity in the world today. The article gives a panoramic review of the subject during the past 50 years in China and lists the contributions made by Chinese scientists in this field. In addition, it reveals some key contents of related projects in the national research plan and looks at the development prospects for the subject in China in the opening years of the new century.

  12. Scientific and computational challenges of the fusion simulation program (FSP)

    International Nuclear Information System (INIS)

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP) - a major national initiative in the United States with the primary objective being to enable scientific discovery of important new plasma phenomena with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically-confined fusion plasmas that are properly validated against experiments in regimes relevant for producing practical fusion energy. It is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly-coupled phenomena in the core plasma, edge plasma, and wall region on time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond together with the associated multi-core algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e

  13. Scientific and Computational Challenges of the Fusion Simulation Program (FSP)

    International Nuclear Information System (INIS)

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP) a major national initiative in the United States with the primary objective being to enable scientific discovery of important new plasma phenomena with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically-confined fusion plasmas that are properly validated against experiments in regimes relevant for producing practical fusion energy. It is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly-coupled phenomena in the core plasma, edge plasma, and wall region on time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond together with the associated multi-core algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e

  14. Review of An Introduction to Parallel and Vector Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Lefton, Lew

    2006-06-30

    On one hand, the field of high-performance scientific computing is thriving beyond measure. Performance of leading-edge systems on scientific calculations, as measured, say, by the Top500 list, has increased by an astounding factor of 8000 during the 15-year period from 1993 to 2008, which is slightly faster even than Moore's Law. Even more importantly, remarkable advances in numerical algorithms, numerical libraries, and parallel programming environments have led to improvements in the scope of what can be computed that are entirely on a par with the advances in computing hardware. And these successes have spread far beyond the confines of large government-operated laboratories: many universities, modest-sized research institutes, and private firms now operate clusters that differ only in scale from the behemoth systems at the large-scale facilities. In the wake of these recent successes, researchers from fields that heretofore have not been part of the scientific computing world have been drawn into the arena. For example, at the recent SC07 conference, the exhibit hall, which has long hosted displays from leading computer systems vendors and government laboratories, featured some 70 exhibitors who had not previously participated. In spite of all these exciting developments, and in spite of the clear need to present these concepts to a much broader technical audience, there is a perplexing dearth of training material and textbooks in the field, particularly at the introductory level. Only a handful of universities offer coursework in the specific area of highly parallel scientific computing, and instructors of such courses typically rely on custom-assembled material. For example, the present reviewer and Robert F. Lucas relied on materials assembled in a somewhat ad hoc fashion from colleagues and personal resources when presenting a course on parallel scientific computing at the University of California, Berkeley, a few years ago. Thus it is indeed refreshing...

  15. Power-aware applications for scientific cluster and distributed computing

    OpenAIRE

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Grosso, Paola; Hillegas, Curtis; Holzman, Burt; Janssen, Ruben L.; Klous, Sander; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    The aggregate power use of computing hardware is an important cost factor in scientific cluster and distributed computing systems. The Worldwide LHC Computing Grid (WLCG) is a major example of such a distributed computing system, used primarily for high throughput computing (HTC) applications. It has a computing capacity and power consumption rivaling that of the largest supercomputers. The computing capacity required from this system is also expected to grow over the next decade. Optimizing ...

  16. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high-performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offer functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  17. Scientific and computational challenges of the fusion simulation project (FSP)

    International Nuclear Information System (INIS)

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Project (FSP). The primary objective is to develop advanced software designed to use leadership-class computers for carrying out multiscale physics simulations to provide information vital to delivering a realistic integrated fusion simulation model with unprecedented physics fidelity. This multiphysics capability will be unprecedented in that in the current FES applications domain, the largest-scale codes are used to carry out first-principles simulations of mostly individual phenomena in realistic 3D geometry while the integrated models are much smaller-scale, lower-dimensionality codes with significant empirical elements used for modeling and designing experiments. The FSP is expected to be the most up-to-date embodiment of the theoretical and experimental understanding of magnetically confined thermonuclear plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing a reliable ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices on all relevant time and space scales. From a computational perspective, the fusion energy science application goal to produce high-fidelity, whole-device modeling capabilities will demand computing resources in the petascale range and beyond, together with the associated multicore algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative device involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics

  18. Scientific and computational challenges of the fusion simulation project (FSP)

    Science.gov (United States)

    Tang, W. M.

    2008-07-01

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Project (FSP). The primary objective is to develop advanced software designed to use leadership-class computers for carrying out multiscale physics simulations to provide information vital to delivering a realistic integrated fusion simulation model with unprecedented physics fidelity. This multiphysics capability will be unprecedented in that in the current FES applications domain, the largest-scale codes are used to carry out first-principles simulations of mostly individual phenomena in realistic 3D geometry while the integrated models are much smaller-scale, lower-dimensionality codes with significant empirical elements used for modeling and designing experiments. The FSP is expected to be the most up-to-date embodiment of the theoretical and experimental understanding of magnetically confined thermonuclear plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing a reliable ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices on all relevant time and space scales. From a computational perspective, the fusion energy science application goal to produce high-fidelity, whole-device modeling capabilities will demand computing resources in the petascale range and beyond, together with the associated multicore algorithmic formulation needed to address burning plasma issues relevant to ITER — a multibillion dollar collaborative device involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied

  19. High speed and large scale scientific computing

    CERN Document Server

    Gentzsch, W; Joubert, GR

    2010-01-01

    Over the years parallel technologies have completely transformed mainstream computing. This book deals with issues related to cloud computing and discusses developments in grids, applications, and information processing, as well as e-science. It is suitable for computer scientists, IT engineers, and IT managers.

  20. InSAR Scientific Computing Environment

    Science.gov (United States)

    Rosen, Paul A.; Sacco, Gian Franco; Gurrola, Eric M.; Zebker, Howard A.

    2011-01-01

    This computing environment is the next generation of geodetic image processing technology for repeat-pass Interferometric Synthetic Aperture Radar (InSAR) sensors, identified by the community as a needed capability to provide flexibility and extensibility in reducing measurements from radar satellites and aircraft to new geophysical products. This software allows users of interferometric radar data the flexibility to process from Level 0 to Level 4 products using a variety of algorithms and for a range of available sensors. There are many radar satellites in orbit today delivering to the science community data of unprecedented quantity and quality, making possible large-scale studies in climate research, natural hazards, and the Earth's ecosystem. The proposed DESDynI mission, now under consideration by NASA for launch later in this decade, would provide time series and multi-image measurements that permit 4D models of Earth surface processes so that, for example, climate-induced changes over time would become apparent and quantifiable. This advanced data processing technology, applied to a global data set such as from the proposed DESDynI mission, enables a new class of analyses at time and spatial scales unavailable using current approaches. This software implements an accurate, extensible, and modular processing system designed to realize the full potential of InSAR data from future missions such as the proposed DESDynI, existing radar satellite data, as well as data from the NASA UAVSAR (Uninhabited Aerial Vehicle Synthetic Aperture Radar) and other airborne platforms. The processing approach has been re-thought in order to enable multi-scene analysis by adding new algorithms and data interfaces, to permit user-reconfigurable operation and extensibility, and to capitalize on codes already developed by NASA and the science community. The framework incorporates modern programming methods based on recent research, including object-oriented scripts controlling legacy and...
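
    As a generic, hedged sketch of the "object-oriented scripts controlling legacy codes" pattern (the names are hypothetical, not the actual framework API):

        import subprocess
        from pathlib import Path

        class LegacyStage:
            """Wrap one legacy command-line processing step as a pipeline object,
            so stages can be composed and reconfigured from a script."""

            def __init__(self, executable, *fixed_args):
                self.executable = executable
                self.fixed_args = list(fixed_args)

            def run(self, infile: Path, outfile: Path) -> Path:
                cmd = [self.executable, *self.fixed_args, str(infile), str(outfile)]
                subprocess.run(cmd, check=True)  # raise if the legacy code fails
                return outfile

        # Hypothetical usage, assuming legacy executables "focus_sar" and "form_igram":
        #   focus = LegacyStage("focus_sar", "--doppler=auto")
        #   slc = focus.run(Path("raw.dat"), Path("slc.dat"))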

  1. Advances in Computer Science and Engineering

    CERN Document Server

    Second International Conference on Advances in Computer Science and Engineering (CES 2012)

    2012-01-01

    This book includes the proceedings of the second International Conference on Advances in Computer Science and Engineering (CES 2012), which was held during January 13-14, 2012 in Sanya, China. The papers in these proceedings of CES 2012 focus on the researchers’ advanced works in their fields of Computer Science and Engineering mainly organized in four topics, (1) Software Engineering, (2) Intelligent Computing, (3) Computer Networks, and (4) Artificial Intelligence Software.

  2. Computer-supported analysis of scientific measurements

    NARCIS (Netherlands)

    Jong, de Hidde

    1998-01-01

    In the past decade, large-scale databases and knowledge bases have become available to researchers working in a range of scientific disciplines. In many cases these databases and knowledge bases contain measurements of properties of physical objects which have been obtained in experiments or at obse

  3. Scientific computing for scientists and engineers

    CERN Document Server

    Heister, Timo

    2015-01-01

    Nowadays most mathematics done in practice is done on a computer. In engineering it is necessary to solve more than 1 million equations simultaneously, and computers can be used to reduce the calculation time from years to minutes or even seconds. This book explains: How can we approximate these important mathematical processes? How accurate are our approximations? How efficient are our approximations?
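
    As a hedged illustration of the scale mentioned above (not an example from the book): with sparse data structures, a system of one million linear equations can be assembled and solved on an ordinary computer in seconds. The matrix, right-hand side, and library calls below are our own illustrative choices.

```python
# Illustrative only: solving one million linear equations with sparse methods.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1_000_000
# A 1D Poisson-like tridiagonal system: 2 on the diagonal, -1 off-diagonal.
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")
b = np.ones(n)

x = spla.spsolve(A, b)             # direct sparse solve
print(np.max(np.abs(A @ x - b)))   # residual check, should be near zero
```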

  4. Introduction to numerical analysis and scientific computing

    CERN Document Server

    Nassif, Nabil

    2013-01-01

    Contents include: Computer Number Systems and Floating Point Arithmetic (introduction; conversion from base 10 to base 2; conversion from base 2 to base 10; normalized floating point systems; floating point operations; computing in a floating point system); Finding Roots of Real Single-Valued Functions (introduction; how to locate the roots of a function; the bisection method; Newton's method; the secant method); Solving Systems of Linear Equations by Gaussian Elimination (mathematical preliminaries; computer storage for matrices and data structures; back substitution for upper triangular systems; Gauss reduction; LU decomposition); Polynomia
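
    A minimal sketch of the bisection method listed in the contents above; the test function, interval, and tolerance are illustrative choices of ours, not taken from the book.

```python
# Bisection: repeatedly halve an interval known to bracket a root.
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Find a root of f in [a, b], assuming f(a) and f(b) differ in sign."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or 0.5 * (b - a) < tol:
            return m
        if fa * fm < 0:          # root lies in the left half
            b, fb = m, fm
        else:                    # root lies in the right half
            a, fa = m, fm
    return 0.5 * (a + b)

print(bisect(lambda x: x * x - 2.0, 0.0, 2.0))  # ~1.4142135623 (sqrt(2))
```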

  5. 9th International Conference on Advanced Computing & Communication Technologies

    CERN Document Server

    Mandal, Jyotsna; Auluck, Nitin; Nagarajaram, H

    2016-01-01

    This book highlights a collection of high-quality peer-reviewed research papers presented at the Ninth International Conference on Advanced Computing & Communication Technologies (ICACCT-2015) held at Asia Pacific Institute of Information Technology, Panipat, India during 27–29 November 2015. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. Researchers from academia and industry present their original work and exchange ideas, information, techniques and applications in the field of Advanced Computing and Communication Technology.

  6. Compiler Technology for Parallel Scientific Computation

    OpenAIRE

    Can Özturan; Balaram Sinharoy; Szymanski, Boleslaw K.

    1994-01-01

    There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independen...

  7. Some perspective on the Large Scale Scientific Computation Research

    Institute of Scientific and Technical Information of China (English)

    DU; Qiang

    2004-01-01

    The "Large Scale Scientific Computation (LSSC) Research"project is one of the State Major Basic Research projects funded by the Chinese Ministry of Science and Technology in the field ofinformation science and technology.……

  8. Some perspective on the Large Scale Scientific Computation Research

    Institute of Scientific and Technical Information of China (English)

    DU Qiang

    2004-01-01

    @@ The "Large Scale Scientific Computation (LSSC) Research"project is one of the State Major Basic Research projects funded by the Chinese Ministry of Science and Technology in the field ofinformation science and technology.

  9. Monte Carlo strategies in scientific computing

    CERN Document Server

    Liu, Jun S

    2008-01-01

    This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for master's or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...
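
    For readers new to the topic, a canonical Monte Carlo example (not drawn from the book): estimating pi by uniform sampling of the unit square; the sample count and seed below are arbitrary.

```python
# Monte Carlo estimation of pi: the fraction of random points in the unit
# square that land inside the quarter circle approximates pi / 4.
import random

def estimate_pi(n_samples=1_000_000, seed=42):
    rng = random.Random(seed)
    hits = sum(
        1
        for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / n_samples

print(estimate_pi())  # approximately 3.14, with O(1/sqrt(n)) statistical error
```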

  10. Random Numbers in Scientific Computing: An Introduction

    CERN Document Server

    Katzgraber, Helmut G

    2010-01-01

    Random numbers play a crucial role in science and industry. Many numerical methods require the use of random numbers, in particular the Monte Carlo method. Therefore it is of paramount importance to have efficient random number generators. The differences, advantages and disadvantages of true and pseudo random number generators are discussed with an emphasis on the intrinsic details of modern and fast pseudo random number generators. Furthermore, standard tests to verify the quality of the random numbers produced by a given generator are outlined. Finally, standard scientific libraries with built-in generators are presented, as well as different approaches to generate nonuniform random numbers. Potential problems that one might encounter when using large parallel machines are discussed.
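
    One concrete way to sidestep the parallel-machine pitfalls mentioned above is to derive statistically independent per-worker streams from a single root seed. The sketch below uses NumPy's SeedSequence mechanism as an assumed, illustrative tool; the paper itself is library-agnostic.

```python
# Spawn statistically independent child seeds from one root seed so that
# parallel workers do not draw from overlapping or correlated streams.
from numpy.random import SeedSequence, default_rng

root = SeedSequence(20231104)      # root seed is arbitrary
children = root.spawn(4)           # one child seed per worker
rngs = [default_rng(s) for s in children]

# Each worker draws from its own generator.
for i, rng in enumerate(rngs):
    print(f"worker {i}: {rng.standard_normal(3)}")
```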

  11. Introduction to scientific computing and data analysis

    CERN Document Server

    Holmes, Mark H

    2016-01-01

    This textbook provides an introduction to numerical computing and its applications in science and engineering. The topics covered include those usually found in an introductory course, as well as those that arise in data analysis. This includes optimization and regression-based methods using a singular value decomposition. The emphasis is on problem solving, and there are numerous exercises throughout the text concerning applications in engineering and science. The essential role of the mathematical theory underlying the methods is also considered, both for understanding how the method works and for understanding how the error in the computation depends on the method being used. The MATLAB codes used to produce most of the figures and data tables in the text are available on the author's website and SpringerLink.

  12. Python for Scientific Computing Education: Modeling of Queueing Systems

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2014-01-01

    In this paper, we present the methodology for the introduction to scientific computing based on model-centered learning. We propose multiphase queueing systems as a basis for learning objects. We use Python and parallel programming for implementing the models and present the computer code and results of stochastic simulations.
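
    As a hedged single-phase sketch in the spirit of the paper (the authors treat multiphase systems; the M/M/1 example, rates, and seed below are ours), waiting times can be simulated with Lindley's recurrence:

```python
# M/M/1 queue: exponential interarrival and service times, one server.
# Lindley's recurrence: W_{k+1} = max(0, W_k + S_k - A_{k+1}).
import random

def mm1_mean_wait(lam=0.9, mu=1.0, n_customers=100_000, seed=1):
    """Estimate the mean waiting time in queue by simulation."""
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n_customers):
        service = rng.expovariate(mu)        # service time of this customer
        interarrival = rng.expovariate(lam)  # gap to the next arrival
        wait = max(0.0, wait + service - interarrival)
        total += wait
    return total / n_customers

# Queueing theory predicts E[W] = lam / (mu * (mu - lam)) = 9.0 here.
print(mm1_mean_wait())
```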

  13. Python for Scientific Computing Education: Modeling of Queueing Systems

    OpenAIRE

    Vladimiras Dolgopolovas; Valentina Dagienė; Saulius Minkevičius; Leonidas Sakalauskas

    2014-01-01

    In this paper, we present the methodology for the introduction to scientific computing based on model-centered learning. We propose multiphase queueing systems as a basis for learning objects. We use Python and parallel programming for implementing the models and present the computer code and results of stochastic simulations.

  14. Advances in Computer Science and its Applications

    CERN Document Server

    Yen, Neil; Park, James; CSA 2013

    2014-01-01

    The theme of CSA is focused on the various aspects of computer science and its applications, providing an opportunity for academic and industry professionals to discuss the latest issues and progress in the area. Accordingly, this book includes various theories and practical applications in computer science and its applications.

  15. Activities of the Research Institute for Advanced Computer Science

    Science.gov (United States)

    Oliger, Joseph

    1994-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. Research at RIACS is currently being done in the following areas: (1) parallel computing; (2) advanced methods for scientific computing; (3) high performance networks; and (4) learning systems. RIACS technical reports are usually preprints of manuscripts that have been submitted to research journals or conference proceedings. A list of these reports for the period January 1, 1994 through December 31, 1994 is in the Reports and Abstracts section of this report.

  16. Scientific computing an introduction using Maple and Matlab

    CERN Document Server

    Gander, Walter; Kwok, Felix

    2014-01-01

    Scientific computing is the study of how to use computers effectively to solve problems that arise from the mathematical modeling of phenomena in science and engineering. It is based on mathematics, numerical and symbolic/algebraic computations and visualization. This book serves as an introduction to both the theory and practice of scientific computing, with each chapter presenting the basic algorithms that serve as the workhorses of many scientific codes; we explain both the theory behind these algorithms and how they must be implemented in order to work reliably in finite-precision arithmetic. The book includes many programs written in Matlab and Maple – Maple is often used to derive numerical algorithms, whereas Matlab is used to implement them. The theory is developed in such a way that students can learn by themselves as they work through the text. Each chapter contains numerous examples and problems to help readers understand the material “hands-on”.

  17. Bringing Advanced Computational Techniques to Energy Research

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Julie C

    2012-11-17

    Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

  18. Ferrofluids: Modeling, numerical analysis, and scientific computation

    Science.gov (United States)

    Tomas, Ignacio

    This dissertation presents some developments in the numerical analysis of partial differential equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable, and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for Rosensweig's model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from Rosensweig's model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a
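
    For reference, a minimal sketch of the MNSE in a form commonly cited in the micropolar-fluids literature; the coefficients c_1, c_2 and forcing terms f, g are generic placeholders and need not match the dissertation's exact notation:

```latex
% Micropolar Navier-Stokes equations (one commonly cited form):
% u = linear velocity, w = angular velocity, p = pressure,
% nu = kinematic viscosity, nu_r = vortex viscosity.
\begin{align*}
  \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    - (\nu + \nu_r)\,\Delta\mathbf{u} + \nabla p
    &= 2\nu_r\,\nabla\times\mathbf{w} + \mathbf{f},\\
  \nabla\cdot\mathbf{u} &= 0,\\
  \partial_t \mathbf{w} + (\mathbf{u}\cdot\nabla)\mathbf{w}
    - c_1\,\Delta\mathbf{w} - c_2\,\nabla(\nabla\cdot\mathbf{w})
    + 4\nu_r\,\mathbf{w}
    &= 2\nu_r\,\nabla\times\mathbf{u} + \mathbf{g}.
\end{align*}
```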

  19. Soft computing in advanced robotics

    CERN Document Server

    Kobayashi, Ichiro; Kim, Euntai

    2014-01-01

    Intelligent systems and robotics are inevitably bound up: intelligent robots embody system integration by building on intelligent systems. Intelligent systems relate to robots as cells relate to body components, and the two technologies have progressed in step. Leveraging robotics and intelligent systems, applications range from daily life to the space station: manufacturing, healthcare, environment, energy, education, personal assistance, and logistics. This book presents research results relevant to intelligent robotics technology and offers researchers and practitioners methods for advancing intelligent systems and applying them to advanced robotics technology. The book consists of 10 contributions that feature mobile robots, robot emotion, electric power steering, multi-agent systems, fuzzy visual navigation, adaptive network-based fuzzy inference systems, swarm EKF localization, and inspection robots. Th...

  20. High-Performance Cloud Computing: A View of Scientific Applications

    CERN Document Server

    Vecchiola, Christian; Buyya, Rajkumar

    2009-01-01

    Scientific computing often requires the availability of a massive number of computers for performing large scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis. These resources can be released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service based infrastructure...

  1. Power-aware applications for scientific cluster and distributed computing

    CERN Document Server

    Abdurachmanov, David; Eulisse, Giulio; Grosso, Paola; Hillegas, Curtis; Holzman, Burt; Klous, Sander; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    The aggregate power use of computing hardware is an important cost factor in scientific cluster and distributed computing systems. The Worldwide LHC Computing Grid (WLCG) is a major example of such a distributed computing system, used primarily for high throughput computing (HTC) applications. It has a computing capacity and power consumption rivaling that of the largest supercomputers. The computing capacity required from this system is also expected to grow over the next decade. Optimizing the power utilization and cost of such systems is thus of great interest. A number of trends currently underway will provide new opportunities for power-aware optimizations. We discuss how power-aware software applications and scheduling might be used to reduce power consumption, both as autonomous entities and as part of a (globally) distributed system. As concrete examples of computing centers we provide information on the large HEP-focused Tier-1 at FNAL, and the Tigress High Performance Computing Center at Princeton U...

  2. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks; the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment, which is being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  3. Computer networks and advanced communications

    International Nuclear Information System (INIS)

    One of the major methods for getting the most productivity and benefits from computer usage is networking. However, for those who are contemplating a change from stand-alone computers to a network system, the investigation of actual networks in use presents a paradox: network systems can be highly productive and beneficial; at the same time, these networks can create many complex, frustrating problems. The issue becomes a question of whether the benefits of networking are worth the extra effort and cost. In response to this issue, the authors review in this paper the implementation and management of an actual network in the LSU Petroleum Engineering Department. The network, which has been in operation for four years, is large and diverse (50 computers, 2 sites, PC's, UNIX RISC workstations, etc.). The benefits, costs, and method of operation of this network will be described, and an effort will be made to objectively weigh these elements from the point of view of the average computer user

  4. Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Sussman, Alan [University of Maryland

    2014-10-21

    This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.

  5. Towards an explanatory and computational theory of scientific discovery

    OpenAIRE

    Chen, Chaomei; Chen, Yue; Horowitz, Mark; Hou, Haiyan; LIU, ZEYUAN; Pellegrino, Don

    2009-01-01

    We propose an explanatory and computational theory of transformative discoveries in science. The theory is derived from a recurring theme found in a diverse range of scientific change, scientific discovery, and knowledge diffusion theories in philosophy of science, sociology of science, social network analysis, and information science. The theory extends the concept of structural holes from social networks to a broader range of associative networks found in science studies, especially includi...

  6. Advanced laptop and small personal computer technology

    Science.gov (United States)

    Johnson, Roger L.

    1991-01-01

    Advanced laptop and small personal computer technology is presented in the form of viewgraphs. The following areas of hand-carried computers and mobile workstation technology are covered: background, applications, high-end products, technology trends, requirements for the Control Center application, and recommendations for the future.

  7. Advanced Biomedical Computing Center (ABCC) | DSITP

    Science.gov (United States)

    The Advanced Biomedical Computing Center (ABCC), located in Frederick Maryland (MD), provides HPC resources for both NIH/NCI intramural scientists and the extramural biomedical research community. Its mission is to provide HPC support, to provide collaborative research, and to conduct in-house research in various areas of computational biology and biomedical research.

  8. Advance Trends in Soft Computing

    CERN Document Server

    Kreinovich, Vladik; Kacprzyk, Janusz; WCSC 2013

    2014-01-01

    This book is the proceedings of the 3rd World Conference on Soft Computing (WCSC), which was held in San Antonio, TX, USA, on December 16-18, 2013. It presents state-of-the-art theory and applications of soft computing together with an in-depth discussion of current and future challenges in the field, providing readers with a 360 degree view of soft computing. Topics range from fuzzy sets to fuzzy logic, fuzzy mathematics, neuro-fuzzy systems, fuzzy control, decision making in fuzzy environments, image processing and many more. The book is dedicated to Lotfi A. Zadeh, a renowned specialist in signal analysis and control systems research who proposed the idea of fuzzy sets, in which an element may have a partial membership, in the early 1960s, followed by the idea of fuzzy logic, in which a statement can be true only to a certain degree, with degrees described by numbers in the interval [0,1]. The performance of fuzzy systems can often be improved with the help of optimization techniques, e.g. evolutionary co...
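
    To make the notion of partial membership concrete, here is a minimal, generic example (not from the proceedings) of a triangular fuzzy membership function returning degrees in [0, 1]; the set boundaries are arbitrary.

```python
# Triangular fuzzy set "about 20": membership rises linearly from the left
# boundary to the peak and falls linearly from the peak to the right boundary.
def triangular(x, left=10.0, peak=20.0, right=30.0):
    """Degree of membership of x in a triangular fuzzy set, in [0, 1]."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

for x in (10, 15, 20, 25, 30):
    print(x, triangular(x))   # 0.0, 0.5, 1.0, 0.5, 0.0
```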

  9. Scientific Grand Challenges: Crosscutting Technologies for Computing at the Exascale - February 2-4, 2010, Washington, D.C.

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2011-02-06

    The goal of the "Scientific Grand Challenges - Crosscutting Technologies for Computing at the Exascale" workshop in February 2010, jointly sponsored by the U.S. Department of Energy's Office of Advanced Scientific Computing Research and the National Nuclear Security Administration, was to identify the elements of a research and development agenda that will address the challenges of computing at the exascale and create a comprehensive exascale computing environment. This exascale computing environment will enable the science applications identified in the eight previously held workshops of the Scientific Grand Challenges Workshop Series.

  10. Report on the scientific feasibility of advanced separation

    International Nuclear Information System (INIS)

    The advanced separation process Purex has been retained for the recovery of neptunium, technetium and iodine from high-level and long-lived radioactive wastes. Complementary solvent extraction processes will be used for the recovery of americium, curium and cesium from the high-activity effluents of the spent fuel reprocessing treatment. This document presents the research carried out to demonstrate the scientific feasibility of the advanced separation processes: the adaptation of the Purex process would allow the recovery of 99% of the neptunium, while the association of the Diamex and Sanex (low acidity variant) processes, or the Paladin concept (single cycle with selective de-extraction of actinides), makes possible the recovery of 99.8% of the actinides III (americium and curium) with a high lanthanide decontamination factor (greater than 150). The feasibility of the americium/curium separation is demonstrated with the Sesame process (extraction of americium IV after electrolytic oxidation). Iodine is today recovered at about 99% with the Purex process, and the dissolved fraction of technetium is also recovered at 99% using an adaptation of the Purex process. The non-dissolved fraction is retained by intermetallic compounds in dissolution residues. Cesium is separable from other fission products with recovery levels greater than 99.9% thanks to the use of functionalized calixarenes. The scientific feasibility of advanced separation is thus demonstrated. (J.S.)

  11. Second Annual AEC Scientific Computer Information Exhange Meeting. Proceedings of the technical program theme: computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Peskin,A.M.; Shimamoto, Y.

    1974-01-01

    The topic of computer graphics serves well to illustrate that AEC affiliated scientific computing installations are well represented in the forefront of computing science activities. The participant response to the technical program was overwhelming--both in number of contributions and quality of the work described. Session I, entitled Advanced Systems, contains presentations describing systems that contain features not generally found in graphics facilities. These features can be roughly classified as extensions of standard two-dimensional monochromatic imaging to higher dimensions including color and time as well as multidimensional metrics. Session II presents seven diverse applications ranging from high energy physics to medicine. Session III describes a number of important developments in establishing facilities, techniques and enhancements in the computer graphics area. Although an attempt was made to schedule as many of these worthwhile presentations as possible, it appeared impossible to do so given the scheduling constraints of the meeting. A number of prospective presenters 'came to the rescue' by graciously withdrawing from the sessions. Some of their abstracts have been included in the Proceedings.

  12. Advanced Computer Algebra for Determinants

    CERN Document Server

    Koutschan, Christoph

    2011-01-01

    We prove three conjectures concerning the evaluation of determinants, which are related to the counting of plane partitions and rhombus tilings. One of them has been posed by George Andrews in 1980, the other two are by Guoce Xin and Christian Krattenthaler. Our proofs employ computer algebra methods, namely the holonomic ansatz proposed by Doron Zeilberger and variations thereof. These variations make Zeilberger's original approach even more powerful and allow for addressing a wider variety of determinants. Finally we present, as a challenge problem, a conjecture about a closed form evaluation of Andrews's determinant.

  13. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing, both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance, both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: in the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...
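
    As a concrete instance of the quicksort discussion above, a standard randomized quicksort (our illustrative sketch, not code from the book): choosing the pivot at random makes the O(n log n) expected run time hold for every input, with the O(n^2) behavior occurring only with vanishing probability.

```python
# Randomized quicksort: the random pivot choice is the randomization step.
import random

def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)                      # randomization step
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([5, 3, 8, 1, 9, 2, 7]))            # [1, 2, 3, 5, 7, 8, 9]
```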

  14. A scientific case study of an advanced LISA mission

    International Nuclear Information System (INIS)

    A brief status report of an ongoing scientific case study of the Advanced Laser Interferometer Antenna (ALIA) mission is presented. Key technology requirements and primary science objectives of the mission are covered in the study. Possible descope options for the mission and the corresponding compromise in science are also considered and compared. Our preliminary study indicates that ALIA holds promise in mapping out the mass and spin distribution of intermediate mass black holes possibly present in dense star clusters at low redshift as well as in shedding important light on the structure formation in the early Universe.

  15. Foreword: Advanced Science Letters (ASL), Special Issue on Computational Astrophysics

    CERN Document Server

    ,

    2009-01-01

    Computational astrophysics has undergone unprecedented development over the last decade, becoming a field of its own. The challenge ahead of us will involve increasingly complex multi-scale simulations. These will bridge the gap between areas of astrophysics such as star and planet formation, or star formation and galaxy formation, that have evolved separately until today. A global knowledge of the physics and modeling techniques of astrophysical simulations is thus an important asset for the next generation of modelers. With the aim at fostering such a global approach, we present the Special Issue on Computational Astrophysics for the Advanced Science Letters (http://www.aspbs.com/science.htm). The Advanced Science Letters (ASL) is a new multi-disciplinary scientific journal which will cover extensively computational astrophysics and cosmology, and will act as a forum for the presentation and discussion of novel work attempting to connect different research areas. This Special Issue collects 9 reviews on 9 k...

  16. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  17. FPGA Based Quadruple Precision Floating Point Arithmetic for Scientific Computations

    OpenAIRE

    Mamidi Nagaraju; Geedimatla Shekar

    2012-01-01

    In this project we explore the capability and flexibility of FPGA solutions as a means to accelerate scientific computing applications which require very high precision arithmetic, based on the IEEE 754 standard 128-bit floating-point number representation. Field Programmable Gate Arrays (FPGAs) are increasingly being used to design high-end, computationally intense microprocessors capable of handling floating point mathematical operations. Quadruple Precision Floating-Point Arithmetic is important...
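
    For orientation, the field widths of the IEEE 754 binary128 format referenced above, with a few routine derived quantities; this is a summary of the standard, not code from the project.

```python
# IEEE 754 binary128 ("quadruple precision") field widths, per the standard.
SIGN_BITS, EXPONENT_BITS, FRACTION_BITS = 1, 15, 112

bias = 2 ** (EXPONENT_BITS - 1) - 1       # exponent bias: 16383
significand_bits = FRACTION_BITS + 1      # 113, counting the implicit bit
machine_epsilon = 2.0 ** -FRACTION_BITS   # ~1.93e-34

print(SIGN_BITS + EXPONENT_BITS + FRACTION_BITS)  # 128 bits total
print(bias, significand_bits, machine_epsilon)
```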

  18. Performance of scientific computing platforms running MCNP4B

    International Nuclear Information System (INIS)

    A performance study has been made for the MCNP4B Monte Carlo radiation transport code on a wide variety of scientific computing platforms ranging from personal computers to Cray mainframes. We present the timing study results using MCNP4B and its new test set and libraries. This timing study is unlike other timing studies because of its widespread reproducibility, its direct comparability to the predecessor study in 1993, and its focus upon a nuclear engineering code

  19. Computational modelling, explicit mathematical treatments, and scientific explanation

    OpenAIRE

    Bryden, J; Noble, J

    2006-01-01

    A computer simulation model can produce interesting and surprising results that one would not expect from an initial analysis of the algorithm and data. We question, however, whether the description of such a computer simulation modelling procedure (data + algorithm + results) can constitute an explanation of why the algorithm produces such an effect. Specifically, in the field of theoretical biology, can such a procedure constitute real scientific explanation of biological phenomena? W...

  20. National Energy Research Scientific Computing Center 2007 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Hules, John A.; Bashor, Jon; Wang, Ucilia; Yarris, Lynn; Preuss, Paul

    2008-10-23

    This report presents highlights of the research conducted on NERSC computers in a variety of scientific disciplines during the year 2007. It also reports on changes and upgrades to NERSC's systems and services, as well as activities of NERSC staff.

  1. Ontology-Driven Discovery of Scientific Computational Entities

    Science.gov (United States)

    Brazier, Pearl W.

    2010-01-01

    Many geoscientists use modern computational resources, such as software applications, Web services, scientific workflows and datasets that are readily available on the Internet, to support their research and many common tasks. These resources are often shared via human contact and sometimes stored in data portals; however, they are not necessarily…

  2. Topics in numerical partial differential equations and scientific computing

    CERN Document Server

    2016-01-01

    Numerical partial differential equations (PDEs) are an important part of numerical simulation, the third component of the modern methodology for science and engineering, besides the traditional theory and experiment. This volume contains papers that originated with the collaborative research of the teams that participated in the IMA Workshop for Women in Applied Mathematics: Numerical Partial Differential Equations and Scientific Computing in August 2014.

  3. Introducing scientific computation from high school to college: the case of Modellus

    Science.gov (United States)

    Teodoro, Vítor; Neves, Rui

    2009-03-01

    The development of computational tools and methods has changed the way science is done. This change, however, is far from being reflected in high school and college curricula, where computers are mainly used for showing text, images and animations. Most curricula do not consider the use of computational scientific tools, particularly tools with which students can manipulate and build mathematical models, as an integral part of the learning experiences all students must have. In this paper, we discuss how Modellus, a freely available software tool (created in Java and available for all operating systems), can be used to support curricula where students from the age of 12 to college years can be introduced to scientific computation. We also show how such a wide range of learners and their teachers can use Modellus to implement simple numerical methods and interactive animations based on those methods to explore advanced mathematical and physical reasoning.
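
    As a sketch of the kind of simple numerical method the paper has students implement (written here in Python for concreteness; Modellus uses its own model-editing interface rather than Python), forward Euler integration of an ordinary differential equation:

```python
# Forward Euler: step the solution of y' = f(t, y) with a fixed step size h.
def euler(f, y0, t0, t1, n_steps):
    """Approximate y(t1) for y' = f(t, y), y(t0) = y0."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

# Example: exponential decay y' = -y; the exact answer is exp(-1) ~ 0.3679.
print(euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000))  # ~0.3677
```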

  4. The advanced computational testing and simulation toolkit (ACTS)

    Energy Technology Data Exchange (ETDEWEB)

    Drummond, L.A.; Marques, O.

    2002-05-21

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts

  5. The advanced computational testing and simulation toolkit (ACTS)

    International Nuclear Information System (INIS)

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts

  6. Computational electromagnetics recent advances and engineering applications

    CERN Document Server

    2014-01-01

    Emerging Topics in Computational Electromagnetics presents advances in Computational Electromagnetics (CEM). This book is designed to fill the existing gap in current CEM literature, which covers only the conventional numerical techniques for solving traditional EM problems. The book examines new algorithms, and applications of these algorithms for solving problems of current interest that are not readily amenable to efficient treatment by using the existing techniques. The authors discuss solution techniques for problems arising in nanotechnology, bioEM, metamaterials, as well as multiscale problems. They present techniques that utilize recent advances in computer technology, such as parallel architectures, and the increasing need to solve large and complex problems in a time efficient manner by using highly scalable algorithms.

  7. Computer code qualification program for the Advanced CANDU Reactor

    International Nuclear Information System (INIS)

    Atomic Energy of Canada Ltd (AECL) has developed and implemented a Software Quality Assurance (SQA) program to ensure that its analytical, scientific and design computer codes meet the required standards for software used in safety analyses. This paper provides an overview of the computer programs used in Advanced CANDU Reactor (ACR) safety analysis, and an assessment of their applicability to the safety analyses of the ACR design. An outline of the incremental validation program and an overview of the experimental program in support of code validation are also presented. The SQA program used to qualify these computer codes is also briefly presented. To provide context for the differences in the SQA with respect to current CANDUs, the paper also provides an overview of the ACR design features that have an impact on computer code qualification. (author)

  8. A Computing Environment to Support Repeatable Scientific Big Data Experimentation of World-Wide Scientific Literature

    Energy Technology Data Exchange (ETDEWEB)

    Schlicher, Bob G [ORNL; Kulesz, James J [ORNL; Abercrombie, Robert K [ORNL; Kruse, Kara L [ORNL

    2015-01-01

    A principal tenet of the scientific method is that experiments must be repeatable; this relies on ceteris paribus (i.e., all other things being equal). As a scientific community involved in data sciences, we must investigate ways to establish an environment where experiments can be repeated. We can no longer merely allude to where the data comes from; we must add rigor to the data collection and management process from which our analysis is conducted. This paper describes a computing environment to support repeatable scientific big data experimentation of world-wide scientific literature, and recommends a system that is housed at the Oak Ridge National Laboratory in order to provide value to investigators from government agencies, academic institutions, and industry entities. The described computing environment also adheres to the recently instituted digital data management plan mandated by multiple US government agencies, which involves all stages of the digital data life cycle including capture, analysis, sharing, and preservation. It particularly focuses on the sharing and preservation of digital research data. The details of this computing environment are explained within the context of cloud services by the three-layer classification of Software as a Service, Platform as a Service, and Infrastructure as a Service.

  9. [Activities of Research Institute for Advanced Computer Science

    Science.gov (United States)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2001-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems: techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention; such craft will be equipped to independently solve problems as they arise and fulfill their missions with minimum direction from Earth. (2) Human-Centered Computing: many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. (3) High Performance Computing and Networking: advances in the performance of computing and networking continue to have a major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

  10. Trend Analysis of the Brazilian Scientific Production in Computer Science

    Directory of Open Access Journals (Sweden)

    TRUCOLO, C. C.

    2014-12-01

    The growth in the volume and diversity of scientific information brings new challenges in understanding the reasons, the process and the real essence that propel this growth. This information can be used as the basis for the development of strategies and public policies to improve education and innovation services. Trend analysis is one of the steps in this direction. In this work, a trend analysis of the Brazilian scientific production of graduate programs in the computer science area is performed to identify the main subjects being studied by these programs, both in general and individually.

  11. Implementation of Scientific Computing Applications on the Cell Broadband Engine

    Directory of Open Access Journals (Sweden)

    Guochun Shi

    2009-01-01

    The Cell Broadband Engine architecture is a revolutionary processor architecture well suited for many scientific codes. This paper reports on an effort to implement several traditional high-performance scientific computing applications on the Cell Broadband Engine processor, including molecular dynamics, quantum chromodynamics and quantum chemistry codes. The paper discusses data and code restructuring strategies necessary to adapt the applications to the intrinsic properties of the Cell processor and demonstrates performance improvements achieved on the Cell architecture. It concludes with the lessons learned and provides practical recommendations on optimization techniques that are believed to be most appropriate.

  12. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Peisert, Sean [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States); Potok, Thomas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jones, Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-03

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10 to 20+ year) fundamental cybersecurity research and development challenges, strategies and roadmaps facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the

  13. Initial explorations of ARM processors for scientific computing

    International Nuclear Information System (INIS)

    Power efficiency is becoming an ever more important metric for both high performance and high throughput computing. Over the course of next decade it is expected that flops/watt will be a major driver for the evolution of computer architecture. Servers with large numbers of ARM processors, already ubiquitous in mobile computing, are a promising alternative to traditional x86-64 computing. We present the results of our initial investigations into the use of ARM processors for scientific computing applications. In particular we report the results from our work with a current generation ARMv7 development board to explore ARM-specific issues regarding the software development environment, operating system, performance benchmarks and issues for porting High Energy Physics software

  14. Initial explorations of ARM processors for scientific computing

    Science.gov (United States)

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad

    2014-06-01

    Power efficiency is becoming an ever more important metric for both high performance and high throughput computing. Over the course of next decade it is expected that flops/watt will be a major driver for the evolution of computer architecture. Servers with large numbers of ARM processors, already ubiquitous in mobile computing, are a promising alternative to traditional x86-64 computing. We present the results of our initial investigations into the use of ARM processors for scientific computing applications. In particular we report the results from our work with a current generation ARMv7 development board to explore ARM-specific issues regarding the software development environment, operating system, performance benchmarks and issues for porting High Energy Physics software.

  15. Initial Explorations of ARM Processors for Scientific Computing

    CERN Document Server

    Abdurachmanov, David; Eulisse, Giulio; Muzaffar, Shahzad

    2013-01-01

    Power efficiency is becoming an ever more important metric for both high performance and high throughput computing. Over the course of next decade it is expected that flops/watt will be a major driver for the evolution of computer architecture. Servers with large numbers of ARM processors, already ubiquitous in mobile computing, are a promising alternative to traditional x86-64 computing. We present the results of our initial investigations into the use of ARM processors for scientific computing applications. In particular we report the results from our work with a current generation ARMv7 development board to explore ARM-specific issues regarding the software development environment, operating system, performance benchmarks and issues for porting High Energy Physics software.

  16. Computational Intelligence Paradigms in Advanced Pattern Classification

    CERN Document Server

    Jain, Lakhmi

    2012-01-01

    This monograph presents selected areas of application of pattern recognition and classification approaches, including handwriting recognition, medical image analysis and interpretation, development of cognitive systems for computer image understanding, moving object detection, advanced image filtration, and intelligent multi-object labelling and classification. It is directed to scientists, application engineers, professors and students, who will find this book useful.

  17. First 3 years of operation of RIACS (Research Institute for Advanced Computer Science) (1983-1985)

    Science.gov (United States)

    Denning, P. J.

    1986-01-01

    The focus of the Research Institute for Advanced Computer Science (RIACS) is to explore matches between advanced computing architectures and the processes of scientific research. An architecture evaluation of the MIT static dataflow machine, specification of a graphical language for expressing distributed computations, and specification of an expert system for aiding in grid generation for two-dimensional flow problems were initiated. Research projects for 1984 and 1985 are summarized.

  18. The Potential of the Cell Processor for Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel; Shalf, John; Oliker, Leonid; Husbands, Parry; Kamil, Shoaib; Yelick, Katherine

    2005-10-14

    The slowing pace of commodity microprocessor performance improvements, combined with ever-increasing chip power demands, has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the forthcoming STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. We are the first to present quantitative Cell performance data on scientific kernels and show direct comparisons against leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1) architectures. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop both analytical models and simulators to predict kernel performance. Our work also explores the complexity of mapping several important scientific algorithms onto the Cell's unique architecture. Additionally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  19. Scientific opportunities with advanced facilities for neutron scattering

    Energy Technology Data Exchange (ETDEWEB)

    Lander, G.H.; Emery, V.J. (eds.)

    1984-01-01

    The present report documents deliberations of a large group of experts in neutron scattering and fundamental physics on the need for new neutron sources of greater intensity and more sophisticated instrumentation than those currently available. An additional aspect of the Workshop was a comparison between steady-state (reactor) and pulsed (spallation) sources. The main conclusions were: (1) the case for a new higher flux neutron source is extremely strong and such a facility will lead to qualitatively new advances in condensed matter science and fundamental physics; (2) to a large extent the future needs of the scientific community could be met with either a 5 x 10^15 n cm^-2 s^-1 steady state source or a 10^17 n cm^-2 s^-1 peak flux spallation source; and (3) the findings of this Workshop are consistent with the recommendations of the Major Materials Facilities Committee.

  20. Research on Scientific Data Sharing and Distribution Policy in Advanced Manufacturing and Automation Fields

    OpenAIRE

    Li, Liya; Song, Yang; Guo, Qiumei

    2007-01-01

    Scientific data sharing is a long-term and complicated task. The related data sharing and distribution policies are prime concerns. Drawing on both domestic and international experience in scientific data sharing, the sources, distribution, and classification of scientific data in advanced manufacturing and automation are discussed. A primary data sharing and distribution policy for advanced manufacturing and automation is introduced.

  1. Technologies for Large Data Management in Scientific Computing

    CERN Document Server

    Pace, A

    2014-01-01

    In recent years, intense use of computing has been the main strategy of investigation in several scientific research projects. The progress in computing technology has opened unprecedented opportunities for systematic collection of experimental data and the associated analysis that were considered impossible only a few years ago. This paper focuses on the strategies in use: it reviews the various components that are necessary for an effective solution that ensures the storage, the long-term preservation, and the worldwide distribution of the large quantities of data that are necessary in a large scientific research project. The paper also mentions several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that need to be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.

  2. Extending scientific computing system with structural quantum programming capabilities

    OpenAIRE

    Gawron, P.; Klamka, J.; Miszczak, J. A.; Winiarczyk, R.

    2010-01-01

    We present basic high-level structures used for developing quantum programming languages. The presented structures are commonly used in many existing quantum programming languages, and we use quantum pseudo-code based on the QCL quantum programming language to describe them. We also present the implementation of the introduced structures in the GNU Octave language for scientific computing. Procedures used in the implementation are available as a package quantum-octave, providing a library of functions, ...
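
    The quantum-octave package itself is not reproduced here. As an illustration of the kind of high-level structure such packages expose (register creation, gate application, measurement), the following toy statevector sketch may help; all names are hypothetical and do not correspond to the quantum-octave API.

```python
# Toy statevector sketch of the high-level structures such packages provide.
# This is NOT the quantum-octave API; names and behaviour are illustrative only.
import numpy as np

def new_register(n_qubits):
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[0] = 1.0                                 # start in |00...0>
    return state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard gate

def apply_single_qubit_gate(state, gate, target, n_qubits):
    # Build the full operator as a Kronecker product over all qubits
    # (qubit 0 taken as the most significant factor).
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

reg = new_register(2)
reg = apply_single_qubit_gate(reg, H, target=0, n_qubits=2)
print(np.abs(reg) ** 2)   # measurement probabilities: 0.5 on |00>, 0.5 on |10>
```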

  3. Computer simulations and the changing face of scientific experimentation

    CERN Document Server

    Duran, Juan M

    2013-01-01

    Computer simulations have become a central tool for scientific practice. Their use has replaced, in many cases, standard experimental procedures. This is not to mention cases where the target system is empirical but there are no techniques for direct manipulation of the system, such as astronomical observation. In these cases, computer simulations have proved to be of central importance. The question about their use and implementation, therefore, is not only a technical one but represents a challenge for the humanities as well. In this volume, scientists, historians, and philosophers joi

  4. Object-Oriented Design for FDTD Visual Scientific Computing

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A scheme for general-purpose FDTD visual scientific computing software is introduced in this paper using the object-oriented design (OOD) method. By abstracting the parameters of FDTD grids into an individual class and separating them from the iteration procedure, the visual software can be adapted to more comprehensive computing problems. Real-time gray-scale graphics and wave curves of the results can be achieved using the DirectX technique. The special difference equation and data structure in dispersive media are considered, and the peculiarities of the parameters in the perfectly matched layer are also discussed.
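
    A minimal sketch of the design idea described above, assuming a normalized 1-D free-space Yee scheme: the grid parameters live in their own class, separated from the update (iteration) procedure. Class and method names are illustrative, not taken from the paper.

```python
# Minimal sketch: grid parameters abstracted into their own class, kept
# separate from the FDTD update loop. 1-D free-space Yee scheme in
# normalized units; names are illustrative, not from the paper.
import numpy as np

class FDTDGrid:
    """Holds grid geometry, material parameters, and field arrays only."""
    def __init__(self, n_cells, dx, courant=0.5):
        self.n = n_cells
        self.dx = dx
        self.courant = courant          # Courant number S = c*dt/dx
        self.ez = np.zeros(n_cells)     # electric field
        self.hy = np.zeros(n_cells)     # magnetic field

def step(grid):
    """One normalized 1-D update; knows nothing about how the grid was built."""
    grid.hy[:-1] += grid.courant * (grid.ez[1:] - grid.ez[:-1])
    grid.ez[1:]  += grid.courant * (grid.hy[1:] - grid.hy[:-1])

grid = FDTDGrid(n_cells=200, dx=1e-3)
for t in range(100):
    grid.ez[100] += np.exp(-((t - 30) / 10.0) ** 2)   # soft Gaussian source
    step(grid)
print(grid.ez.max())
```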

  5. Small Explorer for Advanced Missions - cubesat for scientific mission

    Science.gov (United States)

    Pronenko, Vira; Ivchenko, Nickolay

    2015-04-01

    A class of nanosatellites is defined by the cubesat standard, primarily setting the interface to the launcher, which allows standardizing cubesat preparation and launch, thus making the projects more affordable. The majority of cubesats launched so far have been demonstration or educational missions. For scientific and other advanced missions to fully realize the potential offered by low-cost nanosatellites, there are challenges related to limitations of the existing cubesat platforms and to the availability of small yet sufficiently sensitive sensors. The new project SEAM (Small Explorer for Advanced Missions) was selected for realization within the FP7 European programme to develop a set of improved critical subsystems and to construct a prototype nanosatellite in the 3U cubesat envelope for electromagnetic measurements in low Earth orbit. The SEAM consortium will develop and demonstrate in flight for the first time the concept of an electromagnetically clean nanosatellite with precision attitude determination, a flexible autonomous data acquisition system, high-bandwidth telemetry and an integrated solution for ground control and data handling. As the first demonstration, the satellite is planned to perform the Space Weather (SW) mission using novel miniature electric and magnetic sensors, able to provide science-grade measurements. To enable sensitive magnetic measurements onboard, the sensors must be deployed on booms to bring them away from the spacecraft body. Other thorough yet efficient procedures will also be developed to provide electromagnetic cleanliness (EMC) of the spacecraft. This work is supported by the EC Framework 7 funded project 607197.

  6. Advances in Computer, Communication, Control and Automation

    CERN Document Server

    2011 International Conference on Computer, Communication, Control and Automation

    2012-01-01

    The volume includes a set of selected papers extended and revised from the 2011 International Conference on Computer, Communication, Control and Automation (3CA 2011), which was held in Zhuhai, China, November 19-20, 2011. Topics covered in this volume include signal and image processing, speech and audio processing, video processing and analysis, artificial intelligence, computing and intelligent systems, machine learning, sensor and neural networks, knowledge discovery and data mining, fuzzy mathematics and applications, knowledge-based systems, hybrid systems modeling and design, risk analysis and management, and system modeling and simulation. We hope that researchers, graduate students and other interested readers benefit scientifically from the proceedings and also find them stimulating in the process.

  7. Advances in computers improving the web

    CERN Document Server

    Zelkowitz, Marvin

    2010-01-01

    This is volume 78 of Advances in Computers. This series, which began publication in 1960, is the oldest continuously published anthology that chronicles the ever-changing information technology field. In these volumes we publish from 5 to 7 chapters, three times per year, that cover the latest changes to the design, development, use and implications of computer technology on society today. Covers the full breadth of innovations in hardware, software, theory, design, and applications. Many of the in-depth reviews have become standard references that continue to be of significant, lasting value i

  8. Advanced computational approaches to biomedical engineering

    CERN Document Server

    Saha, Punam K; Basu, Subhadip

    2014-01-01

    There has been rapid growth in biomedical engineering in recent decades, given advancements in medical imaging and physiological modelling and sensing systems, coupled with immense growth in computational and network technology, analytic approaches, visualization and virtual-reality, man-machine interaction and automation. Biomedical engineering involves applying engineering principles to the medical and biological sciences and it comprises several topics including biomedicine, medical imaging, physiological modelling and sensing, instrumentation, real-time systems, automation and control, sig

  9. Advanced Test Reactor National Scientific User Facility Partnerships

    Energy Technology Data Exchange (ETDEWEB)

    Frances M. Marshall; Todd R. Allen; Jeff B. Benson; James I. Cole; Mary Catherine Thelen

    2012-03-01

    In 2007, the United States Department of Energy designated the Advanced Test Reactor (ATR), located at Idaho National Laboratory, as a National Scientific User Facility (NSUF). This designation made test space within the ATR and post-irradiation examination (PIE) equipment at INL available for use by researchers via a proposal and peer review process. The goal of the ATR NSUF is to provide researchers who have the best ideas with access to the most advanced test capability, regardless of the proposer's physical location. Since 2007, the ATR NSUF has expanded its available reactor test space, and obtained access to additional PIE equipment. Recognizing that INL may not have all the desired PIE equipment, or that some equipment may become oversubscribed, the ATR NSUF established a Partnership Program. This program enables and facilitates user access to several university and national laboratories. So far, seven universities and one national laboratory have been added to the ATR NSUF with capability that includes reactor-testing space, PIE equipment, and ion beam irradiation facilities. With the addition of these universities, irradiation can occur in multiple reactors and post-irradiation exams can be performed at multiple universities. In each case, the choice of facilities is based on the user's technical needs. Universities and laboratories included in the ATR NSUF partnership program are as follows: (1) Nuclear Services Laboratories at North Carolina State University; (2) PULSTAR Reactor Facility at North Carolina State University; (3) Michigan Ion Beam Laboratory (1.7 MV Tandetron accelerator) at the University of Michigan; (4) Irradiated Materials at the University of Michigan; (5) Harry Reid Center Radiochemistry Laboratories at University of Nevada, Las Vegas; (6) Characterization Laboratory for Irradiated Materials at the University of Wisconsin-Madison; (7) Tandem Accelerator Ion Beam (1.7 MV terminal voltage tandem ion accelerator) at the University of

  10. Advanced Test Reactor National Scientific User Facility Partnerships

    International Nuclear Information System (INIS)

    In 2007, the United States Department of Energy designated the Advanced Test Reactor (ATR), located at Idaho National Laboratory, as a National Scientific User Facility (NSUF). This designation made test space within the ATR and post-irradiation examination (PIE) equipment at INL available for use by researchers via a proposal and peer review process. The goal of the ATR NSUF is to provide researchers who have the best ideas with access to the most advanced test capability, regardless of the proposer's physical location. Since 2007, the ATR NSUF has expanded its available reactor test space, and obtained access to additional PIE equipment. Recognizing that INL may not have all the desired PIE equipment, or that some equipment may become oversubscribed, the ATR NSUF established a Partnership Program. This program enables and facilitates user access to several university and national laboratories. So far, seven universities and one national laboratory have been added to the ATR NSUF with capability that includes reactor-testing space, PIE equipment, and ion beam irradiation facilities. With the addition of these universities, irradiation can occur in multiple reactors and post-irradiation exams can be performed at multiple universities. In each case, the choice of facilities is based on the user's technical needs. Universities and laboratories included in the ATR NSUF partnership program are as follows: (1) Nuclear Services Laboratories at North Carolina State University; (2) PULSTAR Reactor Facility at North Carolina State University; (3) Michigan Ion Beam Laboratory (1.7 MV Tandetron accelerator) at the University of Michigan; (4) Irradiated Materials at the University of Michigan; (5) Harry Reid Center Radiochemistry Laboratories at University of Nevada, Las Vegas; (6) Characterization Laboratory for Irradiated Materials at the University of Wisconsin-Madison; (7) Tandem Accelerator Ion Beam (1.7 MV terminal voltage tandem ion accelerator) at the University of Wisconsin

  11. 10th International Conference on Scientific Computing in Electrical Engineering

    CERN Document Server

    Clemens, Markus; Günther, Michael; Maten, E

    2016-01-01

    This book is a collection of selected papers presented at the 10th International Conference on Scientific Computing in Electrical Engineering (SCEE), held in Wuppertal, Germany in 2014. The book is divided into five parts, reflecting the main directions of SCEE 2014: 1. Device Modeling, Electric Circuits and Simulation, 2. Computational Electromagnetics, 3. Coupled Problems, 4. Model Order Reduction, and 5. Uncertainty Quantification. Each part starts with a general introduction followed by the actual papers. The aim of the SCEE 2014 conference was to bring together scientists from academia and industry, mathematicians, electrical engineers, computer scientists, and physicists, with the goal of fostering intensive discussions on industrially relevant mathematical problems, with an emphasis on the modeling and numerical simulation of electronic circuits and devices, electromagnetic fields, and coupled problems. The methodological focus was on model order reduction and uncertainty quantification.

  12. Strategic Plan for a Scientific Cloud Computing infrastructure for Europe

    CERN Document Server

    Lengert, Maryline

    2011-01-01

    Here we present the vision, concept and direction for forming a European Industrial Strategy for a Scientific Cloud Computing Infrastructure to be implemented by 2020. This will be the framework for decisions and for securing support and approval in establishing, initially, an R&D European Cloud Computing Infrastructure that serves the needs of the European Research Area (ERA) and Space Agencies. This Cloud Infrastructure will have the potential, beyond this initial user base, to evolve to provide similar services to a broad range of customers including government and SMEs. We explain how this plan aims to support the broader strategic goals of our organisations and identify the benefits to be realised by adopting an industrial Cloud Computing model. We also outline the prerequisites and commitment needed to achieve these objectives.

  13. Research Institute for Advanced Computer Science

    Science.gov (United States)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2000-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a

  14. Comparison of Scientific Calipers and Computer-Enabled CT Review for the Measurement of Skull Base and Craniomaxillofacial Dimensions

    OpenAIRE

    Citardi, Martin J; Herrmann, Brian; Hollenbeak, Chris S.; Stack, Brendan C.; Cooper, Margaret; Bucholz, Richard D.

    2001-01-01

    Traditionally, cadaveric studies and plain-film cephalometrics provided information about craniomaxillofacial proportions and measurements; however, advances in computer technology now permit software-based review of computed tomography (CT)-based models. Distances between standardized anatomic points were measured on five dried human skulls with standard scientific calipers (Geneva Gauge, Albany, NY) and through a computer workstation (StealthStation 2.6.4, Medtronic Surgical Navigation Techno...

  15. InSAR Scientific Computing Environment on the Cloud

    Science.gov (United States)

    Rosen, P. A.; Shams, K. S.; Gurrola, E. M.; George, B. A.; Knight, D. S.

    2012-12-01

    In response to the needs of the international scientific and operational Earth observation communities, spaceborne Synthetic Aperture Radar (SAR) systems are being tasked to produce enormous volumes of raw data daily, with availability to scientists expected to increase substantially as more satellites come online and data becomes more accessible through more open data policies. The availability of these unprecedentedly dense and rich datasets has led to the development of sophisticated algorithms that can take advantage of them. In particular, interferometric time series analysis of SAR data provides insights into the changing Earth and requires substantial computational power to process data across large regions and over large time periods. This poses challenges for existing infrastructure, software, and techniques required to process, store, and deliver the results to the global community of scientists. The current state-of-the-art solutions employ traditional data storage and processing applications that require download of data to the local repositories before processing. This approach is becoming untenable in light of the enormous volume of data that must be processed in an iterative and collaborative manner. We have analyzed and tested new cloud computing and virtualization approaches to address these challenges within the context of InSAR in the earth science community. Cloud computing is democratizing computational and storage capabilities for science users across the world. The NASA Jet Propulsion Laboratory has been an early adopter of this technology, successfully integrating cloud computing in a variety of production applications ranging from mission operations to downlink data processing. We have ported a new InSAR processing suite called ISCE (InSAR Scientific Computing Environment) to a scalable distributed system running in the Amazon GovCloud to demonstrate the efficacy of cloud computing for this application. We have integrated ISCE with Polyphony to

  16. Phantom limbs: pain, embodiment, and scientific advances in integrative therapies.

    Science.gov (United States)

    Lenggenhager, Bigna; Arnold, Carolyn A; Giummarra, Melita J

    2014-03-01

    Research over the past two decades has begun to identify some of the key mechanisms underlying phantom limb pain and sensations; however, this continues to be a clinically challenging condition to manage. Treatment of phantom pain, like all chronic pain conditions, demands a holistic approach that takes into consideration peripheral, spinal, and central neuroplastic mechanisms. In this review, we focus on nonpharmacological treatments tailored to reverse the maladaptive neuroplasticity associated with phantom pain. Recent scientific advances emerging from interdisciplinary research between neuroscience, virtual reality, robotics, and prosthetics show the greatest promise for alternative embodiment and maintaining the integrity of the multifaceted representation of the body in the brain. Importantly, these advances have been found to prevent and reduce phantom limb pain. In particular, therapies that involve sensory and/or motor retraining, most naturally through the use of integrative prosthetic devices, as well as peripheral (e.g., transcutaneous electrical nerve stimulation) or central (e.g., transcranial magnetic stimulation or deep brain stimulation) stimulation techniques, have been found to both restore the neural representation of the missing limb and to reduce the intensity of phantom pain. While the evidence for the efficacy of these therapies is mounting, well-controlled and large-scale studies are still needed. WIREs Cogn Sci 2014, 5:221-231. doi: 10.1002/wcs.1277 PMID:26304309

  17. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand the role of computation in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages, and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, computer science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  18. Java Performance for Scientific Applications on LLNL Computer Systems

    Energy Technology Data Exchange (ETDEWEB)

    Kapfer, C; Wissink, A

    2002-05-10

    Languages in use for high performance computing at the laboratory--Fortran (f77 and f90), C, and C++--have many years of development behind them and are generally considered the fastest available. However, Fortran and C do not readily extend to object-oriented programming models, limiting their capability for very complex simulation software. C++ facilitates object-oriented programming but is a very complex and error-prone language. Java offers a number of capabilities that these other languages do not. For instance it implements cleaner (i.e., easier to use and less prone to errors) object-oriented models than C++. It also offers networking and security as part of the language standard, and cross-platform executables that make it architecture neutral, to name a few. These features have made Java very popular for industrial computing applications. The aim of this paper is to explain the trade-offs in using Java for large-scale scientific applications at LLNL. Despite its advantages, the computational science community has been reluctant to write large-scale computationally intensive applications in Java due to concerns over its poor performance. However, considerable progress has been made over the last several years. The Java Grande Forum [1] has been promoting the use of Java for large-scale computing. Members have introduced efficient array libraries, developed fast just-in-time (JIT) compilers, and built links to existing packages used in high performance parallel computing.

  19. Effective use of multicore-based parallel computers for scientific computing

    OpenAIRE

    2012-01-01

    This thesis studies how the multi-core hardware architecture can be efficiently used for real-world scientific applications that arise from computational cardiology and computational geoscience. The investigation has been carried out from different angles: numerical algorithms, parallel programming and performance modeling and prediction. It is shown that high-performance implementations and optimizations must match both the underlying computations and the target parallel platform. Several go...

  20. Advanced Simulation and Computing FY17 Implementation Plan, Version 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hendrickson, Bruce [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wade, Doug [National Nuclear Security Administration (NNSA), Washington, DC (United States). Office of Advanced Simulation and Computing and Institutional Research and Development; Hoang, Thuc [National Nuclear Security Administration (NNSA), Washington, DC (United States). Computational Systems and Software Environment

    2016-08-29

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  1. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    Science.gov (United States)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to allocate resources to any application dynamically and efficiently and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  2. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    International Nuclear Information System (INIS)

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to allocate resources to any application dynamically and efficiently and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
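
    As an illustration of the integration path mentioned above (an EC2-compatible API plus CloudInit contextualization), the following hedged sketch shows what a client-side instance launch might look like. The endpoint URL, credentials, image ID and user-data are placeholders, not the INFN-Torino configuration.

```python
# Hedged sketch: launching a contextualized VM against an EC2-compatible
# endpoint (as exposed, e.g., by a private cloud's EC2 interface). The endpoint
# URL, image ID, credentials, and cloud-init user-data are placeholders.
import boto3

user_data = """#cloud-config
packages:
  - htcondor
runcmd:
  - [systemctl, start, condor]
"""

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://cloud.example.org:4567",   # placeholder EC2-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    region_name="default",
)

resp = ec2.run_instances(
    ImageId="ami-00000001",        # placeholder worker-node image
    InstanceType="m1.large",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,            # consumed by cloud-init at first boot
)
print(resp["Instances"][0]["InstanceId"])
```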

  3. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  4. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.

    2013-01-01

    As our understanding of the world around us increases it becomes more challenging to make use of what we already know, and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers and even small clusters to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for a range of high performance applications which can utilize parallel compute systems effectively, which have efficient data handling strategies and which have the capacity to utilise current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements. © 2013 Springer-Verlag.

  5. Acts -- A collection of high performing software tools for scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Drummond, L.A.; Marques, O.A.

    2002-11-01

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Further, many new discoveries depend on high performance computer simulations to satisfy their demands for large computational resources and short response time. The Advanced CompuTational Software (ACTS) Collection brings together a number of general-purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS collection promotes code portability, reusability, reduction of duplicate efforts, and tool maturity. This paper presents a brief introduction to the functionality available in ACTS. It also highlights the tools that are in demand by climate and weather modelers.

  6. The Advanced Test Reactor as a National Scientific User Facility

    International Nuclear Information System (INIS)

    The Advanced Test Reactor (ATR) has been in operation since 1967 and mainly used to support U.S. Department of Energy (US DOE) materials and fuels research programs. Irradiation capabilities of the ATR and post-irradiation examination capabilities of the Idaho National Laboratory (INL) were generally not being utilized by universities and other potential users due largely to a prohibitive pricing structure. While materials and fuels testing programs using the ATR continue to be needed for US DOE programs such as the Advanced Fuel Cycle Initiative and Next Generation Nuclear Plant, US DOE recognized there was a national need to make these capabilities available to a broader user base. In April 2007, the U.S. Department of Energy designated the Advanced Test Reactor (ATR) as a National Scientific User Facility (NSUF). As a NSUF, most of the services associated with university experiment irradiation and post-irradiation examinations are provided free-of-charge. The US DOE is providing these services to support U.S. leadership in nuclear science, technology, and education and to encourage active university/industry/laboratory collaboration. The first full year of implementing the user facility concept was 2008 and it was a very successful year. The first university experiment pilot project was developed in collaboration with the University of Wisconsin and began irradiation in the ATR in 2008. Lessons learned from this pilot program will be applied to future NSUF projects. Five other university experiments were also competitively selected in March 2008 from the initial solicitation for proposals. The NSUF now has a continually open process where universities can submit proposals as they are ready. Plans are to invest in new and upgraded capabilities at the ATR, post-irradiation examination capabilities at the INL, and in a new experiment assembly facility to further support the implementation of the user facility concept. Through a newly created Partnership Program

  7. The Advanced Test Reactor National Scientific User Facility

    Energy Technology Data Exchange (ETDEWEB)

    Todd R. Allen; Collin J. Knight; Jeff B. Benson; Frances M. Marshall; Mitchell K. Meyer; Mary Catherine Thelen

    2011-08-01

    In 2007, the Advanced Test Reactor (ATR), located at Idaho National Laboratory (INL), was designated by the Department of Energy (DOE) as a National Scientific User Facility (NSUF). This designation made test space within the ATR and post-irradiation examination (PIE) equipment at INL available for use by approved researchers via a proposal and peer review process. The goal of the ATR NSUF is to provide those researchers who have the best ideas with access to the most advanced test capability, regardless of the proposer’s physical location. Since 2007, the ATR NSUF has expanded its available reactor test space, obtained access to additional PIE equipment, taken steps to enable the most advanced post-irradiation analysis possible, and initiated an educational program and digital learning library to help potential users better understand the critical issues in reactor technology and how a test reactor facility could be used to address this critical research. Recognizing that INL may not have all the desired PIE equipment, or that some equipment may become oversubscribed, the ATR NSUF established a Partnership Program. This program invited universities to nominate their capability to become part of a broader user facility. Any university is eligible to self-nominate. Any nomination is then peer reviewed to ensure that the addition of the university facilities adds useful capability to the NSUF. Once added to the NSUF team, the university capability is then integral to the NSUF operations and is available to all users via the proposal process. So far, six universities have been added to the ATR NSUF with capability that includes reactor-testing space, PIE equipment, and ion beam irradiation facilities. With the addition of these university capabilities, irradiation can occur in multiple reactors and post-irradiation exams can be performed at multiple universities. In each case, the choice of facilities is based on the user’s technical needs. The current NSUF partners are

  8. Advanced I/O for large-scale scientific applications.

    Energy Technology Data Exchange (ETDEWEB)

    Klasky, Scott (Oak Ridge National Laboratory, Oak Ridge, TN); Schwan, Karsten (Georgia Institute of Technology, Atlanta, GA); Oldfield, Ron A.; Lofstead, Gerald F., II (Georgia Institute of Technology, Atlanta, GA)

    2010-01-01

    As scientific simulations scale to use petascale machines and beyond, the data volumes generated pose a dual problem. First, with increasing machine sizes, the careful tuning of IO routines becomes more and more important to keep the time spent in IO acceptable. It is not uncommon, for instance, to have 20% of an application's runtime spent performing IO in a 'tuned' system. Careful management of the IO routines can move that to 5% or even less in some cases. Second, the data volumes are so large, on the order of 10s to 100s of TB, that trying to discover the scientifically valid contributions requires assistance at runtime to both organize and annotate the data. Waiting for offline processing is not feasible due both to the impact on the IO system and the time required. To reduce this load and improve the ability of scientists to use the large amounts of data being produced, new techniques for data management are required. First, there is a need for techniques for efficient movement of data from the compute space to storage. These techniques should understand the underlying system infrastructure and adapt to changing system conditions. Technologies include aggregation networks, data staging nodes for a closer parity to the IO subsystem, and autonomic IO routines that can detect system bottlenecks and choose different approaches, such as splitting the output into multiple targets, staggering output processes. Such methods must be end-to-end, meaning that even with properly managed asynchronous techniques, it is still essential to properly manage the later synchronous interaction with the storage system to maintain acceptable performance. Second, for the data being generated, annotations and other metadata must be incorporated to help the scientist understand output data for the simulation run as a whole, to select data and data features without concern for what files or other storage technologies were employed. All of these features should be
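
    One of the techniques named above, aggregating output from many processes onto a smaller set of writer processes before touching the file system, can be sketched as follows. The aggregation ratio and file layout are illustrative assumptions, not the paper's actual middleware.

```python
# Hedged sketch: aggregate output from many MPI ranks onto a few writer ranks
# so that only the writers perform file I/O. Ratios, sizes, and file names are
# illustrative; this is not the I/O middleware described in the record.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

RANKS_PER_WRITER = 4                        # assumed aggregation ratio
writer = (rank // RANKS_PER_WRITER) * RANKS_PER_WRITER
group = comm.Split(writer, rank)            # one sub-communicator per writer group

local = np.full(1000, rank, dtype=np.float64)   # this rank's field data
gathered = group.gather(local, root=0)          # aggregate onto the group's writer

if group.Get_rank() == 0:
    # Only the writer ranks touch storage, reducing file-system pressure.
    block = np.concatenate(gathered)
    np.save(f"output_writer_{writer}.npy", block)
```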

  9. AVES: A Computer Cluster System approach for INTEGRAL Scientific Analysis

    Science.gov (United States)

    Federici, M.; Martino, B. L.; Natalucci, L.; Umbertini, P.

    The AVES computing system, based on a "Cluster" architecture, is a fully integrated, low-cost computing facility dedicated to the archiving and analysis of INTEGRAL data. AVES is a modular system that uses the SLURM software resource manager and allows almost unlimited expandability (65,536 nodes and hundreds of thousands of processors); it is currently composed of 30 personal computers with quad-core CPUs, able to reach a computing power of 300 gigaflops (300 x 10^9 floating-point operations per second), with 120 GB of RAM and 7.5 terabytes (TB) of storage in UFS configuration plus 6 TB for the users area. AVES was designed and built to solve the growing problems raised by the analysis of the large amount of data accumulated by the INTEGRAL mission (currently about 9 TB), which increases every year. The analysis software used is the OSA package, distributed by the ISDC in Geneva. This is a very complex package consisting of dozens of programs that cannot be converted to parallel computing. To overcome this limitation we developed a series of programs to distribute the analysis workload across the various nodes, making AVES automatically divide the analysis into N jobs sent to N cores. This solution thus produces a result similar to that obtained with a parallel computing configuration. In support of this we have developed tools that allow flexible use of the scientific software and quality control of on-line data storage. The AVES software package consists of about 50 specific programs. Thus the overall computing time, compared to that of a single-processor personal computer, has been improved by up to a factor of 70.
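
    A hedged sketch of the kind of job-splitting wrapper described above: the analysis is divided into N chunks, each submitted as a SLURM job. The analysis command, paths and resource options are placeholders, not the actual AVES tools.

```python
# Hedged sketch of splitting an analysis into N SLURM jobs. The command
# run_osa_analysis, the paths, and the resource options are placeholders,
# not the AVES programs themselves.
import subprocess

def submit_chunk(chunk_id, scw_list):
    script = f"""#!/bin/bash
#SBATCH --job-name=osa_chunk_{chunk_id}
#SBATCH --ntasks=1
#SBATCH --output=logs/chunk_{chunk_id}.log
run_osa_analysis --scw-list {scw_list} --outdir results/chunk_{chunk_id}
"""
    # Feed the generated job script to sbatch on stdin.
    subprocess.run(["sbatch"], input=script, text=True, check=True)

science_windows = [f"scw_{i:04d}" for i in range(120)]   # illustrative work units
n_jobs = 30
chunk_size = len(science_windows) // n_jobs
for j in range(n_jobs):
    chunk = science_windows[j * chunk_size:(j + 1) * chunk_size]
    submit_chunk(j, ",".join(chunk))
```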

  10. FPGA Based Quadruple Precision Floating Point Arithmetic for Scientific Computations

    Directory of Open Access Journals (Sweden)

    Mamidi Nagaraju

    2012-09-01

    In this project we explore the capability and flexibility of FPGA solutions for accelerating scientific computing applications that require very high precision arithmetic, based on the IEEE 754 standard 128-bit floating-point number representation. Field Programmable Gate Arrays (FPGAs) are increasingly being used to design high-end, computationally intense microprocessors capable of handling floating-point mathematical operations. Quadruple precision floating-point arithmetic is important in computational fluid dynamics and physical modelling, which require accurate numerical computations. However, modern computers perform binary arithmetic, which introduces representation and rounding errors. As the demand for quadruple precision floating-point arithmetic is predicted to grow, the IEEE 754 Standard for Floating-Point Arithmetic includes specifications for it. We implement a quadruple precision floating-point arithmetic unit for all the common operations, i.e. addition, subtraction, multiplication and division. While previous work has considered circuits for lower precision floating-point formats, we consider the implementation of 128-bit quadruple precision circuits. The project provides the arithmetic operations, simulation results, and hardware design, with input via a PS/2 keyboard interface and results displayed on an LCD, using a Xilinx Virtex-5 (XC5VLX110T-FF1136) FPGA device.
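
    For reference, IEEE 754 binary128 packs a value into 1 sign bit, 15 exponent bits (bias 16383) and 112 fraction bits. The following worked example decodes such a bit pattern in software; it only illustrates the format and is unrelated to the FPGA design itself (infinities and NaNs are omitted for brevity).

```python
# Worked example: decoding an IEEE 754 binary128 (quadruple precision) pattern:
# 1 sign bit, 15 exponent bits (bias 16383), 112 fraction bits.
# Pure-software illustration of the format, not the FPGA arithmetic unit.
from fractions import Fraction

def decode_binary128(bits):
    sign = (bits >> 127) & 0x1
    exponent = (bits >> 112) & 0x7FFF
    fraction = bits & ((1 << 112) - 1)
    if exponent == 0:                                   # subnormal numbers and zero
        value = Fraction(fraction, 1 << 112) * Fraction(2) ** (1 - 16383)
    else:                                               # normal numbers
        value = (1 + Fraction(fraction, 1 << 112)) * Fraction(2) ** (exponent - 16383)
    return -value if sign else value

# 1.0 in binary128: sign 0, biased exponent 16383, fraction 0.
print(decode_binary128(16383 << 112))   # -> 1
# 0.5 in binary128: biased exponent 16382, fraction 0.
print(decode_binary128(16382 << 112))   # -> 1/2
```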

  11. Molecular Science Computing Facility Scientific Challenges: Linking Across Scales

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.; Windus, Theresa L.

    2005-07-01

    The purpose of this document is to define the evolving science drivers for performing environmental molecular research at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and to provide guidance associated with the next-generation high-performance computing center that must be developed at EMSL's Molecular Science Computing Facility (MSCF) in order to address this critical research. The MSCF is the pre-eminent computing facility, supported by the U.S. Department of Energy's (DOE's) Office of Biological and Environmental Research (BER), tailored to provide the fastest time-to-solution for current computational challenges in chemistry and biology, as well as providing the means for broad research in the molecular and environmental sciences. The MSCF provides integral resources and expertise to emerging EMSL Scientific Grand Challenges and Collaborative Access Teams that are designed to leverage the multiple integrated research capabilities of EMSL, thereby creating a synergy between computation and experiment to address environmental molecular science challenges critical to DOE and the nation.

  12. Computer-assisted estimating for the Los Alamos Scientific Laboratory

    International Nuclear Information System (INIS)

    An analysis is made of the cost estimating system currently in use at the Los Alamos Scientific Laboratory (LASL) and the benefits of computer assistance are evaluated. A computer-assisted estimating system (CAE) is proposed for LASL. CAE can decrease turnaround and provide more flexible response to management requests for cost information and analyses. It can enhance value optimization at the design stage, improve cost control and change-order justification, and widen the use of cost information in the design process. CAE costs are not well defined at this time although they appear to break even with present operations. It is recommended that a CAE system description be submitted for contractor consideration and bid while LASL system development continues concurrently

  13. Computer-assisted estimating for the Los Alamos Scientific Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Spooner, J.E.

    1976-02-01

    An analysis is made of the cost estimating system currently in use at the Los Alamos Scientific Laboratory (LASL) and the benefits of computer assistance are evaluated. A computer-assisted estimating system (CAE) is proposed for LASL. CAE can decrease turnaround and provide more flexible response to management requests for cost information and analyses. It can enhance value optimization at the design stage, improve cost control and change-order justification, and widen the use of cost information in the design process. CAE costs are not well defined at this time although they appear to break even with present operations. It is recommended that a CAE system description be submitted for contractor consideration and bid while LASL system development continues concurrently.

  14. The advanced test reactor national scientific user facility: advancing nuclear technology education

    International Nuclear Information System (INIS)

    To help ensure the long-term viability of nuclear energy through a robust and sustained research and development effort, the U.S. Department of Energy designated the Idaho National Laboratory (INL) Advanced Test Reactor and associated post-irradiation examination facilities a National Scientific User Facility (ATR NSUF), allowing broader access to nuclear energy researchers. The ATR NSUF provides education programs including a Users Week, internships, faculty student team projects and faculty/staff exchanges. In addition, the ATR NSUF seeks to form strategic partnerships with university facilities that add significant nuclear research capability to the ATR NSUF and are accessible to all ATR NSUF users. (author)

  15. Advances of evolutionary computation methods and operators

    CERN Document Server

    Cuevas, Erik; Oliva Navarro, Diego Alberto

    2016-01-01

    The goal of this book is to present advances that discuss alternative Evolutionary Computation (EC) developments and non-conventional operators which have proved to be effective in the solution of several complex problems. The book has been structured so that each chapter can be read independently from the others. The book contains nine chapters with the following themes: 1) Introduction, 2) the Social Spider Optimization (SSO), 3) the States of Matter Search (SMS), 4) the collective animal behavior (CAB) algorithm, 5) the Allostatic Optimization (AO) method, 6) the Locust Search (LS) algorithm, 7) the Adaptive Population with Reduced Evaluations (APRE) method, 8) the multimodal CAB, 9) the constrained SSO method.

  16. Computational Design of Advanced Nuclear Fuels

    Energy Technology Data Exchange (ETDEWEB)

    Savrasov, Sergey [Univ. of California, Davis, CA (United States); Kotliar, Gabriel [Rutgers Univ., Piscataway, NJ (United States); Haule, Kristjan [Rutgers Univ., Piscataway, NJ (United States)

    2014-06-03

    The objective of the project was to develop a method for theoretical understanding of nuclear fuel materials whose physical and thermophysical properties can be predicted from first principles using a novel dynamical mean field method for electronic structure calculations. We concentrated our study on uranium, plutonium, and their oxides, nitrides, and carbides, as well as some rare earth materials whose 4f electrons provide a simplified framework for understanding complex behavior of the f electrons. We addressed the issues connected to the electronic structure, lattice instabilities, phonon and magnon dynamics, as well as thermal conductivity. This allowed us to evaluate characteristics of advanced nuclear fuel systems using computer-based simulations and avoid costly experiments.

  17. International Conference on Computers and Advanced Technology in Education

    CERN Document Server

    Advanced Information Technology in Education

    2012-01-01

    The volume includes a set of selected papers extended and revised from the 2011 International Conference on Computers and Advanced Technology in Education. With the development of computers and advanced technology, human social activities are changing fundamentally. Education, and especially education reform in different countries, has benefited greatly from computers and advanced technology. Generally speaking, education is a field that needs ever more information, while computers, advanced technology and the internet are good information providers. Also, with the aid of computers and advanced technology, education can be combined effectively with these tools. Therefore, computers and advanced technology should be regarded as an important medium in modern education. The volume Advanced Information Technology in Education is to provide a forum for researchers, educators, engineers, and government officials involved in the general areas of computers and advanced technology in education to d...

  18. Domain analysis of computational science - Fifty years of a scientific computing group

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, M.

    2010-02-23

    I employed bibliometric and historical methods to study the domain of the Scientific Computing group at Brookhaven National Laboratory (BNL) for an extended period of fifty years, from 1958 to 2007. I noted and confirmed the growing emergence of interdisciplinarity within the group. I also identified a strong, consistent mathematics and physics orientation within it.

  19. Carbon Nanotube Computer: Transforming Scientific Discoveries into Working Systems

    Science.gov (United States)

    Mitra, Subhasish

    2014-03-01

    The miniaturization of electronic devices has been the principal driving force behind the semiconductor industry, and has brought about major improvements in computational power and energy efficiency. Although advances with silicon-based electronics continue to be made, alternative technologies are being explored. Digital circuits based on transistors fabricated from carbon nanotubes (CNTs) have the potential to outperform silicon by improving the energy-delay product, a metric of energy efficiency, by more than an order of magnitude. Hence, CNTs are an exciting complement to existing semiconductor technologies. However, CNTs are subject to substantial inherent imperfections that pose major obstacles to the design of robust and very large-scale carbon nanotube field-effect transistor (CNFET) digital systems: (i) It is nearly impossible to guarantee perfect alignment and positioning of all CNTs. This limitation introduces stray conducting paths, resulting in incorrect circuit functionality. (ii) CNTs can be metallic or semiconducting depending on chirality. Metallic CNTs cause shorts resulting in excessive leakage and incorrect circuit functionality. A combination of design and processing techniques overcomes these challenges by creating robust CNFET digital circuits that are immune to these inherent imperfections. This imperfection-immune design paradigm enables the first experimental demonstration of the carbon nanotube computer, and, more generally, arbitrary digital systems that can be built using CNFETs. The CNT computer is capable of performing multitasking: as a demonstration, we perform counting and integer-sorting simultaneously. In addition, we emulate 20 different instructions from the commercial MIPS instruction set to demonstrate the generality of our CNT computer. This is the most complex carbon-based electronic system yet demonstrated. It is a considerable advance because CNTs are prominent among a variety of emerging technologies that are being considered for the next

  20. The graphics future in scientific applications-trends and developments in computer graphics

    CERN Document Server

    Enderle, G

    1982-01-01

    Computer graphics methods and tools are being used to a great extent in scientific research. The future development in this area will be influenced both by new hardware developments and by software advances. On the hardware sector, the development of the raster technology will lead to the increased use of colour workstations with more local processing power. Colour hardcopy devices for creating plots, slides, or movies will be available at a lower price than today. The first real 3D-workstations will appear on the marketplace. One of the main activities on the software sector is the standardization of computer graphics systems, graphical files, and device interfaces. This will lead to more portable graphical application programs and to a common base for computer graphics education.

  1. Institute for scientific computing research;fiscal year 1999 annual report

    Energy Technology Data Exchange (ETDEWEB)

    Keyes, D

    2000-03-28

    Large-scale scientific computation, and all of the disciplines that support it and help to validate it, have been placed at the focus of Lawrence Livermore National Laboratory by the Accelerated Strategic Computing Initiative (ASCI). The Laboratory operates the computer with the highest peak performance in the world and has undertaken some of the largest and most compute-intensive simulations ever performed. Computers at the architectural extremes, however, are notoriously difficult to use efficiently. Even such successes as the Laboratory's two Bell Prizes awarded in November 1999 only emphasize the need for much better ways of interacting with the results of large-scale simulations. Advances in scientific computing research have, therefore, never been more vital to the core missions of the Laboratory than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, the Laboratory must engage researchers at many academic centers of excellence. In FY 1999, the Institute for Scientific Computing Research (ISCR) has expanded the Laboratory's bridge to the academic community in the form of collaborative subcontracts, visiting faculty, student internships, a workshop, and a very active seminar series. ISCR research participants are integrated almost seamlessly with the Laboratory's Center for Applied Scientific Computing (CASC), which, in turn, addresses computational challenges arising throughout the Laboratory. Administratively, the ISCR flourishes under the Laboratory's University Relations Program (URP). Together with the other four Institutes of the URP, it must navigate a course that allows the Laboratory to benefit from academic exchanges while preserving national security. Although FY 1999 brought more than its share of challenges to the operation of an academic-like research enterprise within the context of a national security laboratory, the results declare the

  2. A data management system for engineering and scientific computing

    Science.gov (United States)

    Elliot, L.; Kunii, H. S.; Browne, J. C.

    1978-01-01

    Data elements and relationship definition capabilities for this data management system are explicitly tailored to the needs of engineering and scientific computing. System design was based upon studies of data management problems currently being handled through explicit programming. The system-defined data element types include real scalar numbers, vectors, arrays and special classes of arrays such as sparse arrays and triangular arrays. The data model is hierarchical (tree structured). Multiple views of data are provided at two levels. Subschemas provide multiple structural views of the total data base and multiple mappings for individual record types are supported through the use of a REDEFINES capability. The data definition language and the data manipulation language are designed as extensions to FORTRAN. Examples of the coding of real problems taken from existing practice in the data definition language and the data manipulation language are given.
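
    The special array classes mentioned above lend themselves to compact storage schemes. As a purely illustrative sketch in C++ (the system's actual FORTRAN-extension syntax is not reproduced here), a lower-triangular array can be held in a flat buffer with a simple index mapping:

    ```cpp
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Hypothetical illustration of "triangular array" storage: row i of a
    // lower-triangular matrix holds i+1 entries, so element (i, j) with
    // j <= i lives at flat offset i*(i+1)/2 + j and the zero upper half
    // is never stored.
    class LowerTriangular {
    public:
        explicit LowerTriangular(std::size_t n)
            : n_(n), data_(n * (n + 1) / 2, 0.0) {}

        double& at(std::size_t i, std::size_t j) {
            // Caller must ensure j <= i < n.
            return data_[i * (i + 1) / 2 + j];
        }

        std::size_t stored() const { return data_.size(); }

    private:
        std::size_t n_;
        std::vector<double> data_;
    };

    int main() {
        LowerTriangular a(4);   // a 4x4 matrix needs only 10 stored values
        a.at(2, 1) = 3.5;
        std::cout << a.at(2, 1) << " (" << a.stored() << " values stored)\n";
    }
    ```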

  3. Computer simulation, rhetoric, and the scientific imagination how virtual evidence shapes science in the making and in the news

    CERN Document Server

    Roundtree, Aimee Kendall

    2013-01-01

    Computer simulations help advance climatology, astrophysics, and other scientific disciplines. They are also at the crux of several high-profile cases of science in the news. How do simulation scientists, with little or no direct observations, make decisions about what to represent? What is the nature of simulated evidence, and how do we evaluate its strength? Aimee Kendall Roundtree suggests answers in Computer Simulation, Rhetoric, and the Scientific Imagination. She interprets simulations in the sciences by uncovering the argumentative strategies that underpin the production and disseminati

  4. Transport modeling and advanced computer techniques

    International Nuclear Information System (INIS)

    A workshop was held at the University of Texas in June 1988 to consider the current state of transport codes and whether improved user interfaces would make the codes more usable and accessible to the fusion community. Also considered was the possibility that a software standard could be devised to ease the exchange of routines between groups. It was noted that two of the major obstacles to exchanging routines now are the variety of geometrical representations and the choices of units. While the workshop formulated no standards, it was generally agreed that good software engineering would aid the exchange of routines, and that a continued exchange of ideas between groups would be worthwhile. It seems that before we begin to discuss software standards we should review the current state of computer technology, both hardware and software, to see what influence recent advances might have on our software goals. This is done in this paper.

  5. Advanced proton imaging in computed tomography

    CERN Document Server

    Mattiazzo, S; Giubilato, P; Pantano, D; Pozzobon, N; Snoeys, W; Wyss, J

    2015-01-01

    In recent years the use of hadrons for cancer radiation treatment has grown in importance, and many facilities are currently operational or under construction worldwide. To fully exploit the therapeutic advantages offered by hadron therapy, precise body imaging for accurate beam delivery is decisive. Proton computed tomography (pCT) scanners, currently in their R&D phase, provide the ultimate 3D imaging for hadron treatment guidance. A key component of a pCT scanner is the detector used to track the protons, which has a great impact on the scanner's performance and ultimately limits its maximum speed. In this article, a novel proton-tracking detector is presented that would offer higher scanning speed, better spatial resolution and a lower material budget than present state-of-the-art detectors, leading to enhanced performance. This advancement in performance is achieved by employing the very latest developments in monolithic active pixel detectors (to build high granularity, low material budget, ...

  6. Advanced Test Reactor National Scientific User Facility Progress

    Energy Technology Data Exchange (ETDEWEB)

    Frances M. Marshall; Todd R. Allen; James I. Cole; Jeff B. Benson; Mary Catherine Thelen

    2012-10-01

    The Advanced Test Reactor (ATR) at the Idaho National Laboratory (INL) is one of the world’s premier test reactors for studying the effects of intense neutron radiation on reactor materials and fuels. The ATR began operation in 1967 and has operated continuously since then, averaging approximately 250 operating days per year. The combination of high flux, large test volumes, and multiple experiment configuration options provides unique testing opportunities for nuclear fuels and materials researchers. The ATR is a pressurized, light-water moderated and cooled, beryllium-reflected, highly-enriched-uranium-fueled reactor with a maximum operating power of 250 MWth. The ATR peak thermal flux can reach 1.0 x 10^15 n/cm^2-sec, and the core configuration creates five main reactor power lobes (regions) that can be operated at different powers during the same operating cycle. In addition to the nine flux traps, there are 68 irradiation positions in the reactor core reflector tank. The test positions range from 0.5” to 5.0” in diameter and are all 48” in length, the active length of the fuel. The INL also has several hot cells and other laboratories in which irradiated material can be examined to study material radiation effects. In 2007 the US Department of Energy (DOE) designated the ATR as a National Scientific User Facility (NSUF) to facilitate greater access to the ATR and the associated INL laboratories for material testing research by a broader user community. The goals of the ATR NSUF are to define the cutting edge of nuclear technology research in high temperature and radiation environments, contribute to improved industry performance of current and future light water reactors, and stimulate cooperative research between user groups conducting basic and applied research. The ATR NSUF has developed partnerships with other universities and national laboratories to enable ATR NSUF researchers to perform research at these other facilities, when the research objectives

  7. Making Advanced Computer Science Topics More Accessible through Interactive Technologies

    Science.gov (United States)

    Shao, Kun; Maher, Peter

    2012-01-01

    Purpose: Teaching advanced technical concepts in a computer science program to students of different technical backgrounds presents many challenges. The purpose of this paper is to present a detailed experimental pedagogy in teaching advanced computer science topics, such as computer networking, telecommunications and data structures using…

  8. Advancing nuclear technology and research. The advanced test reactor national scientific user facility

    International Nuclear Information System (INIS)

    The Advanced Test Reactor (ATR), at the Idaho National Laboratory (INL), is one of the world's premier test reactors, providing the capability for studying the effects of intense neutron and gamma radiation on reactor materials and fuels. The INL also has several hot cells and other laboratories in which irradiated material can be examined to study material radiation effects. In 2007 the US Department of Energy (DOE) designated the ATR as a National Scientific User Facility (NSUF) to facilitate greater access to the ATR and the associated INL laboratories for material testing research. The mission of the ATR NSUF is to provide access to world-class facilities, thereby facilitating the advancement of nuclear science and technology. Cost-free access to the ATR, the INL post-irradiation examination facilities, and partner facilities is granted on the basis of technical merit to U.S. university-led experiment teams conducting non-proprietary research. Proposals are selected via independent technical peer review and relevance to the United States Department of Energy mission. To increase overall research capability, the ATR NSUF seeks to form strategic partnerships with university facilities that add significant nuclear research capability to the ATR NSUF and are accessible to all ATR NSUF users. (author)

  9. Advanced Test Reactor National Scientific User Facility 2010 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Mary Catherine Thelen; Todd R. Allen

    2011-05-01

    This is the 2010 ATR National Scientific User Facility Annual Report. This report provides an overview of the program for 2010, along with individual project reports from each of the university principal investigators. The report also describes the capabilities offered to university researchers here at INL and at the ATR NSUF partner facilities.

  10. Final Scientific Report - Wireless and Sensing Solutions Advancing Industrial Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Budampati, Rama; McBrady, Adam; Nusseibeh, Fouad

    2009-09-28

    The project team's goal for the Wireless and Sensing Solution Advancing Industrial Efficiency award (DE-FC36-04GO14002) was to develop, demonstrate, and test a number of leading edge technologies that could enable the emergence of wireless sensor and sampling systems for the industrial market space. This effort combined initiatives in advanced sensor development, configurable sampling and deployment platforms, and robust wireless communications to address critical obstacles in enabling enhanced industrial efficiency.

  11. PS3 CELL Development for Scientific Computation and Research

    Science.gov (United States)

    Christiansen, M.; Sevre, E.; Wang, S. M.; Yuen, D. A.; Liu, S.; Lyness, M. D.; Broten, M.

    2007-12-01

    The Cell processor is one of the most powerful processors on the market, and researchers in the earth sciences may find its parallel architecture to be very useful. A Cell processor, with 7 cores, can easily be obtained for experimentation by purchasing a PlayStation 3 (PS3) and installing Linux and the IBM SDK. Each core of the PS3 is capable of 25 GFLOPS, giving a potential limit of 150 GFLOPS when using all 6 SPUs (synergistic processing units) with vectorized algorithms. We have used the Cell's computational power to create a program which takes simulated tsunami datasets, parses them, and returns a colorized height field image using ray casting techniques. As expected, the time required to create an image is inversely proportional to the number of SPUs used. We believe that this trend will continue when multiple PS3s are chained using OpenMP functionality and are in the process of researching this. By using the Cell to visualize tsunami data, we have found that its greatest feature is its power. This aligns well with the needs of the scientific community, where the limiting factor is time. Any algorithm, such as the heat equation, that can be subdivided into multiple parts can take advantage of the PS3 Cell's ability to split the computations across the 6 SPUs, reducing the required run time to one sixth. Further vectorization of the code can allow four simultaneous floating-point operations using the SIMD (single instruction, multiple data) capabilities of the SPU, increasing efficiency 24 times.
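
    The subdivision strategy described above does not depend on the Cell SDK itself. Below is a minimal sketch, assuming a generic shared-memory machine and standard C++ threads rather than the IBM SPU runtime, of splitting one explicit heat-equation update across six workers:

    ```cpp
    #include <algorithm>
    #include <iostream>
    #include <thread>
    #include <vector>

    // Illustrative only: divides one explicit heat-equation update among
    // six workers, mirroring the idea of farming slices out to the PS3's
    // six available SPUs (real Cell code would use the IBM SDK instead).
    int main() {
        const std::size_t n = 600000;
        const double alpha = 0.1;               // dt * diffusivity / dx^2
        std::vector<double> u(n, 0.0), next(n, 0.0);
        u[n / 2] = 1.0;                         // initial heat spike

        const unsigned workers = 6;
        std::vector<std::thread> pool;
        for (unsigned w = 0; w < workers; ++w) {
            pool.emplace_back([&, w] {
                // Each worker updates a disjoint interior slice.
                std::size_t lo = std::max<std::size_t>(1, w * n / workers);
                std::size_t hi = std::min(n - 1, (w + 1) * n / workers);
                for (std::size_t i = lo; i < hi; ++i)
                    next[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1]);
            });
        }
        for (auto& t : pool) t.join();
        std::cout << "u[n/2] after one step: " << next[n / 2] << "\n";
    }
    ```

    Because each worker writes a disjoint slice of the output array and only reads the input array, no locking is needed; this is the kind of decomposition the abstract describes for the SPUs.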

  12. I - Template Metaprogramming for Massively Parallel Scientific Computing - Expression Templates

    CERN Document Server

    CERN. Geneva

    2016-01-01

    Large scale scientific computing raises questions on different levels, ranging from the formulation of the problems to the choice of the best algorithms and their implementation for a specific platform. There are similarities in these different topics that can be exploited by modern-style C++ template metaprogramming techniques to produce readable, maintainable and generic code. Traditional low-level code tends to be fast but platform-dependent, and it obfuscates the meaning of the algorithm. On the other hand, an object-oriented approach is nice to read, but may come with an inherent performance penalty. These lectures aim to present the basics of the Expression Template (ET) idiom, which allows us to keep the object-oriented approach without sacrificing performance. We will in particular show how to enhance ET to include SIMD vectorization. We will then introduce techniques for abstracting iteration, and introduce thread-level parallelism for use in heavy data-centric loads. We will show how to apply these methods i...
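
    As a minimal sketch of the ET idiom the lectures cover (the lecture material itself is not reproduced here), an addition can be represented by a lightweight node type whose elements are evaluated lazily, so that `x = a + b + c` fuses into a single loop with no intermediate vectors:

    ```cpp
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Expression node: stores references to its operands and computes an
    // element only when asked, so no temporary vector is materialized.
    template <typename L, typename R>
    struct Add {
        const L& l; const R& r;
        double operator[](std::size_t i) const { return l[i] + r[i]; }
        std::size_t size() const { return l.size(); }
    };

    struct Vec {
        std::vector<double> data;
        explicit Vec(std::size_t n, double v = 0.0) : data(n, v) {}
        double operator[](std::size_t i) const { return data[i]; }
        std::size_t size() const { return data.size(); }

        // Assigning from any expression runs one fused element-wise loop.
        template <typename E>
        Vec& operator=(const E& e) {
            for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
            return *this;
        }
    };

    template <typename L, typename R>
    Add<L, R> operator+(const L& l, const R& r) { return {l, r}; }

    int main() {
        Vec a(5, 1.0), b(5, 2.0), c(5, 3.0), x(5);
        x = a + b + c;              // builds Add<Add<Vec,Vec>,Vec>, one loop
        std::cout << x[0] << "\n";  // prints 6
    }
    ```

    A production version would constrain `operator+` to expression types and, as the lectures describe, extend the element accessor to operate on SIMD packets rather than single doubles.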

  13. Grid Computing in the Collider Detector at Fermilab (CDF) scientific experiment

    OpenAIRE

    Benjamin, Douglas P.

    2008-01-01

    The computing model for the Collider Detector at Fermilab (CDF) scientific experiment has evolved since the beginning of the experiment. Initially, CDF computing consisted of dedicated resources located in computer farms around the world. With the widespread acceptance of grid computing in High Energy Physics, CDF computing has migrated to using grid computing extensively. CDF uses computing grids around the world. Each computing grid has required different solutions. The use of portals a...

  14. DOE Advanced Scientific Advisory Committee (ASCAC): Workforce Subcommittee Letter

    Energy Technology Data Exchange (ETDEWEB)

    Chapman, Barbara [University of Houston]; Calandra, Henri [Total SA]; Crivelli, Silvia [Lawrence Berkeley National Laboratory, University of California, Davis]; Dongarra, Jack [University of Tennessee]; Hittinger, Jeffrey [Lawrence Livermore National Laboratory]; Lathrop, Scott A. [NCSA, University of Illinois Urbana-Champaign]; Sarkar, Vivek [Rice University]; Stahlberg, Eric [Advanced Biomedical Computing Center]; Vetter, Jeffrey S. [Oak Ridge National Laboratory]; Williams, Dean [Lawrence Livermore National Laboratory]

    2014-07-23

    Simulation and computing are essential to much of the research conducted at the DOE national laboratories. Experts in the ASCR-relevant Computing Sciences, which encompass a range of disciplines including Computer Science, Applied Mathematics, Statistics and domain Computational Sciences, are an essential element of the workforce in nearly all of the DOE national laboratories. This report seeks to identify the gaps and challenges facing DOE with respect to this workforce. This letter is ASCAC’s response to the charge of February 19, 2014 to identify disciplines in which significantly greater emphasis in workforce training at the graduate or postdoctoral levels is necessary to address workforce gaps in current and future Office of Science mission needs.

  15. Advanced Test Reactor National Scientific User Facility: Addressing advanced nuclear materials research

    Energy Technology Data Exchange (ETDEWEB)

    John Jackson; Todd Allen; Frances Marshall; Jim Cole

    2013-03-01

    The Advanced Test Reactor National Scientific User Facility (ATR NSUF), based at the Idaho National Laboratory in the United States, is supporting Department of Energy and industry research efforts to ensure the properties of materials in light water reactors are well understood. The ATR NSUF is providing this support through three main efforts: establishing the unique infrastructure necessary to conduct research on highly radioactive materials, conducting research in conjunction with industry partners on life-extension-relevant topics, and providing training courses to encourage more U.S. researchers to understand and address LWR materials issues. In 2010 and 2011, several advanced instruments with capabilities focused on resolving nuclear material performance issues through analysis on the micro (10^-6 m) to atomic (10^-10 m) scales were installed, primarily at the Center for Advanced Energy Studies (CAES) in Idaho Falls, Idaho. These instruments included a local electrode atom probe (LEAP), a field-emission gun scanning transmission electron microscope (FEG-STEM), a focused ion beam (FIB) system, a Raman spectrometer, and a nanoindenter/atomic force microscope. Ongoing capability enhancements intended to support industry efforts include completion of two shielded irradiation-assisted stress corrosion cracking (IASCC) test loops, the first of which will come online in early calendar year 2013, a pressurized and controlled-chemistry water loop for the ATR center flux trap, and a dedicated facility intended to house post-irradiation examination equipment. In addition to capability enhancements at the main site in Idaho, the ATR NSUF also welcomed two new partner facilities in 2011 and two new partner facilities in 2012: the Oak Ridge National Laboratory High Flux Isotope Reactor (HFIR) and associated hot cells, and the University of California, Berkeley capabilities in irradiated materials analysis were added in 2011. In 2012, Purdue University’s Interaction of Materials

  16. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Bramley, Randall B.

    2012-08-02

    Indiana University’s SWIM activities have primarily been in three areas. All three are complete, but we are continuing to work on two of them because refinements are useful both to DOE laboratories and to the high-performance computing community.

  17. 78 FR 50404 - Advanced Scientific Computing Advisory Committee

    Science.gov (United States)

    2013-08-19

    ... least five business days prior to the meeting. Reasonable provisions will be made to include the... facilitate the orderly conduct of business. Public comment will follow the 10-minute rule. Minutes:...

  18. 76 FR 9765 - Advanced Scientific Computing Advisory Committee

    Science.gov (United States)

    2011-02-22

    .... Technical talks on exascale relevant research. ASCAC Committee of Visitors (COV) update and new business....gov . You must make your request for an oral statement at least 5 business days prior to the meeting... the Committee will conduct the meeting to facilitate the orderly conduct of business. Public ]...

  19. 76 FR 45786 - Advanced Scientific Computing Advisory Committee; Meeting

    Science.gov (United States)

    2011-08-01

    ... business days prior to the meeting. Reasonable provision will be made to include the scheduled oral... that will facilitate the orderly conduct of business. Public comment will follow the 10-minute...

  20. 77 FR 12823 - Advanced Scientific Computing Advisory Committee

    Science.gov (United States)

    2012-03-02

    ...@science.doe.gov . You must make your request for an oral statement at least 5 business days prior to the... Chairperson of the Committee will conduct the meeting to facilitate the orderly conduct of business....

  1. 75 FR 43518 - Advanced Scientific Computing Advisory Committee; Meeting

    Science.gov (United States)

    2010-07-26

    ....gov ). You must make your request for an oral statement at least 5 business days prior to the meeting... the Committee will conduct the meeting to facilitate the orderly conduct of business. Public...

  2. 75 FR 57742 - Advanced Scientific Computing Advisory Committee

    Science.gov (United States)

    2010-09-22

    ... least 5 business days prior to the meeting. Reasonable provision will be made to include the scheduled... the orderly conduct of business. Public comment will follow the 10-minute rule. This notice is...

  3. 76 FR 64330 - Advanced Scientific Computing Advisory Committee

    Science.gov (United States)

    2011-10-18

    ... oral statement at least 5 business days prior to the meeting. Reasonable provision will be made to... meeting to facilitate the orderly conduct of business. Public comment will follow the 10-minute...

  4. 78 FR 6087 - Advanced Scientific Computing Advisory Committee

    Science.gov (United States)

    2013-01-29

    ... . You must make your request for an oral statement at least five business days prior to the meeting... the Committee will conduct the meeting to facilitate the orderly conduct of business. Public...

  5. 75 FR 9887 - Advanced Scientific Computing Advisory Committee

    Science.gov (United States)

    2010-03-04

    ... least 5 business days prior to the meeting. Reasonable provision will be made to include the scheduled... the orderly conduct of business. Public comment will follow the 10-minute rule. Minutes: The...

  6. 78 FR 64931 - Advanced Scientific Computing Advisory Committee

    Science.gov (United States)

    2013-10-30

    ...@science.doe.gov ). You must make your request for an oral statement at least 5 business days prior to the... Chairperson of the Committee will conduct the meeting to facilitate the orderly conduct of business....

  7. 75 FR 64720 - DOE/Advanced Scientific Computing Advisory Committee

    Science.gov (United States)

    2010-10-20

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY DOE...: Melea.Baker@science.doe.gov ). SUPPLEMENTARY INFORMATION: Purpose of the Meeting: The purpose of this... Baker via FAX at 301-903-4846 or via e-mail ( Melea.Baker@science.doe.gov ). You must make your...

  8. 76 FR 31945 - Advanced Scientific Computing Advisory Committee

    Science.gov (United States)

    2011-06-02

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY... Register. DATES: Thursday, June 23, 2011, 2 p.m. to 4 p.m. ADDRESSES: The meeting is open to the public. To access the call: 1. Dial Toll-Free Number: 866-740-1260 (U.S. & Canada). 2. International...

  9. jsGraph and jsNMR—Advanced Scientific Charting

    Directory of Open Access Journals (Sweden)

    Norman Pellet

    2014-09-01

    Full Text Available The jsGraph library is a versatile JavaScript library that allows advanced charting to be rendered interactively in web browsers without relying on server-side image processing. jsGraph is released under the MIT license and is free of charge. While being highly customizable through an intuitive JavaScript API, jsGraph is optimized to render a large quantity of data in a short amount of time. jsGraph can display line, scatter, contour or zone series. Examples can be consulted on the project home page [1]. Customization of the chart, its axes and its series is achieved through simple but comprehensive JSON configurations.

  10. Scientific Society Partnerships & Effective Strategies for Advancing Policy Objectives

    Science.gov (United States)

    Hammer, P. W.; Greenamoyer, J.

    2012-12-01

    From the perspective of Congress, science is just another interest group that seeks a generous slice of an increasingly shrinking federal budget pie. Traditionally, the science community has not been effective at lobbying for the legislative advances and federal appropriations that enable the R&D enterprise. Over the last couple of decades, however, science societies have become more strategic in their outreach to Congress and the President. Indeed, many societies have lobbyists on staff, many of whom have a background in science. Yet, while science societies are beginning to be more effective as a political interest group, their members have been much slower to embrace this perspective as an important component of their professional lives. In this talk, we will illustrate how the American Institute of Physics partners with AGU and other science societies to identify joint policy priorities and then reach out to Congress and the President to advance them. The biggest issue is funding for R&D, but science education is increasingly important, as are other issues such as publishing policy. We will draw from a number of examples, such as the NSF budget, funding for Pu-238, K-12 physical science education policy, and Open Access, to illustrate how partnerships work and how scientists can be engaged as powerful political actors in the process.

  11. Advancing the scientific basis of trivalent actinide-lanthanide separations

    International Nuclear Information System (INIS)

    For advanced fuel cycles designed to support the transmutation of transplutonium actinides, several options have been demonstrated for process-scale aqueous separations for U, Np and Pu management and for partitioning trivalent actinides and fission-product lanthanides away from other fission products. The more difficult mutual separation of Am/Cm from La-Tb remains the subject of considerable fundamental and applied research. The chemical separations literature teaches that the most productive alternatives to pursue are those based on ligand donor atoms less electronegative than O, specifically N- and S-containing complexants and chloride ion (Cl-). These 'soft-donor' atoms have exhibited usable selectivity in their bonding interactions with trivalent actinides relative to lanthanides. In this report, selected features of soft-donor reagent design, characterization and application development are discussed. The roles of thiocyanate, aminopolycarboxylic acids and lactate in separation processes are detailed. (authors)

  12. Large-scale computation at PSI scientific achievements and future requirements

    International Nuclear Information System (INIS)

    Computational modelling and simulation are among the disciplines that have seen the most dramatic growth in capabilities in the 20th century. Within the past two decades, scientific computing has become an important contributor to all scientific research programs. Computational modelling and simulation are particularly indispensable for solving research problems that are unsolvable by traditional theoretical and experimental approaches, hazardous to study, or time-consuming or expensive to solve by traditional means. Many such research areas are found in PSI's research portfolio. Advances in computing technologies (including hardware and software) during the past decade have set the stage for a major step forward in modelling and simulation. We have now arrived at a situation where a number of otherwise unsolvable problems exist and where simulations are as complex as the systems under study. In 2008 the High-Performance Computing (HPC) community entered the petascale era with the heterogeneous Opteron/Cell machine called Roadrunner, built by IBM for the Los Alamos National Laboratory. We are on the brink of a time when the availability of many hundreds of thousands of cores will open up new challenging possibilities in physics, algorithms (numerical mathematics) and computer science. However, to deliver on this promise, it is not enough to provide 'peak' performance in terms of petaflops, the maximum theoretical speed a computer can attain. Most importantly, this must be translated into a corresponding increase in the capabilities of scientific codes. This is a daunting problem that can only be solved by increasing investment in hardware, in the accompanying system software that enables the reliable use of high-end computers, in scientific competence, i.e. the mathematical (parallel) algorithms that are the basis of the codes, and in education. In the case of Switzerland, the white paper 'Swiss National Strategic Plan for High Performance Computing and Networking

  13. Large-scale computation at PSI scientific achievements and future requirements

    Energy Technology Data Exchange (ETDEWEB)

    Adelmann, A.; Markushin, V

    2008-11-15

    Computational modelling and simulation are among the disciplines that have seen the most dramatic growth in capabilities in the 20th century. Within the past two decades, scientific computing has become an important contributor to all scientific research programs. Computational modelling and simulation are particularly indispensable for solving research problems that are unsolvable by traditional theoretical and experimental approaches, hazardous to study, or time-consuming or expensive to solve by traditional means. Many such research areas are found in PSI's research portfolio. Advances in computing technologies (including hardware and software) during the past decade have set the stage for a major step forward in modelling and simulation. We have now arrived at a situation where a number of otherwise unsolvable problems exist and where simulations are as complex as the systems under study. In 2008 the High-Performance Computing (HPC) community entered the petascale era with the heterogeneous Opteron/Cell machine called Roadrunner, built by IBM for the Los Alamos National Laboratory. We are on the brink of a time when the availability of many hundreds of thousands of cores will open up new challenging possibilities in physics, algorithms (numerical mathematics) and computer science. However, to deliver on this promise, it is not enough to provide 'peak' performance in terms of petaflops, the maximum theoretical speed a computer can attain. Most importantly, this must be translated into a corresponding increase in the capabilities of scientific codes. This is a daunting problem that can only be solved by increasing investment in hardware, in the accompanying system software that enables the reliable use of high-end computers, in scientific competence, i.e. the mathematical (parallel) algorithms that are the basis of the codes, and in education. In the case of Switzerland, the white paper 'Swiss National Strategic Plan for High Performance Computing

  14. Advanced Technologies, Embedded and Multimedia for Human-Centric Computing

    CERN Document Server

    Chao, Han-Chieh; Deng, Der-Jiunn; Park, James; HumanCom and EMC 2013

    2014-01-01

    The theme of HumanCom is focused on the various aspects of human-centric computing for advances in computer science and its applications, and the conference provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of human-centric computing. The theme of EMC (Advances in Embedded and Multimedia Computing) is focused on the various aspects of embedded systems, smart grids, cloud and multimedia computing, and it provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of embedded and multimedia computing. This book therefore includes the various theories and practical applications of human-centric computing and embedded and multimedia computing.

  15. Climate Solutions based on advanced scientific discoveries of Allatra physics

    Directory of Open Access Journals (Sweden)

    Vershigora Valery

    2016-05-01

    Full Text Available Global climate change is one of the most important international problems of the 21st century. The overall rapid increase in the dynamics of cataclysms observed in recent decades is particularly alarming. How do modern scientists predict the occurrence of certain events? In meteorology, unusually powerful cumulonimbus clouds are one of the main conditions for the emergence of a tornado; these, in turn, form when cold air invades an overheated land surface. A satellite captures the cloud front and, based on these pictures, scientists make assumptions about the possibility of the respective natural phenomena occurring. In fact, mankind visually observes and draws conclusions about the consequences of physical phenomena which have already taken place in the invisible world, so the conclusions of scientists are by nature assumptions rather than precise knowledge of the causes of the origin of these phenomena in the physics of the microcosm. The latest research in the fields of particle physics and neutrino astrophysics, conducted by a working team of scientists of the ALLATRA International Public Movement (hereinafter the ALLATRA SCIENCE group; allatra-science.org, last accessed 10 April 2016), offers increased opportunities for advanced fundamental and applied research in climatic engineering.

  16. Cuba: the strategic choice of advanced scientific development, 1959-2014

    CERN Document Server

    Baracca, Angelo

    2016-01-01

    Cuba continues to attract the attention of the international scientific community for some important and unexpected achievements in applied science, such as health biotechnology. These are outcomes of Cuba's 1959 decision to develop an advanced scientific system in order to address the most urgent problems of the country's development and to overcome its condition of subalternity. This ambitious objective was tackled in a very original way, making broad and wide-ranging recourse to every effective source of support and collaboration, with Soviet but also Western scientists and institutions, in addition to a peculiar Cuban inventiveness. Indeed, immediately after the revolution, Cuba developed an advanced and articulated scientific system and achieved a level of excellence in leading scientific fields, like biotechnology, quite independently of the Soviet Union, which lagged behind in this field. Even the collapse of the Soviet Union in the early 1990s, which could have put the achievements of the Re...

  17. Transonic wing analysis using advanced computational methods

    Science.gov (United States)

    Henne, P. A.; Hicks, R. M.

    1978-01-01

    This paper discusses the application of three-dimensional computational transonic flow methods to several different types of transport wing designs. The purpose of these applications is to evaluate the basic accuracy and limitations associated with such numerical methods. The use of such computational methods for practical engineering problems can only be justified after favorable evaluations are completed. The paper summarizes a study of both the small-disturbance and the full potential technique for computing three-dimensional transonic flows. Computed three-dimensional results are compared to both experimental measurements and theoretical results. Comparisons are made not only of pressure distributions but also of lift and drag forces. Transonic drag rise characteristics are compared. Three-dimensional pressure distributions and aerodynamic forces, computed from the full potential solution, compare reasonably well with experimental results for a wide range of configurations and flow conditions.
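
    For orientation (the paper's own equations are not reproduced here), the classical three-dimensional transonic small-disturbance equation that methods of the first kind solve for the perturbation potential can be written, in one common form, as

    \[
    \left[\,1 - M_\infty^2 - (\gamma + 1)\,M_\infty^2\,\phi_x\,\right]\phi_{xx} + \phi_{yy} + \phi_{zz} = 0,
    \]

    where \(M_\infty\) is the free-stream Mach number and \(\gamma\) the ratio of specific heats; full potential methods instead retain the exact potential-flow equation at higher computational cost.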

  18. Second International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Konar, Amit; Chakraborty, Aruna

    2014-01-01

    Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, pervasive computing amidst many others. This two-volume proceedings explore the combined use of Advanced Computing and Informatics in the next generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 148 scholarly papers, which have been accepted for presentation from over 640 submissions in the second International Conference on Advanced Computing, Networking and Informatics, 2014, held in Kolkata, India during June 24-26, 2014. The first volume includes innovative computing techniques and relevant research results in informatics with selective applications in pattern recognition, signal/image process...

  19. The Advanced Test Reactor National Scientific User Facility Advancing Nuclear Technology

    International Nuclear Information System (INIS)

    To help ensure the long-term viability of nuclear energy through a robust and sustained research and development effort, the U.S. Department of Energy (DOE) designated the Advanced Test Reactor and associated post-irradiation examination facilities a National Scientific User Facility (ATR NSUF), allowing broader access for nuclear energy researchers. The mission of the ATR NSUF is to provide access to world-class nuclear research facilities, thereby facilitating the advancement of nuclear science and technology. The ATR NSUF seeks to create an engaged academic and industrial user community that routinely conducts reactor-based research. Cost-free access to the ATR and PIE facilities is granted on the basis of technical merit to U.S. university-led experiment teams conducting non-proprietary research. Proposals are selected via independent technical peer review and relevance to the DOE mission. Extensive publication of research results is expected as a condition for access. During FY 2008, the first full year of ATR NSUF operation, five university-led experiments were awarded access to the ATR and associated post-irradiation examination facilities. The ATR NSUF awarded four new experiments in early FY 2009, and anticipates awarding additional experiments in the fall of 2009 as a result of the second 2009 proposal call. As the ATR NSUF program matures over the next two years, the capability to perform irradiation research of increasing complexity will become available. These capabilities include instrumented irradiation experiments and post-irradiation examinations of materials previously irradiated in U.S. reactor material test programs. The ATR critical facility will also be made available to researchers. An important component of the ATR NSUF is an education program focused on the reactor-based tools available for resolving nuclear science and technology issues. The ATR NSUF provides education programs including a summer short course, internships, and faculty-student team

  20. The advanced test reactor national scientific user facility advancing nuclear technology

    International Nuclear Information System (INIS)

    To help ensure the long-term viability of nuclear energy through a robust and sustained research and development effort, the U.S. Department of Energy (DOE) designated the Advanced Test Reactor and associated post-irradiation examination facilities a National Scientific User Facility (ATR NSUF), allowing broader access for nuclear energy researchers. The mission of the ATR NSUF is to provide access to world-class nuclear research facilities, thereby facilitating the advancement of nuclear science and technology. The ATR NSUF seeks to create an engaged academic and industrial user community that routinely conducts reactor-based research. Cost-free access to the ATR and PIE facilities is granted on the basis of technical merit to U.S. university-led experiment teams conducting non-proprietary research. Proposals are selected via independent technical peer review and relevance to the DOE mission. Extensive publication of research results is expected as a condition for access. During FY 2008, the first full year of ATR NSUF operation, five university-led experiments were awarded access to the ATR and associated post-irradiation examination facilities. The ATR NSUF awarded four new experiments in early FY 2009, and anticipates awarding additional experiments in the fall of 2009 as a result of the second 2009 proposal call. As the ATR NSUF program matures over the next two years, the capability to perform irradiation research of increasing complexity will become available. These capabilities include instrumented irradiation experiments and post-irradiation examinations of materials previously irradiated in U.S. reactor material test programs. The ATR critical facility will also be made available to researchers. An important component of the ATR NSUF is an education program focused on the reactor-based tools available for resolving nuclear science and technology issues. The ATR NSUF provides education programs including a summer short course, internships, and faculty-student team

  1. Advances in Future Computer and Control Systems v.2

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on future computer and control systems from researchers all around the world.

  2. Advances in Future Computer and Control Systems v.1

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on future computer and control systems from researchers all around the world.

  3. Computing Algorithms for Nuffield Advanced Physics.

    Science.gov (United States)

    Summers, M. K.

    1978-01-01

    Defines all recurrence relations used in the Nuffield course, to solve first- and second-order differential equations, and describes a typical algorithm for computer generation of solutions. (Author/GA)
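
    As a hypothetical illustration of such a recurrence (demo values, not Nuffield's actual listings), simple harmonic motion x'' = -(k/m)x can be generated step by step: compute the acceleration from the current displacement, update the velocity, then update the displacement:

    ```cpp
    #include <iostream>

    // Step-by-step recurrence for simple harmonic motion x'' = -(k/m) x,
    // the semi-implicit Euler scheme often used in teaching: a_n from x_n,
    // then v_{n+1} = v_n + a_n*dt, then x_{n+1} = x_n + v_{n+1}*dt.
    int main() {
        const double k = 4.0, m = 1.0, dt = 0.01;  // assumed demo values
        double x = 1.0, v = 0.0;                   // initial conditions
        for (int step = 0; step < 500; ++step) {   // 5 seconds of motion
            double a = -(k / m) * x;
            v += a * dt;
            x += v * dt;
        }
        std::cout << "x(5s) = " << x << "  (analytic: cos(10) = -0.839...)\n";
    }
    ```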

  4. Aerodynamic optimization studies on advanced architecture computers

    Science.gov (United States)

    Chawla, Kalpana

    1995-01-01

    The approach to carrying out multi-discipline aerospace design studies in the future, especially in massively parallel computing environments, comprises choosing (1) suitable solvers to compute solutions to equations characterizing a discipline, and (2) efficient optimization methods. In addition, for aerodynamic optimization problems, (3) smart methodologies must be selected to modify the surface shape. In this research effort, a 'direct' optimization method is implemented on the Cray C-90 to improve aerodynamic design. It is coupled with an existing implicit Navier-Stokes solver, OVERFLOW, to compute flow solutions. The optimization method is chosen such that it can accommodate multi-discipline optimization in future computations. In this work, however, only single-discipline aerodynamic optimization is included.

  5. Scientific and high-performance computing at FAIR

    Directory of Open Access Journals (Sweden)

    Kisel Ivan

    2015-01-01

    Full Text Available Future FAIR experiments have to deal with very high input rates and large track multiplicities, and must perform full event reconstruction and selection on-line on a large dedicated computer farm equipped with heterogeneous many-core CPU/GPU compute nodes. Developing efficient, fast algorithms optimized for parallel computation is a challenge for the groups of experts dealing with HPC computing. Here we present and discuss the status and perspectives of the data reconstruction and physics analysis software of one of the future FAIR experiments, namely the CBM experiment.

  6. Power-efficient computer architectures recent advances

    CERN Document Server

    Själander, Magnus; Kaxiras, Stefanos

    2014-01-01

    As Moore's Law and Dennard scaling trends have slowed, the challenges of building high-performance computer architectures while maintaining acceptable power efficiency levels have heightened. Over the past ten years, architecture techniques for power efficiency have shifted from primarily focusing on module-level efficiencies toward more holistic design styles based on parallelism and heterogeneity. This work highlights and synthesizes recent techniques and trends in power-efficient computer architecture. Table of Contents: Introduction / Voltage and Frequency Management / Heterogeneity and Sp

  7. Advanced Metamorphic Techniques in Computer Viruses

    OpenAIRE

    Beaucamps, Philippe

    2007-01-01

    Nowadays viruses use polymorphic techniques to mutate their code on each replication, thus evading detection by antivirus software. However, detection by emulation can defeat simple polymorphism; thus metamorphic techniques are used, which thoroughly change the viral code even after decryption. We briefly detail this evolution of virus protection techniques against detection and then study the MetaPHOR virus, today's most advanced metamorphic virus.

  8. Preface: Special issue: ten years of advances in computer entertainment

    NARCIS (Netherlands)

    Katayose, Haruhiro; Reidsma, Dennis; Rauterberg, M

    2014-01-01

    This special issue celebrates the 10th edition of the International Conference on Advances in Computer Entertainment (ACE) by collecting six selected and revised papers from among this year’s accepted contributions.

  9. Scholarly literature and the press: scientific impact and social perception of physics computing

    CERN Document Server

    Pia, Maria Grazia; Bell, Zane W; Dressendorfer, Paul V

    2014-01-01

    The broad coverage of the search for the Higgs boson in the mainstream media is a relative novelty for high energy physics (HEP) research, whose achievements have traditionally been limited to scholarly literature. This paper illustrates the results of a scientometric analysis of HEP computing in scientific literature, institutional media and the press, and a comparative overview of similar metrics concerning representative particle physics measurements. The picture emerging from these scientometric data documents the scientific impact and social perception of HEP computing. The results of this analysis suggest that improved communication of the scientific and social role of HEP computing would be beneficial to the high energy physics community.

  10. 3rd International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Chaki, Nabendu

    2016-01-01

    Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, pervasive computing amidst many others. This two volume proceedings explore the combined use of Advanced Computing and Informatics in the next generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 132 scholarly articles, which have been accepted for presentation from over 550 submissions in the Third International Conference on Advanced Computing, Networking and Informatics, 2015, held in Bhubaneswar, India during June 23–25, 2015.

  11. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Full Text Available Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
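
    A minimal sketch of the hybrid message-passing/multi-threading model discussed above (toy dimensions and decomposition, not the authors' actual code): MPI distributes row blocks of A across processes, and OpenMP threads each process's local multiply:

    ```cpp
    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    // Hybrid MPI + OpenMP matrix multiply sketch: each rank owns a block
    // of rows of A and computes the matching rows of C = A * B; OpenMP
    // threads share the row loop. B is replicated on every rank for
    // simplicity.
    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int n = 512;                  // assumed divisible by size
        const int rows = n / size;          // rows of A owned by this rank
        std::vector<double> a(rows * n, 1.0), b(n * n, 1.0), c(rows * n, 0.0);

        #pragma omp parallel for            // thread-level parallelism
        for (int i = 0; i < rows; ++i)
            for (int k = 0; k < n; ++k)
                for (int j = 0; j < n; ++j)
                    c[i * n + j] += a[i * n + k] * b[k * n + j];

        // Message passing: gather the distributed row blocks on rank 0.
        std::vector<double> full(rank == 0 ? n * n : 0);
        MPI_Gather(c.data(), rows * n, MPI_DOUBLE,
                   full.data(), rows * n, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        if (rank == 0) std::printf("c[0][0] = %g (expect %d)\n", full[0], n);

        MPI_Finalize();
    }
    ```

    Compiled with an MPI wrapper and OpenMP enabled (e.g. `mpicxx -fopenmp`), ranks exchange only coarse-grained blocks while threads exploit the cores within each node, which is the essence of the hybrid model.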

  12. Advanced Computing Tools and Models for Accelerator Physics

    Energy Technology Data Exchange (ETDEWEB)

    Ryne, Robert; Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.

  13. Position Paper: Applying Machine Learning to Software Analysis to Achieve Trusted, Repeatable Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Prowell, Stacy J [ORNL]; Symons, Christopher T [ORNL]

    2015-01-01

    Producing trusted results from high-performance codes is essential for policy and has significant economic impact. We propose combining rigorous analytical methods with machine learning techniques to achieve the goal of repeatable, trustworthy scientific computing.

  14. New Chicago-Indiana computer network will handle dataflow from world's largest scientific experiment

    CERN Multimedia

    2006-01-01

    "Massive quantities of data will soon begin flowing from the largest scientific instrument ever built into an international netword of computer centers, including one operated jointly by the University of Chicago and Indiana University." (1,5 page)

  15. Advances in Computing and Information Technology : Proceedings of the Second International Conference on Advances in Computing and Information Technology

    CERN Document Server

    Nagamalai, Dhinaharan; Chaki, Nabendu

    2013-01-01

    The international conference on Advances in Computing and Information Technology (ACITY 2012) provides an excellent international forum for both academics and professionals to share knowledge and results in the theory, methodology and applications of Computer Science and Information Technology. The Second International Conference on Advances in Computing and Information Technology (ACITY 2012), held in Chennai, India, during July 13-15, 2012, covered a number of topics in all major fields of Computer Science and Information Technology including: networking and communications, network security and applications, web and internet computing, ubiquitous computing, algorithms, bioinformatics, digital image processing and pattern recognition, artificial intelligence, soft computing and applications. Following a rigorous review process, a number of high-quality papers, presenting not only innovative ideas but also well-founded evaluations and strong argumentation, were selected and collected in the present proceedings, ...

  16. Advances in Computer Science and Education

    CERN Document Server

    Huang, Xiong

    2012-01-01

    CSE2011 is an integrated conference concentrating its focus on computer science and education. In the proceedings, you can learn about computer science and education from researchers all around the world. The main role of the proceedings is to serve as an exchange platform for researchers working in these fields. In order to meet the high quality standards of Springer's AISC series, the organizing committee made the following efforts. Firstly, poor-quality papers were rejected after review by anonymous referee experts. Secondly, periodic review meetings were held among the reviewers, about five times, to exchange reviewing suggestions. Finally, the conference organizers held several preliminary sessions before the conference. Through the efforts of many people and departments, the conference will be successful and fruitful

  17. Advances in Computer-Based Autoantibodies Analysis

    Science.gov (United States)

    Soda, Paolo; Iannello, Giulio

    Indirect Immunofluorescence (IIF) imaging is the recommended method to detect autoantibodies in patient serum, whose common markers are antinuclear autoantibodies (ANA) and autoantibodies directed against double-strand DNA (anti-dsDNA). Since the availability of accurately performed and correctly reported laboratory determinations is crucial for clinicians, an evident medical demand is the development of Computer-Aided Diagnosis (CAD) tools supporting physicians' decisions.

  18. Advances in Neurotechnology for Brain Computer Interfaces

    OpenAIRE

    Fazli, Siamac

    2011-01-01

    Brain-computer interfaces have generated enormous scientific interest over the last 10 years. On closer inspection, however, this exciting technology still reveals several hurdles that have so far prevented the development of applications suitable for the mass market, among them the long preparation time of a BCI system, the lack of control capability for some users, and the non-stationarities within a recording. This dissertation introduces a re...

  19. Recent Advances in Computer Engineering and Applications

    OpenAIRE

    Manoj Jha, Stephen Lagakos, Leonid Perlovsky; Covaci, Brindusa; Nikos Mastorakis, Azami Zaharim

    2010-01-01

    This year the 4th WSEAS International Conference on COMPUTER ENGINEERING and APPLICATIONS (CEA '10) was held at Harvard University, Cambridge, USA, January 27-29, 2010. The conference remains faithful to its original idea of providing a platform to discuss network architecture, network design software, mobile networks and mobile services, digital broadcasting, e-commerce, optical networks, hacking, trojan horses, viruses, worms, spam, information security, standards of information security: ...

  20. Extending the horizons advances in computing, optimization, and decision technologies

    CERN Document Server

    Joseph, Anito; Mehrotra, Anuj; Trick, Michael

    2007-01-01

    Computer Science and Operations Research continue to have a synergistic relationship and this book represents the results of cross-fertilization between OR/MS and CS/AI. It is this interface of OR/CS that makes possible advances that could not have been achieved in isolation. Taken collectively, these articles are indicative of the state-of-the-art in the interface between OR/MS and CS/AI and of the high caliber of research being conducted by members of the INFORMS Computing Society. EXTENDING THE HORIZONS: Advances in Computing, Optimization, and Decision Technologies is a volume that presents the latest, leading research in the design and analysis of algorithms, computational optimization, heuristic search and learning, modeling languages, parallel and distributed computing, simulation, computational logic and visualization. This volume also emphasizes a variety of novel applications in the interface of CS, AI, and OR/MS.

  1. Editorial for the First Workshop on Mining Scientific Papers: Computational Linguistics and Bibliometrics

    OpenAIRE

    Atanassova, Iana; Bertin, Marc; Mayr, Philipp

    2015-01-01

    The workshop "Mining Scientific Papers: Computational Linguistics and Bibliometrics" (CLBib 2015), co-located with the 15th International Society of Scientometrics and Informetrics Conference (ISSI 2015), brought together researchers in Bibliometrics and Computational Linguistics in order to study the ways Bibliometrics can benefit from large-scale text analytics and sense mining of scientific papers, thus exploring the interdisciplinarity of Bibliometrics and Natural Language Processing (NLP...

  2. Intelligent tools for building a scientific information platform advanced architectures and solutions

    CERN Document Server

    Skonieczny, Lukasz; Rybinski, Henryk; Kryszkiewicz, Marzena; Niezgodka, Marek

    2013-01-01

    This book is a selection of results obtained within two years of research performed under SYNAT - a nation-wide scientific project aiming at creating an infrastructure for scientific content storage and sharing for academia, education and open knowledge society in Poland. The selection refers to the research in artificial intelligence, knowledge discovery and data mining, information retrieval and natural language processing, addressing the problems of implementing intelligent tools for building a scientific information platform. This book is a continuation and extension of the ideas presented in “Intelligent Tools for Building a Scientific Information Platform” published as volume 390 in the same series in 2012. It is based on the SYNAT 2012 Workshop held in Warsaw. The papers included in this volume present an overview and insight into information retrieval, repository systems, text processing, ontology-based systems, text mining, multimedia data processing and advanced software engineering.

  3. Proceedings of International Conference on Advances in Computing

    CERN Document Server

    R, Selvarani; Kumar, T

    2012-01-01

    This is the first International Conference on Advances in Computing (ICAdC-2012). The scope of the conference includes all the areas of Theoretical Computer Science, Systems and Software, and Intelligent Systems. The Conference Proceedings are a culmination of research results, papers and theory related to all three major areas of computing, i.e., Theoretical Computer Science, Systems and Software, and Intelligent Systems. They help budding researchers and graduates in the areas of Computer Science, Information Science, Electronics, Telecommunication, Instrumentation and Networking to take their research work forward, building on the reviewed results and interacting with authors through the e-mail contacts given in the proceedings.

  4. StratOS: A Big Data Framework for Scientific Computing

    CERN Document Server

    Stickley, Nathaniel R

    2015-01-01

    We introduce StratOS, a Big Data platform for general computing that allows a datacenter to be treated as a single computer. With StratOS, the process of writing a massively parallel program for a datacenter is no more complicated than writing a Python script for a desktop computer. Users can run pre-existing analysis software on data distributed over thousands of machines with just a few keystrokes. This greatly reduces the time required to develop distributed data analysis pipelines. The platform is built upon industry-standard, open-source Big Data technologies, from which it inherits fast data throughput and fault tolerance. StratOS enhances these technologies by adding an intuitive user interface, automated task monitoring, and other usability features.
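
    The programming model the abstract describes can be illustrated, in rough outline, with nothing more than the Python standard library. The sketch below is not StratOS's actual API (which the abstract does not spell out); a local process pool stands in for a datacenter-wide scheduler, and the word-count task and sample partitions are invented for illustration.

      # Hypothetical sketch of a map-style analysis written as an ordinary
      # Python script; ProcessPoolExecutor stands in for the kind of
      # distributed scheduler a platform such as StratOS would provide.
      from concurrent.futures import ProcessPoolExecutor

      def word_count(chunk):
          """Per-partition analysis task; a stand-in for real analysis software."""
          return len(chunk.split())

      if __name__ == "__main__":
          # Stand-ins for data partitions that would live on many machines.
          partitions = ["alpha beta gamma", "delta epsilon", "zeta eta theta iota"]
          with ProcessPoolExecutor() as pool:
              counts = list(pool.map(word_count, partitions))
          print(sum(counts))  # 9 words across all partitions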

  5. Development of a software system (STA: Seamless Thinking Aid) for distributed parallel scientific computing

    International Nuclear Information System (INIS)

    The Center for Promotion of Computational Science and Engineering in JAERI has established the science and technical computing environment STA as a part of R and D on common basic technology for parallel processing. The STA targets a new style of scientific computation called distributed parallel scientific computing, and is an environment that supports users' seamless thinking by smoothing the series of tasks from program development through execution and result analysis, thereby reducing the time consumed. The Center has also built several distributed parallel applications on the STA to evaluate them in practice. The STA and a distributed parallel application built on it are introduced. (G.K.)

  6. Grid Computing in the Collider Detector at Fermilab (CDF) scientific experiment

    CERN Document Server

    Benjamin, Douglas P

    2008-01-01

    The computing model for the Collider Detector at Fermilab (CDF) scientific experiment has evolved since the beginning of the experiment. Initially, CDF computing comprised dedicated resources located in computer farms around the world. With the widespread acceptance of grid computing in High Energy Physics, CDF computing has migrated to using grid computing extensively. CDF uses computing grids around the world, and each computing grid has required different solutions. The use of portals as interfaces to the collaboration's computing resources has proven to be an extremely useful technique, allowing CDF physicists to migrate transparently from using dedicated computer farms to using computing located in grid farms, often away from Fermilab. Grid computing at CDF continues to evolve as grid standards and practices change.

  7. A look back: 57 years of scientific computing

    DEFF Research Database (Denmark)

    Wasniewski, Jerzy

    This document outlines my 57-year career in computational mathematics, a career that took me from Poland to Canada and finally to Denmark. It of course spans a period in which both hardware and software developed enormously. Along the way I was fortunate to be faced with fascinating technical challenges and privileged to be able to share them with inspiring colleagues. From the beginning, my work to a great extent was concerned, directly or indirectly, with computational linear algebra, an interest I maintain even today.

  8. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  9. Parallel computing in genomic research: advances and applications

    OpenAIRE

    Ocaña K; de Oliveira D

    2015-01-01

    Kary Ocaña (National Laboratory of Scientific Computing, Petrópolis, Rio de Janeiro) and Daniel de Oliveira (Institute of Computing, Fluminense Federal University, Niterói, Brazil). Abstract: Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques...

  10. Parallel computing in genomic research: advances and applications

    OpenAIRE

    Oliveira, Daniel De

    2015-01-01

    Kary Ocaña (National Laboratory of Scientific Computing, Petrópolis, Rio de Janeiro) and Daniel de Oliveira (Institute of Computing, Fluminense Federal University, Niterói, Brazil). Abstract: Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations...

  11. Advances in computational fluid dynamics solvers for modern computing environments

    Science.gov (United States)

    Hertenstein, Daniel; Humphrey, John R.; Paolini, Aaron L.; Kelmelis, Eric J.

    2013-05-01

    EM Photonics has been investigating the application of massively multicore processors to a key problem area: Computational Fluid Dynamics (CFD). While the capabilities of CFD solvers have continually increased and improved to support features such as moving bodies and adjoint-based mesh adaptation, the software architecture has often lagged behind. This has led to poor scaling as core counts reach the tens of thousands. In the modern High Performance Computing (HPC) world, clusters with hundreds of thousands of cores are becoming the standard. In addition, accelerator devices such as NVIDIA GPUs and Intel Xeon Phi are being installed in many new systems. It is important for CFD solvers to take advantage of the new hardware as the computations involved are well suited for the massively multicore architecture. In our work, we demonstrate that new features in NVIDIA GPUs are able to empower existing CFD solvers by example using AVUS, a CFD solver developed by the Air Force Research Laboratory (AFRL) and the Volcanic Ash Advisory Center (VAAC). The effort has resulted in increased performance and scalability without sacrificing accuracy. There are many well-known codes in the CFD space that can benefit from this work, such as FUN3D, OVERFLOW, and TetrUSS. Such codes are widely used in the commercial, government, and defense sectors.
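
    To see why such solvers map naturally onto massively multicore hardware, consider the data-parallel structure of a typical grid update. The toy sketch below (our illustration, unrelated to AVUS's actual code) performs a Jacobi relaxation sweep for Laplace's equation: every interior point is updated independently from its four neighbours, which is exactly the pattern a GPU executes with one thread per point; here NumPy array slicing plays that role.

      # Toy Jacobi relaxation for Laplace's equation on a 2-D grid; every
      # interior point depends only on its neighbours from the previous
      # sweep, so all points can be updated in parallel.
      import numpy as np

      def jacobi_step(u):
          """One Jacobi sweep; boundary values are left fixed."""
          v = u.copy()
          v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
          return v

      u = np.zeros((64, 64))
      u[0, :] = 1.0                  # heated top boundary
      for _ in range(100):
          u = jacobi_step(u)
      print(u[32, 32])               # interior value after 100 sweeps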

  12. Computer Hardware, Advanced Mathematics and Model Physics pilot project final report

    International Nuclear Information System (INIS)

    The Computer Hardware, Advanced Mathematics and Model Physics (CHAMMP) Program was launched in January, 1990. A principal objective of the program has been to utilize the emerging capabilities of massively parallel scientific computers in the challenge of regional scale predictions of decade-to-century climate change. CHAMMP has already demonstrated the feasibility of achieving a 10,000-fold increase in computational throughput for climate modeling in this decade. What we have also recognized, however, is the need for new algorithms and computer software to capitalize on the radically new computing architectures. This report describes the pilot CHAMMP projects at the DOE National Laboratories and the National Center for Atmospheric Research (NCAR). The pilot projects were selected to identify the principal challenges to CHAMMP and to entrain new scientific computing expertise. The success of some of these projects has aided in the definition of the CHAMMP scientific plan. Many of the papers in this report have been or will be submitted for publication in the open literature. Readers are urged to consult with the authors directly for questions or comments about their papers

  13. Advances in neural networks computational intelligence for ICT

    CERN Document Server

    Esposito, Anna; Morabito, Francesco; Pasero, Eros

    2016-01-01

    This carefully edited book puts emphasis on computational and artificial intelligence methods for learning and their relative applications in robotics, embedded systems, and ICT interfaces for psychological and neurological diseases. The book is a follow-up of the scientific workshop on Neural Networks (WIRN 2015) held in Vietri sul Mare, Italy, from the 20th to the 22nd of May 2015. The workshop, at its 27th edition, has become a traditional scientific event bringing together scientists from many countries and several scientific disciplines. Each chapter is an extended version of the original contribution presented at the workshop, and together with the reviewers' peer revisions it also benefits from the live discussion during the presentation. The content of the book is organized in the following sections: 1. Introduction, 2. Machine Learning, 3. Artificial Neural Networks: Algorithms and models, 4. Intelligent Cyberphysical and Embedded System, 5. Computational Intelligence Methods for Biomedical ICT in...

  14. Interview with Sergio Bertolucci, Director for Research and Scientific Computing

    CERN Multimedia

    CERN Video Productions

    2009-01-01

    Questions: 1. How do you feel as Director for Research at a moment when the LHC is ready to produce data for the first time? 2. Is 3.5 TeV per beam enough for the very large community of physicists expecting data for a year now? 3. What do you expect as a physicist and what would you wish to find at this energy? 4. Do you think the Tevatron at FERMILAB still has a chance to get interesting results before the LHC? 5. How complex is it to run the LHC and how difficult is the data taking? 6. What is the historical importance of the LHC in scientific research in general? 7. What spin-offs on society can we expect from the LHC?

  15. Intelligent Software Tools for Advanced Computing

    Energy Technology Data Exchange (ETDEWEB)

    Baumgart, C.W.

    2001-04-03

    Feature extraction and evaluation are two procedures common to the development of any pattern recognition application. These features are the primary pieces of information which are used to train the pattern recognition tool, whether that tool is a neural network, a fuzzy logic rulebase, or a genetic algorithm. Careful selection of the features to be used by the pattern recognition tool can significantly streamline the overall development and training of the solution for the pattern recognition application. This report summarizes the development of an integrated, computer-based software package called the Feature Extraction Toolbox (FET), which can be used for the development and deployment of solutions to generic pattern recognition problems. This toolbox integrates a number of software techniques for signal processing, feature extraction and evaluation, and pattern recognition, all under a single, user-friendly development environment. The toolbox has been developed to run on a laptop computer, so that it may be taken to a site and used to develop pattern recognition applications in the field. A prototype version of this toolbox has been completed and is currently being used for applications development on several projects in support of the Department of Energy.

  16. AVES: A high performance computer cluster array for the INTEGRAL satellite scientific data analysis

    Science.gov (United States)

    Federici, Memmo; Martino, Bruno Luigi; Ubertini, Pietro

    2012-07-01

    In this paper we describe a new computing system array, designed, built and now used at the Space Astrophysics and Planetary Institute (IAPS) in Rome, Italy, for the INTEGRAL Space Observatory scientific data analysis. This new system has become necessary in order to reduce the processing time of the INTEGRAL data accumulated during the more than 9 years of in-orbit operation. In order to fulfill the scientific data analysis requirements with a moderately limited investment, the starting approach has been to use a 'cluster' array of commercial quad-CPU computers, with the extremely large scientific and calibration data archive kept on-line.

  17. Computational neuroscience for advancing artificial intelligence

    Directory of Open Access Journals (Sweden)

    Fernando P. Ponce

    2011-07-01

    Full Text Available Review of the book by Alonso, E. and Mondragón, E. (2011), Hershey, NY: Medical Information Science Reference. Neuroscience as a discipline pursues an understanding of the brain and its relation to the functioning of the mind through analysis of the interaction of diverse physical, chemical and biological processes (Bassett & Gazzaniga, 2011). At the same time, numerous disciplines, such as mathematics, psychology and philosophy, among others, have progressively made significant contributions to this enterprise. As a product of this effort, complementary disciplines such as cognitive neuroscience, neuropsychology and computational neuroscience have appeared alongside traditional neuroscience (Bengio, 2007; Dayan & Abbott, 2005). In the context of computational neuroscience as a discipline complementary to traditional neuroscience, Alonso and Mondragón (2011) edit the book Computational Neuroscience for Advancing Artificial Intelligence: Models, Methods and Applications.

  18. New challenges in grid generation and adaptivity for scientific computing

    CERN Document Server

    Formaggia, Luca

    2015-01-01

    This volume collects selected contributions from the “Fourth Tetrahedron Workshop on Grid Generation for Numerical Computations”, which was held in Verbania, Italy in July 2013. The previous editions of this Workshop were hosted by the Weierstrass Institute in Berlin (2005), by INRIA Rocquencourt in Paris (2007), and by Swansea University (2010). This book covers different, though related, aspects of the field: the generation of quality grids for complex three-dimensional geometries; parallel mesh generation algorithms; mesh adaptation, including both theoretical and implementation aspects; grid generation and adaptation on surfaces – all with an interesting mix of numerical analysis, computer science and strongly application-oriented problems.

  19. Quality assurance of analytical, scientific, and design computer programs for nuclear power plants

    International Nuclear Information System (INIS)

    This Standard applies to the design and development, modification, documentation, execution, and configuration management of computer programs used to perform analytical, scientific, and design computations during the design and analysis of safety-related nuclear power plant equipment, systems, structures, and components as identified by the owner. 2 figs

  20. [Scientific advice by the national and European approval authorities concerning advanced therapy medicinal products].

    Science.gov (United States)

    Jost, Nils; Schüssler-Lenz, Martina; Ziegele, Bettina; Reinhardt, Jens

    2015-11-01

    The aim of scientific advice is to support pharmaceutical developers in regulatory and scientific questions, thus facilitating the development of safe and efficacious new medicinal products. Recent years have shown that the development of advanced therapy medicinal products (ATMPs) in particular needs a high degree of regulatory support. On one hand, this is related to the complexity and heterogeneity of this group of medicinal products and on the other hand due to the fact that mainly academic research institutions and small- and medium-sized enterprises (SMEs) are developing ATMPs. These often have limited regulatory experience and resources. In 2009 the Paul-Ehrlich-Institut (PEI) initiated the Innovation Office as a contact point for applicants developing ATMPs. The mandate of the Innovation Office is to provide support on regulatory questions and to coordinate national scientific advice meetings concerning ATMPs for every phase in drug development and especially with view to the preparation of clinical trial applications. On the European level, the Scientific Advice Working Party (SAWP) of the Committee for Medicinal Products for Human Use (CHMP) of the European Medicinal Agency (EMA) offers scientific advice. This article describes the concepts of national and EMA scientific advice concerning ATMPs and summarizes the experience of the last six years. PMID:26369763

  1. Advances in FDTD computational electrodynamics photonics and nanotechnology

    CERN Document Server

    Oskooi, Ardavan; Johnson, Steven G

    2013-01-01

    Advances in photonics and nanotechnology have the potential to revolutionize humanity's ability to communicate and compute. To pursue these advances, it is mandatory to understand and properly model interactions of light with materials such as silicon and gold at the nanoscale, i.e., the span of a few tens of atoms laid side by side. These interactions are governed by the fundamental Maxwell's equations of classical electrodynamics, supplemented by quantum electrodynamics. This book presents the current state-of-the-art in formulating and implementing computational models of these interactions. Maxwell's equations are solved using the finite-difference time-domain (FDTD) technique, pioneered by the senior editor, whose prior Artech books in this area are among the top ten most-cited in the history of engineering. You discover the most important advances in all areas of FDTD and PSTD computational modeling of electromagnetic wave interactions. This cutting-edge resource helps you understand the latest develo...

  2. Scientific Grand Challenges: Challenges in Climate Change Science and the Role of Computing at the Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.; Johnson, Gary M.; Washington, Warren M.

    2009-07-02

    The U.S. Department of Energy (DOE) Office of Biological and Environmental Research (BER) in partnership with the Office of Advanced Scientific Computing Research (ASCR) held a workshop on the challenges in climate change science and the role of computing at the extreme scale, November 6-7, 2008, in Bethesda, Maryland. At the workshop, participants identified the scientific challenges facing the field of climate science and outlined the research directions of highest priority that should be pursued to meet these challenges. Representatives from the national and international climate change research community as well as representatives from the high-performance computing community attended the workshop. This group represented a broad mix of expertise. Of the 99 participants, 6 were from international institutions. Before the workshop, each of the four panels prepared a white paper, which provided the starting place for the workshop discussions. These four panels of workshop attendees devoted their efforts to the following themes: Model Development and Integrated Assessment; Algorithms and Computational Environment; Decadal Predictability and Prediction; Data, Visualization, and Computing Productivity. The recommendations of the panels are summarized in the body of this report.

  3. Reliability of an Interactive Computer Program for Advance Care Planning

    OpenAIRE

    Schubart, Jane R.; Levi, Benjamin H.; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J.

    2012-01-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explaining health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demon...

  4. The Advanced Computational Methods Center, University of Georgia

    OpenAIRE

    Nute, Donald; Covington, Michael; Rankin, Terry

    1986-01-01

    The Advanced Computational Methods Center (ACMC) established at the University of Georgia in 1984, supports several research projects in artificial intelligence. The primary goal of AI research at ACMC is the design and installation of a logic-programming environment with advanced natural language processing and knowledge-acquisition capabilities on the university's highly parallel CYBERPLUS system from Control Data Corporation. This article briefly describes current research projects in arti...

  5. PREFACE: International Scientific Conference of Young Scientists: Advanced Materials in Construction and Engineering (TSUAB2014)

    Science.gov (United States)

    Kopanitsa, Natalia O.

    2015-01-01

    In October 15-17, 2014 International Scientific Conference of Young Scientists: Advanced Materials in Construction and Engineering (TSUAB2014) took place at Tomsk State University of Architecture and Building (Tomsk, Russia). The Conference became a discussion platform for researchers in the fields of studying structure and properties of advanced building materials and included open lectures of leading scientists and oral presentations of master, postgraduate and doctoral students. A special session was devoted to reports of school children who further plan on starting a research career. The Conference included an industrial exhibition where companies displayed the products and services they supply. The companies also gave presentations of their products within the Conference sessions.

  6. Performance Evaluation of Three Distributed Computing Environments for Scientific Applications

    Science.gov (United States)

    Fatoohi, Rod; Weeratunga, Sisira; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    We present performance results for three distributed computing environments using the three simulated CFD applications in the NAS Parallel Benchmark suite. These environments are the DCF cluster, the LACE cluster, and an Intel iPSC/860 machine. The DCF is a prototypic cluster of loosely coupled SGI R3000 machines connected by Ethernet. The LACE cluster is a tightly coupled cluster of 32 IBM RS6000/560 machines connected by Ethernet as well as by either FDDI or an IBM Allnode switch. Results of several parallel algorithms for the three simulated applications are presented and analyzed based on the interplay between the communication requirements of an algorithm and the characteristics of the communication network of a distributed system.

  7. Performance evaluation of three distributed computing environments for scientific applications

    Energy Technology Data Exchange (ETDEWEB)

    Fatoohi, R.; Weeratunga, S. [Computer Sciences Corp., Moffett Field, CA (United States)]

    1994-12-31

    The authors present performance results for three distributed computing environments using the three simulated CFD applications in the NAS Parallel Benchmark suite. These environments are the DCF cluster, the LACE cluster, and an Intel iPSC/860 machine. The DCF is a prototypic cluster of loosely coupled SGI R3000 machines connected by Ethernet. The LACE cluster is a tightly coupled cluster of 32 IBM RS6000/560 machines connected by Ethernet as well as by either FDDI or an IBM Allnode switch. Results of several parallel algorithms for the three simulated applications are presented and analyzed based on the interplay between the communication requirements of an algorithm and the characteristics of the communication network of a distributed system.

  8. Parallel computing in genomic research: advances and applications

    Directory of Open Access Journals (Sweden)

    Ocaña K

    2015-11-01

    Full Text Available Kary Ocaña (National Laboratory of Scientific Computing, Petrópolis, Rio de Janeiro) and Daniel de Oliveira (Institute of Computing, Fluminense Federal University, Niterói, Brazil). Abstract: Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied for reducing the total processing time and to ease the management, treatment, and analyses of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires the expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article brings a systematic review of literature that surveys the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. Keywords: high-performance computing, genomic research, cloud computing, grid computing, cluster computing, parallel computing
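
    As a concrete (and deliberately tiny) illustration of the parallelism pattern the review surveys, the sketch below fans a per-sequence computation out over worker processes with Python's multiprocessing module. The GC-content task and the in-memory sequences are invented stand-ins for a real pipeline stage operating on Terabyte-scale files.

      # Minimal data-parallel sketch: worker processes share a set of sequences.
      from multiprocessing import Pool

      def gc_content(seq):
          """Fraction of G/C bases in one DNA sequence."""
          return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

      if __name__ == "__main__":
          sequences = ["ACGTGGCC", "TTAACCGG", "GGGCCCAT"]  # illustrative input
          with Pool(processes=4) as pool:
              results = pool.map(gc_content, sequences)
          print(results)  # [0.75, 0.5, 0.75]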

  9. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    Science.gov (United States)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013), which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields, to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in the computer, computing in Earth sciences, multivariate data analysis, automated computation in Quantum Field Theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round table discussions on open-source software, knowledge sharing and scientific collaboration stimulated reflection on these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all the activities of the workshop. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang, Institute of High Energy Physics, Chinese Academy of Sciences. Details of committees and sponsors are available in the PDF

  10. Certainty in Stockpile Computing: Recommending a Verification and Validation Program for Scientific Software

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.R.

    1998-11-01

    As computing assumes a more central role in managing the nuclear stockpile, the consequences of an erroneous computer simulation could be severe. Computational failures are common in other endeavors and have caused project failures, significant economic loss, and loss of life. This report examines the causes of software failure and proposes steps to mitigate them. A formal verification and validation program for scientific software is recommended and described.
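
    One building block of such a verification and validation program is an automated regression test that pins a scientific kernel to a known analytic answer, so that any later change that silently alters the numerics is caught. The sketch below is our own minimal example of the practice, not code from the report; the tolerance reflects the trapezoid rule's O(h^2) discretization error.

      # Regression test: composite trapezoid rule checked against an exact integral.
      import math

      def trapezoid(f, a, b, n):
          """Composite trapezoid rule on [a, b] with n subintervals."""
          h = (b - a) / n
          s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
          return s * h

      def test_trapezoid_matches_analytic_integral():
          # The integral of sin(x) over [0, pi] is exactly 2.
          assert abs(trapezoid(math.sin, 0.0, math.pi, 1000) - 2.0) < 1e-5

      if __name__ == "__main__":
          test_trapezoid_matches_analytic_integral()
          print("regression test passed")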

  11. SIAM Conference on Parallel Processing for Scientific Computing - March 12-14, 2008

    Energy Technology Data Exchange (ETDEWEB)

    None

    2008-09-08

    The themes of the 2008 conference included, but were not limited to: Programming languages, models, and compilation techniques; The transition to ubiquitous multicore/manycore processors; Scientific computing on special-purpose processors (Cell, GPUs, etc.); Architecture-aware algorithms; From scalable algorithms to scalable software; Tools for software development and performance evaluation; Global perspectives on HPC; Parallel computing in industry; Distributed/grid computing; Fault tolerance; Parallel visualization and large scale data management; and The future of parallel architectures.

  12. Building an advanced climate model: Program plan for the CHAMMP (Computer Hardware, Advanced Mathematics, and Model Physics) Climate Modeling Program

    Energy Technology Data Exchange (ETDEWEB)

    1990-12-01

    The issue of global warming and related climatic changes from increasing concentrations of greenhouse gases in the atmosphere has received prominent attention during the past few years. The Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP) Climate Modeling Program is designed to contribute directly to this rapid improvement. The goal of the CHAMMP Climate Modeling Program is to develop, verify, and apply a new generation of climate models within a coordinated framework that incorporates the best available scientific and numerical approaches to represent physical, biogeochemical, and ecological processes, that fully utilizes the hardware and software capabilities of new computer architectures, that probes the limits of climate predictability, and finally that can be used to address the challenging problem of understanding the greenhouse climate issue through the ability of the models to simulate time-dependent climatic changes over extended times and with regional resolution.

  13. High-Precision Floating-Point Arithmetic in Scientific Computation

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2004-12-31

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required: some of these applications require roughly twice this level; others require four times; while still others require hundreds or more digits to obtain numerically meaningful results. Such calculations have been facilitated by new high-precision software packages that include high-level language translation modules to minimize the conversion effort. These activities have yielded a number of interesting new scientific results in fields as diverse as quantum theory, climate modeling and experimental mathematics, a few of which are described in this article. Such developments suggest that in the future, the numeric precision used for a scientific computation may be as important to the program design as are the algorithms and data structures.
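
    The flavor of such calculations is easy to demonstrate with mpmath, one open-source high-precision package available for Python (our choice of example; the article does not single out this computation). At 50 significant digits, a quantity invisible at IEEE double precision becomes plain:

      # High-precision arithmetic with mpmath: Ramanujan's constant
      # exp(pi*sqrt(163)) is within about 7.5e-13 of an integer, a fact a
      # double-precision float cannot even represent at this magnitude.
      from mpmath import mp, mpf, exp, pi, sqrt

      mp.dps = 50                      # 50 decimal digits (vs ~16 for a double)
      x = exp(pi * sqrt(mpf(163)))
      print(x)                         # ...0768743.99999999999925007...
      print(float(x))                  # fractional part lost at double precision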

  14. Model-Driven Development for scientific computing. Computations of RHEED intensities for a disordered surface. Part I

    Science.gov (United States)

    Daniluk, Andrzej

    2010-03-01

    Scientific computing is the field of study concerned with constructing mathematical models and numerical solution techniques, and with using computers to analyse and solve scientific and engineering problems. Model-Driven Development (MDD) has been proposed as a means to support the software development process through the use of a model-centric approach. This paper surveys the core MDD technology that was used to develop an application that allows computation of the RHEED intensities dynamically for a disordered surface.
    New version program summary
    Program title: RHEED1DProcess
    Catalogue identifier: ADUY_v4_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUY_v4_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 31 971
    No. of bytes in distributed program, including test data, etc.: 3 039 820
    Distribution format: tar.gz
    Programming language: Embarcadero C++ Builder
    Computer: Intel Core Duo-based PC
    Operating system: Windows XP, Vista, 7
    RAM: more than 1 GB
    Classification: 4.3, 7.2, 6.2, 8, 14
    Catalogue identifier of previous version: ADUY_v3_0
    Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2394
    Does the new version supersede the previous version?: No
    Nature of problem: An application that implements numerical simulations should be constructed according to the CSFAR rules: clear and well-documented, simple, fast, accurate, and robust. A clearly written, externally and internally documented program is much easier to understand and modify. A simple program is much less prone to error and is more easily modified than one that is complicated. Simplicity and clarity also help make the program flexible. Making the program fast has economic benefits. It also allows flexibility because some of the features that make a program efficient can be traded off for

  15. Media Articles Describing Advances in Scientific Research as a Vehicle for Student Engagement Fostering Climate Literacy

    Science.gov (United States)

    Brassell, S. C.

    2014-12-01

    "Records of Global Climate Change" enables students to fulfill the science component of an undergraduate distribution requirement in "Critical Approaches" at IU Bloomington. The course draws students from all disciplines with varying levels of understanding of scientific approaches and often limited familiarity with climate issues. Its discussion sessions seek to foster scientific literacy via an alternating series of assignments focused on a combination of exercises that involve either examination and interpretation of on-line climate data or consideration and assessment of the scientific basis of new discoveries about climate change contained in recently published media articles. The final assignment linked to the discussion sessions requires students to review and summarize the topics discussed during the semester. Their answers provide direct evidence of newly acquired abilities to assimilate and evaluate scientific information on a range of topics related to climate change. In addition, student responses to an end-of-semester survey confirm that the vast majority considers that their knowledge and understanding of climate change was enhanced, and unsolicited comments note that the discussion sessions contributed greatly to this advancement. Many students remarked that the course's emphasis on examination of paleoclimate records helped their comprehension of the unprecedented nature of present-day climate trends. Others reported that their views on the significance of climate change had been transformed, and some commented that they now felt well equipped to engage in discussions about climate change because they were better informed about its scientific basis and facts.

  16. On the Performance of the Python Programming Language for Serial and Parallel Scientific Computations

    OpenAIRE

    Xing Cai; Hans Petter Langtangen; Halvard Moe

    2005-01-01

    This article addresses the performance of scientific applications that use the Python programming language. First, we investigate several techniques for improving the computational efficiency of serial Python codes. Then, we discuss the basic programming techniques in Python for parallelizing serial scientific applications. It is shown that an efficient implementation of the array-related operations is essential for achieving good parallel performance, as for the serial case. Once the array-r...
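
    The serial point is easy to reproduce: replacing an interpreted per-element loop with a single array-level operation typically changes the runtime by two orders of magnitude or more. A minimal sketch (our own, with NumPy as the array library):

      # Per-element Python loop vs one vectorized NumPy call for a dot product.
      import time
      import numpy as np

      a = np.random.rand(5_000_000)

      t0 = time.perf_counter()
      total_loop = 0.0
      for x in a:                      # interpreted loop over 5 million elements
          total_loop += x * x
      t1 = time.perf_counter()

      total_vec = float(np.dot(a, a))  # one array-level call
      t2 = time.perf_counter()

      print(f"loop: {t1 - t0:.2f} s   vectorized: {t2 - t1:.4f} s")
      print(np.isclose(total_loop, total_vec))  # same result, very different speed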

  17. Computer-Assisted Foreign Language Teaching and Learning: Technological Advances

    Science.gov (United States)

    Zou, Bin; Xing, Minjie; Wang, Yuping; Sun, Mingyu; Xiang, Catherine H.

    2013-01-01

    Computer-Assisted Foreign Language Teaching and Learning: Technological Advances highlights new research and an original framework that brings together foreign language teaching, experiments and testing practices that utilize the most recent and widely used e-learning resources. This comprehensive collection of research will offer linguistic…

  18. SciDAC Advances and Applications in Computational Beam Dynamics

    OpenAIRE

    Ryne, R.; Abell, D.; Adelmann, A.; Amundson, J; Bohn, C; Cary, J; Colella, P.; Dechow, D.; Decyk, V.; Dragt, A.; Gerber, R.; Habib, S.; Higdon, D.; Katsouleas, T.; Ma, K.-L.

    2005-01-01

    SciDAC has had a major impact on computational beam dynamics and the design of particle accelerators. Particle accelerators -- which account for half of the facilities in the DOE Office of Science Facilities for the Future of Science 20 Year Outlook -- are crucial for US scientific, industrial, and economic competitiveness. Thanks to SciDAC, accelerator design calculations that were once thought impossible are now carried out routinely, and new challenging and important calculations are with...

  19. Innovations and Advances in Computer, Information, Systems Sciences, and Engineering

    CERN Document Server

    Sobh, Tarek

    2013-01-01

    Innovations and Advances in Computer, Information, Systems Sciences, and Engineering includes the proceedings of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2011). The contents of this book are a set of rigorously reviewed, world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of  Industrial Electronics, Technology and Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning.

  20. Advances in computers dependable and secure systems engineering

    CERN Document Server

    Hurson, Ali

    2012-01-01

    Since its first volume in 1960, Advances in Computers has presented detailed coverage of innovations in computer hardware, software, theory, design, and applications. It has also provided contributors with a medium in which they can explore their subjects in greater depth and breadth than journal articles usually allow. As a result, many articles have become standard references that continue to be of significant, lasting value in this rapidly expanding field. In-depth surveys and tutorials on new computer technology; well-known authors and researchers in the field; extensive bibliographies with m...

  1. Advanced computational tools for 3-D seismic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, J.; Glover, C.W.; Protopopescu, V.A. [Oak Ridge National Lab., TN (United States)] [and others]

    1996-06-01

    The global objective of this effort is to develop advanced computational tools for 3-D seismic analysis, and test the products using a model dataset developed under the joint aegis of the United States' Society of Exploration Geophysicists (SEG) and the European Association of Exploration Geophysicists (EAEG). The goal is to enhance the value to the oil industry of the SEG/EAEG modeling project, carried out with US Department of Energy (DOE) funding in FY'93-95. The primary objective of the ORNL Center for Engineering Systems Advanced Research (CESAR) is to spearhead the computational innovations that would enable a revolutionary advance in 3-D seismic analysis. The CESAR effort is carried out in collaboration with world-class domain experts from leading universities, and in close coordination with other national laboratories and oil industry partners.

  2. Scholarly literature and the press: scientific impact and social perception of physics computing

    International Nuclear Information System (INIS)

    The broad coverage of the search for the Higgs boson in the mainstream media is a relative novelty for high energy physics (HEP) research, whose achievements have traditionally been limited to scholarly literature. This paper illustrates the results of a scientometric analysis of HEP computing in scientific literature, institutional media and the press, and a comparative overview of similar metrics concerning representative particle physics measurements. The picture emerging from these scientometric data documents the relationship between the scientific impact and the social perception of HEP research versus that of HEP computing. The results of this analysis suggest that improved communication of the scientific and social role of HEP computing via press releases from the major HEP laboratories would be beneficial to the high energy physics community.

  3. 2014 National Workshop on Advances in Communication and Computing

    CERN Document Server

    Prasanna, S; Sarma, Kandarpa; Saikia, Navajit

    2015-01-01

    The present volume is a compilation of research work in computation, communication, vision sciences, device design, fabrication, upcoming materials and related process design, etc. It is derived from selected manuscripts submitted to the 2014 National Workshop on Advances in Communication and Computing (WACC 2014), Assam Engineering College, Guwahati, Assam, India, which is emerging as a premier platform for discussion and dissemination of know-how in this part of the world. The papers included in the volume are indicative of the recent thrust in computation, communications and emerging technologies. Certain recent advances in ZnO nanostructures for alternate energy generation provide emerging insights into an area that has promise for the energy sector, including conservation and green technology. Similarly, scholarly contributions have focused on malware detection and related issues. Several contributions have focused on biomedical aspects, including contributions related to cancer detection using act...

  4. Data-driven modeling & scientific computation methods for complex systems & big data

    CERN Document Server

    Kutz, J Nathan

    2013-01-01

    The burgeoning field of data analysis is expanding at an incredible pace due to the proliferation of data collection in almost every area of science. The enormous data sets now routinely encountered in the sciences provide an incentive to develop mathematical techniques and computational algorithms that help synthesize, interpret and give meaning to the data in the context of its scientific setting. A specific aim of this book is to integrate standard scientific computing methods with data analysis. By doing so, it brings together, in a self-consistent fashion, the key ideas from: statistics, ...

  5. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
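
    The headline metric itself is elementary: measured throughput divided by measured power. A sketch with invented placeholder numbers (not results from the paper):

      # Performance-per-watt comparison; throughput and power are made-up values.
      benchmarks = {
          # platform: (throughput in events/s, average power draw in watts)
          "commodity x86 server": (1200.0, 350.0),
          "low-power ARMv8 SoC": (310.0, 65.0),
      }
      for name, (events_per_s, watts) in benchmarks.items():
          print(f"{name}: {events_per_s / watts:.2f} events/s per watt")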

  6. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  7. Lost in Translation: The Gap in Scientific Advancements and Clinical Application.

    Science.gov (United States)

    Fernandez-Moure, Joseph S

    2016-01-01

    The evolution of medicine and medical technology hinges on the successful translation of basic science research from the bench to clinical implementation at the bedside. Out of the increasing need to facilitate the transfer of scientific knowledge to patients, translational research has emerged. Significant leaps in improving global health, such as antibiotics, vaccinations, and cancer therapies, have all seen successes under this paradigm, yet today, it has become increasingly difficult to realize this ideal scenario. As hospital revenue demand increases, and financial support declines, clinician-protected research time has been limited. Researchers, likewise, have been forced to abandon time- and resource-consuming translational research to focus on publication-generating work to maintain funding and professional advancement. Compared to the surge in scientific innovation and new fields of science, realization of transformational scientific findings in device development and materials sciences has significantly lagged behind. Herein, we describe: how the current scientific paradigm struggles in the new health-care landscape; the obstacles met by translational researchers; and solutions, both public and private, to overcoming those obstacles. We must rethink the old dogma of academia and reinvent the traditional pathways of research in order to truly impact the health-care arena and ultimately those that matter most: the patient. PMID:27376058

  8. Lost in Translation: The Gap in Scientific Advancements and Clinical Application

    Directory of Open Access Journals (Sweden)

    Joseph Fernandez-Moure

    2016-06-01

    Full Text Available The evolution of medicine and medical technology hinges on the successful translation of basic science research from the bench to clinical implementation at the bedside. Born out of the increasing need to facilitate the transfer of scientific knowledge to patients, translational research has emerged. Significant leaps in improving global health, such as antibiotics, vaccinations, and cancer therapies, have all seen successes under this paradigm, yet today it has become increasingly difficult to realize this ideal scenario. As hospital revenue demands increase and financial support declines, clinician-protected research time has been limited. Researchers, likewise, have been forced to abandon time- and resource-consuming translational research to focus on publication-generating work to maintain funding and professional advancement. While scientific innovation and new fields of science have surged, realization of transformational scientific findings in device development and materials sciences has significantly lagged behind. Herein, we describe: how the current scientific paradigm struggles in the new health-care landscape; the obstacles met by translational researchers; and solutions, both public and private, to overcoming those obstacles. We must rethink the old dogma of academia and reinvent the traditional pathways of research in order to truly impact the health-care arena and ultimately those that matter most: the patient.

  9. An expanded framework for the advanced computational testing and simulation toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Marques, Osni A.; Drummond, Leroy A.

    2003-11-09

    The Advanced Computational Testing and Simulation (ACTS) Toolkit is a set of computational tools developed primarily at DOE laboratories and is aimed at simplifying the solution of common and important computational problems. The use of the tools reduces the development time for new codes and the tools provide functionality that might not otherwise be available. This document outlines an agenda for expanding the scope of the ACTS Project based on lessons learned from current activities. Highlights of this agenda include peer-reviewed certification of new tools; finding tools to solve problems that are not currently addressed by the Toolkit; working in collaboration with other software initiatives and DOE computer facilities; expanding outreach efforts; promoting interoperability, further development of the tools; and improving functionality of the ACTS Information Center, among other tasks. The ultimate goal is to make the ACTS tools more widely used and more effective in solving DOE's and the nation's scientific problems through the creation of a reliable software infrastructure for scientific computing.

  10. National Resource for Computation in Chemistry (NRCC). Attached scientific processors for chemical computations: a report to the chemistry community

    Energy Technology Data Exchange (ETDEWEB)

    Ostlund, N.S.

    1980-01-01

    The demands of chemists for computational resources are well known and have been amply documented. The best and most cost-effective means of providing these resources is still open to discussion, however. This report surveys the field of attached scientific processors (array processors) and attempts to indicate their present and possible future use in computational chemistry. Array processors have the possibility of providing very cost-effective computation. This report attempts to provide information that will assist chemists who might be considering the use of an array processor for their computations. It describes the general ideas and concepts involved in using array processors, the commercial products that are available, and the experiences reported by those currently using them. In surveying the field of array processors, the author makes certain recommendations regarding their use in computational chemistry. 5 figures, 1 table (RWR)

  11. Advanced computer graphics techniques as applied to the nuclear industry

    International Nuclear Information System (INIS)

    Computer graphics is a rapidly advancing technological area in computer science. This is being motivated by increased hardware capability coupled with reduced hardware costs. This paper will cover six topics in computer graphics, with examples forecasting how each of these capabilities could be used in the nuclear industry. These topics are: (1) Image Realism with Surfaces and Transparency; (2) Computer Graphics Motion; (3) Graphics Resolution Issues and Examples; (4) Iconic Interaction; (5) Graphic Workstations; and (6) Data Fusion - illustrating data coming from numerous sources, for display through high dimensional, greater than 3-D, graphics. All topics will be discussed using extensive examples with slides, video tapes, and movies. Illustrations have been omitted from the paper due to the complexity of color reproduction. 11 refs., 2 figs., 3 tabs

  12. Sudden Cardiac Risk Stratification with Electrocardiographic Indices - A Review on Computational Processing, Technology Transfer, and Scientific Evidence

    Science.gov (United States)

    Gimeno-Blanes, Francisco J.; Blanco-Velasco, Manuel; Barquero-Pérez, Óscar; García-Alberola, Arcadi; Rojo-Álvarez, José L.

    2016-01-01

    Great effort has been devoted in recent years to developing sudden cardiac risk predictors as a function of electric cardiac signals, mainly obtained from electrocardiogram (ECG) analysis. These prediction techniques are still seldom used in clinical practice, however, partly due to their limited diagnostic accuracy and to the lack of consensus about the appropriate computational signal processing implementation. This paper takes a three-fold approach, based on ECG indices, to structure this review on sudden cardiac risk stratification: first, through the computational techniques that have been widely proposed in the technical literature for obtaining these indices; second, through the scientific evidence, which, although supported by observational clinical studies, is not always representative enough; and third, through the limited technology transfer of academy-accepted algorithms, which requires further consideration for future systems. We focus on three families of ECG-derived indices, tackled from the aforementioned viewpoints: heart rate turbulence (HRT), heart rate variability (HRV), and T-wave alternans. In terms of computational algorithms, we still need clearer scientific evidence, standardization, and benchmarking, resting on advanced algorithms applied over large and representative datasets. New scenarios such as electronic health records, big data, long-term monitoring, and cloud databases will eventually open new frameworks from which to foresee suitable new paradigms in the near future. PMID:27014083

  13. Sudden Cardiac Risk Stratification with Electrocardiographic Indices - A Review on Computational Processing, Technology Transfer, and Scientific Evidence

    Directory of Open Access Journals (Sweden)

    Francisco Javier eGimeno-Blanes

    2016-03-01

    Full Text Available Great effort has been devoted in recent years to developing sudden cardiac risk predictors as a function of electric cardiac signals, mainly obtained from electrocardiogram (ECG) analysis. These prediction techniques are still seldom used in clinical practice, however, partly due to their limited diagnostic accuracy and to the lack of consensus about the appropriate computational signal processing implementation. This paper takes a three-fold approach, based on ECG indices, to structure this review on sudden cardiac risk stratification: first, through the computational techniques that have been widely proposed in the technical literature for obtaining these indices; second, through the scientific evidence, which, although supported by observational clinical studies, is not always representative enough; and third, through the limited technology transfer of academy-accepted algorithms, which requires further consideration for future systems. We focus on three families of ECG-derived indices, tackled from the aforementioned viewpoints: heart rate turbulence, heart rate variability, and T-wave alternans. In terms of computational algorithms, we still need clearer scientific evidence, standardization, and benchmarking, resting on advanced algorithms applied over large and representative datasets. New scenarios such as electronic health records, big data, long-term monitoring, and cloud databases will eventually open new frameworks from which to foresee suitable new paradigms in the near future.
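
    The two records above name heart rate variability among the reviewed indices but give no formulas. As a minimal illustration of the computational side, the sketch below computes two standard time-domain HRV indices, SDNN and RMSSD, from a hypothetical RR-interval series; it is not taken from the paper itself.

      import numpy as np

      def hrv_time_domain(rr_ms):
          """Two standard time-domain HRV indices from RR intervals in milliseconds."""
          rr = np.asarray(rr_ms, dtype=float)
          sdnn = rr.std(ddof=1)                        # SDNN: overall variability
          rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # RMSSD: beat-to-beat variability
          return sdnn, rmssd

      # Invented RR series around 800 ms, for illustration only.
      sdnn, rmssd = hrv_time_domain([812, 790, 805, 798, 820, 801, 795])
      print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")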

  14. A distributed computing environment with support for constraint-based task scheduling and scientific experimentation

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L. [Univ. of Washington, Seattle, WA (United States). Dept. of Computer Science and Engineering

    1997-04-01

    This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and for computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies shows that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments, an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.
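
    The paper's SQL-based constraint language is not reproduced in the abstract, so the sketch below only illustrates the underlying idea with invented constraints: a task graph is scheduled in waves that honor precedence (the "ordering of program results") and a cap on processor utilization.

      # Hypothetical task graph: task -> list of prerequisite tasks.
      tasks = {
          "prep":   [],
          "simA":   ["prep"],
          "simB":   ["prep"],
          "report": ["simA", "simB"],
      }
      MAX_PROCS = 2   # stand-in for a processor-utilization constraint

      done, step = set(), 0
      while len(done) < len(tasks):
          # Runnable tasks: all prerequisites finished, not yet done.
          ready = [t for t, deps in tasks.items()
                   if t not in done and all(d in done for d in deps)]
          wave = ready[:MAX_PROCS]          # respect the processor cap
          print(f"step {step}: run {wave}")
          done.update(wave)
          step += 1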

  15. Institute for Scientific Computing Research Annual Report for Fiscal Year 2003

    Energy Technology Data Exchange (ETDEWEB)

    Keyes, D; McGraw, J

    2004-02-12

    The University Relations Program (URP) encourages collaborative research between Lawrence Livermore National Laboratory (LLNL) and the University of California campuses. The Institute for Scientific Computing Research (ISCR) actively participates in such collaborative research, and this report details the Fiscal Year 2003 projects jointly served by URP and ISCR.

  16. Advanced computer architecture specification for automated weld systems

    Science.gov (United States)

    Katsinis, Constantine

    1994-01-01

    This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. Starting from the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system requirements are developed around a proven multiple-processor architecture with an expandable, distributed-memory, single global bus design, in which individual processors are assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, to be upgradable, and to allow on-site modifications.

  17. Soft computing in design and manufacturing of advanced materials

    Science.gov (United States)

    Cios, Krzysztof J.; Baaklini, George Y.; Vary, Alex

    1993-01-01

    The potential of fuzzy sets and neural networks, often referred to as soft computing, for aiding in all aspects of manufacturing of advanced materials like ceramics is addressed. In design and manufacturing of advanced materials, it is desirable to find which of the many processing variables contribute most to the desired properties of the material. There is also interest in real time quality control of parameters that govern material properties during processing stages. The concepts of fuzzy sets and neural networks are briefly introduced and it is shown how they can be used in the design and manufacturing processes. These two computational methods are alternatives to other methods such as the Taguchi method. The two methods are demonstrated by using data collected at NASA Lewis Research Center. Future research directions are also discussed.
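
    As a minimal sketch of the fuzzy-set side of soft computing described above, the code below grades how well a hypothetical processing variable (a firing temperature, with invented numbers) belongs to a fuzzy "good range" via a triangular membership function; it does not reproduce the NASA Lewis study.

      def triangular(x, lo, peak, hi):
          """Membership in a triangular fuzzy set: 0 at lo, 1 at peak, 0 at hi."""
          if x <= lo or x >= hi:
              return 0.0
          if x <= peak:
              return (x - lo) / (peak - lo)
          return (hi - x) / (hi - peak)

      # Invented firing-temperature fuzzy set for a ceramic process.
      for temp in (1400, 1500, 1600, 1700):
          mu = triangular(temp, 1400, 1550, 1700)
          print(f"T = {temp} C -> membership in 'good firing range' = {mu:.2f}")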

  18. NATO Advanced Study Institute on Methods in Computational Molecular Physics

    CERN Document Server

    Diercksen, Geerd

    1992-01-01

    This volume records the lectures given at a NATO Advanced Study Institute on Methods in Computational Molecular Physics held in Bad Windsheim, Germany, from 22nd July until 2nd August, 1991. This NATO Advanced Study Institute sought to bridge the quite considerable gap which exists between the presentation of molecular electronic structure theory found in contemporary monographs such as, for example, McWeeny's Methods of Molecular Quantum Mechanics (Academic Press, London, 1989) or Wilson's Electron correlation in molecules (Clarendon Press, Oxford, 1984) and the realization of the sophisticated computational algorithms required for their practical application. It sought to underline the relation between the electronic structure problem and the study of nuclear motion. Software for performing molecular electronic structure calculations is now being applied in an increasingly wide range of fields in both the academic and the commercial sectors. Numerous applications are reported in areas as diverse as catalysi...

  19. Condition Monitoring Through Advanced Sensor and Computational Technology

    International Nuclear Information System (INIS)

    The overall goal of this joint research project was to develop and demonstrate advanced sensors and computational technology for continuous monitoring of the condition of components, structures, and systems in advanced and next-generation nuclear power plants (NPPs). This project included investigating and adapting several advanced sensor technologies from Korean and US national laboratory research communities, some of which were developed and applied in non-nuclear industries. The project team investigated and developed sophisticated signal processing, noise reduction, and pattern recognition techniques and algorithms. The researchers installed sensors and conducted condition monitoring tests on two test loops, a check valve (an active component) and a piping elbow (a passive component), to demonstrate the feasibility of using advanced sensors and computational technology to achieve the project goal. Acoustic emission (AE) devices, optical fiber sensors, accelerometers, and ultrasonic transducers (UTs) were used to detect the mechanical vibratory response of the check valve and piping elbow in normal and degraded configurations. Chemical sensors were also installed to monitor the water chemistry in the piping elbow test loop. Analysis of the processed sensor data indicates that it is feasible to differentiate between the normal and degraded (with selected degradation mechanisms) configurations of these two components from the acquired sensor signals, but it is questionable whether these methods can reliably identify the level and type of degradation. Additional research and development efforts are needed to refine the differentiation techniques and to reduce the level of uncertainty.

  20. Scientific Grand Challenges: Discovery In Basic Energy Sciences: The Role of Computing at the Extreme Scale - August 13-15, 2009, Washington, D.C.

    Energy Technology Data Exchange (ETDEWEB)

    Galli, Giulia [Univ. of California, Davis, CA (United States). Workshop Chair; Dunning, Thom [Univ. of Illinois, Urbana, IL (United States). Workshop Chair

    2009-08-13

    The U.S. Department of Energy’s (DOE) Office of Basic Energy Sciences (BES) and Office of Advanced Scientific Computing Research (ASCR) workshop in August 2009 on extreme-scale computing provided a forum for more than 130 researchers to explore the needs and opportunities that will arise due to expected dramatic advances in computing power over the next decade. This scientific community firmly believes that the development of advanced theoretical tools within chemistry, physics, and materials science—combined with the development of efficient computational techniques and algorithms—has the potential to revolutionize the discovery process for materials and molecules with desirable properties. Doing so is necessary to meet the energy and environmental challenges of the 21st century as described in various DOE BES Basic Research Needs reports. Furthermore, computational modeling and simulation are a crucial complement to experimental studies, particularly when quantum mechanical processes controlling energy production, transformations, and storage are not directly observable and/or controllable. Many processes related to the Earth’s climate and subsurface need better modeling capabilities at the molecular level, which will be enabled by extreme-scale computing.

  1. Janus: an FPGA-based system for high-performance scientific computing

    OpenAIRE

    Fernández Pérez, Luis Antonio; Martín Mayor, Víctor; Muñoz Sudupe, Antonio; Yllanes, D.; et al.

    2009-01-01

    This paper describes JANUS, a modular massively parallel and reconfigurable FPGA-based computing system. Each JANUS module has a computational core and a host. The computational core is a 4x4 array of FPGA-based processing elements with nearest-neighbor data links. Processors are also directly connected to an I/O node attached to the JANUS host, a conventional PC. JANUS is tailored for, but not limited to, the requirements of a class of hard scientific applications characterized by regular co...

  2. Evaluation of cache-based superscalar and cacheless vector architectures for scientific computations

    Energy Technology Data Exchange (ETDEWEB)

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; Van der Wijngaart, Rob

    2003-05-01

    The growing gap between sustained and peak performance for scientific applications is a well-known problem in high end computing. The recent development of parallel vector systems offers the potential to bridge this gap for many computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX-6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of scientific computing areas. First, we present the performance of a microbenchmark suite that examines low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks. Finally, we evaluate the performance of several scientific computing codes. Results demonstrate that the SX-6 achieves high performance on a large fraction of our applications and often significantly outperforms the cache-based architectures. However, certain applications are not easily amenable to vectorization and would require extensive algorithm and implementation reengineering to utilize the SX-6 effectively.
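
    The paper's microbenchmark suite is not reproduced in the abstract; the sketch below shows the general flavor of such low-level characterization with a STREAM-triad-style kernel, here written with numpy rather than the Fortran/C kernels such suites normally use.

      import time
      import numpy as np

      N = 10_000_000
      b, c, s = np.random.rand(N), np.random.rand(N), 3.0

      t0 = time.perf_counter()
      a = b + s * c                      # the triad kernel: a(i) = b(i) + s*c(i)
      dt = time.perf_counter() - t0

      # Three 8-byte-float arrays stream through memory per element.
      gbytes = 3 * N * 8 / 1e9
      print(f"triad: {dt:.3f} s, ~{gbytes / dt:.1f} GB/s effective bandwidth")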

  3. Rapid Scientific Response as an Educational Opportunity Integrating Geoscience and Advanced Visualization

    Science.gov (United States)

    Oskin, M. E.; Kellogg, L. H.; Team, K.

    2014-12-01

    Natural disasters provide important opportunities to conduct original scientific research. We present the results of a graduate course at the University of California, Davis centered on rapid scientific response to the 24 August magnitude 6.0 South Napa earthquake. Students from both geoscience and computer visualization formed collaborative teams to conduct original research, choosing from diverse research topics including mapping of the surface rupture, both in the field and remotely, production and analysis of three-dimensional scans of offset features, topographic point-cloud differencing, identification and mapping of pre-historic earthquake scarps, analysis of geodetic data for pre-earthquake fault loading rate and modeling of finite fault offset, aftershock distribution, construction and 3D visualization of earth structure and seismic velocity models, shaking intensity from empirical models, and earthquake rupture simulation.

  4. Advanced Tele-operation [1997 Scientific Report of the Belgian Nuclear Research Centre]

    Energy Technology Data Exchange (ETDEWEB)

    Decreton, M.

    1998-07-01

    Maintenance, repair, and dismantling operations in nuclear facilities have to be performed remotely when high radiation doses exclude hands-on operation, but also to minimize contamination risks and occupational doses to the operators. Computer-aided and sensor-based tele-operation enhances safety, reliability, and performance by helping the operator in difficult tasks with poor remote environmental perception. The objectives of work in this domain are to increase the scientific knowledge of the studied phenomena, to improve the interpretation of data, to improve the piloting of experimental devices during irradiation, and to reveal and understand possible unexpected phenomena occurring during irradiation. This scientific report describes the achievements for 1997 in the area of radiation tolerance of remote sensing, optical fibres, and optical fibre sensors.

  5. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    Energy Technology Data Exchange (ETDEWEB)

    Mhashilkar, Parag [Fermilab]; Tiradani, Anthony [Fermilab]; Holzman, Burt [Fermilab]; Larson, Krista [Fermilab]; Sfiligoi, Igor [UC, San Diego]; Rynge, Mats [USC - ISI, Marina del Rey]

    2014-01-01

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by 'Big Data' will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities, like CMS, are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers such as HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  6. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    Science.gov (United States)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-06-01

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by "Big Data" will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities, like CMS, are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers such as HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  7. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    International Nuclear Information System (INIS)

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by 'Big Data' will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities, like CMS, are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers such as HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.
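
    As a toy illustration of the pilot-based overlay idea described in the three records above, the loop below grows a pool of "pilots" while jobs are queued and retires the pool when the queue drains; all numbers are invented and GlideinWMS itself is not invoked.

      import random

      queued = 50    # jobs waiting in the frontend queue
      pilots = 0     # glideins currently forming the overlay pool

      for step in range(6):
          # Grow the overlay: one new pilot per 5 jobs not yet covered.
          deficit = max(0, queued - pilots * 5)
          pilots += -(-deficit // 5)     # ceiling division
          # Each pilot pulls and completes a few jobs this step.
          finished = min(queued, sum(random.randint(1, 5) for _ in range(pilots)))
          queued -= finished
          if queued == 0:
              pilots = 0                 # pilots exit when no work remains
          print(f"step {step}: pilots={pilots}, finished={finished}, queued={queued}")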

  8. Computational experiment approach to advanced secondary mathematics curriculum

    CERN Document Server

    Abramovich, Sergei

    2014-01-01

    This book promotes the experimental mathematics approach in the context of secondary mathematics curriculum by exploring mathematical models depending on parameters that were typically considered advanced in the pre-digital education era. This approach, by drawing on the power of computers to perform numerical computations and graphical constructions, stimulates formal learning of mathematics through making sense of a computational experiment. It allows one (in the spirit of Freudenthal) to bridge serious mathematical content and contemporary teaching practice. In other words, the notion of teaching experiment can be extended to include a true mathematical experiment. When used appropriately, the approach creates conditions for collateral learning (in the spirit of Dewey) to occur including the development of skills important for engineering applications of mathematics. In the context of a mathematics teacher education program, this book addresses a call for the preparation of teachers capable of utilizing mo...

  9. Information and computation essays on scientific and philosophical understanding of foundations of information and computation

    CERN Document Server

    Dodig-Crnkovic, Gordana

    2011-01-01

    Information is a basic structure of the world, while computation is a process of the dynamic change of information. This book provides a cutting-edge view of the world's leading authorities in fields where information and computation play a central role. It sketches the contours of the future landscape for the development of our understanding of information and computation, their mutual relationship, and their role in cognition, informatics, biology, artificial intelligence, and information technology. This book is an utterly enjoyable and engaging read which gives readers an opportunity to understand...

  10. Space and Earth Sciences, Computer Systems, and Scientific Data Analysis Support, Volume 1

    Science.gov (United States)

    Estes, Ronald H. (Editor)

    1993-01-01

    This Final Progress Report covers the specific technical activities of Hughes STX Corporation for the last contract triannual period of 1 June through 30 Sep. 1993, in support of assigned task activities at Goddard Space Flight Center (GSFC). It also provides a brief summary of work throughout the contract period of performance on each active task. Technical activity is presented in Volume 1, while financial and level-of-effort data is presented in Volume 2. Technical support was provided to all Division and Laboratories of Goddard's Space Sciences and Earth Sciences Directorates. Types of support include: scientific programming, systems programming, computer management, mission planning, scientific investigation, data analysis, data processing, data base creation and maintenance, instrumentation development, and management services. Mission and instruments supported include: ROSAT, Astro-D, BBXRT, XTE, AXAF, GRO, COBE, WIND, UIT, SMM, STIS, HEIDI, DE, URAP, CRRES, Voyagers, ISEE, San Marco, LAGEOS, TOPEX/Poseidon, Pioneer-Venus, Galileo, Cassini, Nimbus-7/TOMS, Meteor-3/TOMS, FIFE, BOREAS, TRMM, AVHRR, and Landsat. Accomplishments include: development of computing programs for mission science and data analysis, supercomputer applications support, computer network support, computational upgrades for data archival and analysis centers, end-to-end management for mission data flow, scientific modeling and results in the fields of space and Earth physics, planning and design of GSFC VO DAAC and VO IMS, fabrication, assembly, and testing of mission instrumentation, and design of mission operations center.

  11. Advances in Intelligent Control Systems and Computer Science

    CERN Document Server

    2013-01-01

    The conception of real-time control networks that takes into account, as an integrating approach, both the specific aspects of information and knowledge processing and the dynamic and energetic particularities of physical processes and communication networks represents one of the newest scientific and technological challenges. The new paradigm of Cyber-Physical Systems (CPS) reflects this tendency and will certainly change the evolution of the technology, with major social and economic impact. This book presents significant results in the field of process control and advanced information and knowledge processing, with applications in the fields of robotics, biotechnology, environment, energy, transportation, and others. It introduces intelligent control concepts and strategies as well as real-time implementation aspects for complex control approaches. One of the sections is dedicated to the complex problem of designing software systems for distributed information processing networks. Problems such as complexity an...

  12. MiniGhost : a miniapp for exploring boundary exchange strategies using stencil computations in scientific parallel computing.

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, Richard Frederick; Heroux, Michael Allen; Vaughan, Courtenay Thomas

    2012-04-01

    A broad range of scientific computation involves the use of difference stencils. In a parallel computing environment, this computation is typically implemented by decomposing the spatial domain, inducing a 'halo exchange' of process-owned boundary data. This approach adheres to the Bulk Synchronous Parallel (BSP) model. Because commonly available architectures provide strong inter-node bandwidth relative to latency costs, many codes 'bulk up' these messages by aggregating data into a message as a means of reducing the number of messages. A renewed focus on non-traditional architectures and architecture features provides new opportunities for exploring alternatives to this programming approach. In this report we describe miniGhost, a 'miniapp' designed for exploration of the capabilities of current as well as emerging and future architectures within the context of these sorts of applications. MiniGhost joins the suite of miniapps developed as part of the Mantevo project.
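
    A serial numpy sketch of the halo-exchange pattern the miniapp explores is given below: a 1-D domain is split between two notional ranks, one ghost cell is exchanged at the internal boundary, and a 3-point stencil is applied. In miniGhost proper the exchange is an MPI message over a multi-dimensional decomposition; here it is just an array copy.

      import numpy as np

      n = 8
      field = np.arange(2 * n, dtype=float)
      left, right = field[:n].copy(), field[n:].copy()

      # Each subdomain carries one ghost cell on either side.
      gl = np.empty(n + 2); gl[1:-1] = left
      gr = np.empty(n + 2); gr[1:-1] = right

      # "Halo exchange": copy boundary values into the neighbor's ghost cells.
      gl[-1], gr[0] = right[0], left[-1]
      gl[0], gr[-1] = left[0], right[-1]      # physical boundaries: copy-out

      # 3-point averaging stencil on each subdomain's interior points.
      new_left = (gl[:-2] + gl[1:-1] + gl[2:]) / 3.0
      new_right = (gr[:-2] + gr[1:-1] + gr[2:]) / 3.0
      print(np.concatenate([new_left, new_right]))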

  13. Deadline aware virtual machine scheduler for scientific grids and cloud computing

    OpenAIRE

    Khalid, Omer; Maljevic, Ivo; Anthony, Richard; Petridis, Miltos; Parrot, Kevin; Schulz, Markus

    2010-01-01

    Virtualization technology has enabled applications to be decoupled from the underlying hardware, providing the benefits of portability, better control over the execution environment, and isolation. It has been widely adopted in scientific grids and commercial clouds. Virtualization, despite its benefits, incurs a performance penalty, which could be significant for systems dealing with uncertainty such as High Performance Computing (HPC) applications where jobs have tight deadlines and have dep...
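
    The paper's scheduler is not described in enough detail here to reproduce, so the sketch below illustrates deadline awareness with the classic earliest-deadline-first (EDF) policy on invented jobs; it stands in for, rather than implements, the authors' algorithm.

      jobs = [
          ("analysis", 4, 10),   # (name, runtime, deadline), invented numbers
          ("transfer", 2, 6),
          ("render",   5, 8),
      ]

      t = 0
      for name, runtime, deadline in sorted(jobs, key=lambda j: j[2]):
          t += runtime           # run jobs back-to-back, earliest deadline first
          status = "meets" if t <= deadline else "MISSES"
          print(f"{name}: finishes at t={t}, {status} deadline {deadline}")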

  14. Creatures of Habit: A Computational System to Enhance and Illuminate the Development of Scientific Thinking

    OpenAIRE

    Pea, Roy D.; Eisenberg, Michael; Turbak, Franklyn

    1988-01-01

    Creatures of Habit is a computer-based microworld designed to engage middle-to-high school students in the process of scientific inquiry. The system depicts a universe of interacting programmable "creatures" whose individual behavior is guided by simple rules that may model naive psychology, physical laws, chemical affinities, and other domains. Students can create or revise creature rules and explore the resulting (and often surprising) emergent behaviors within "artificial ecosystems"; or t...

  15. Scalability of a Base Level Design for an On-Board-Computer for Scientific Missions

    Science.gov (United States)

    Treudler, Carl Johann; Schroder, Jan-Carsten; Greif, Fabian; Stohlmann, Kai; Aydos, Gokce; Fey, Gorschwin

    2014-08-01

    Facing a wide range of mission requirements and the integration of diverse payloads requires extreme flexibility in the on-board computing infrastructure for scientific missions. We show that scalability is difficult in principle. We address this issue by proposing a base-level design and show how adaptation to different needs is achieved. Interdependencies between scaling different aspects and their impact on different levels of the design are discussed.

  16. DB90: A Fortran Callable Relational Database Routine for Scientific and Engineering Computer Programs

    Science.gov (United States)

    Wrenn, Gregory A.

    2005-01-01

    This report describes a database routine called DB90 which is intended for use with scientific and engineering computer programs. The software is written in the Fortran 90/95 programming language standard with file input and output routines written in the C programming language. These routines should be completely portable to any computing platform and operating system that has Fortran 90/95 and C compilers. DB90 allows a program to supply relation names and up to 5 integer key values to uniquely identify each record of each relation. This permits the user to select records or retrieve data in any desired order.
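
    DB90's access pattern, a relation name plus up to five integer keys identifying each record, maps naturally onto a dictionary keyed by (relation, key-tuple). The Python sketch below mimics that idea only; DB90's actual Fortran interface is not shown.

      db = {}

      def put(relation, keys, record):
          """Store a record under a relation name and up to 5 integer keys."""
          assert 1 <= len(keys) <= 5, "DB90 allows up to 5 integer key values"
          db[(relation, tuple(keys))] = record

      def get(relation, keys):
          """Retrieve the uniquely identified record."""
          return db[(relation, tuple(keys))]

      put("loads", (1, 3), {"case": 1, "node": 3, "force": 1250.0})
      print(get("loads", (1, 3)))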

  17. Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula

    Science.gov (United States)

    Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit from automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described and the very first operational experiences with a small number of strategies for timely allocation and release of resources will be discussed.
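
    A toy version of the elasticity decision described above might look like the rule below: grow while work is queued and the saturated site still grants cores, release resources promptly when the queue drains. The thresholds are invented; OpenNebula's actual interfaces are not used.

      def decide(queued_jobs, running_vms, grantable_cores):
          """Invented autoscaling rule for a small elastic application."""
          if queued_jobs > 2 * running_vms and grantable_cores > 0:
              return "scale up"
          if queued_jobs == 0 and running_vms > 0:
              return "scale down"   # free cores for competing applications
          return "hold"

      for state in [(40, 4, 8), (10, 6, 0), (0, 6, 8)]:
          print(state, "->", decide(*state))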

  18. Advanced computational modelling for drying processes – A review

    International Nuclear Information System (INIS)

    Highlights: • Understanding the product dehydration process is a key aspect in drying technology. • Advanced modelling thereof plays an increasingly important role in developing next-generation drying technology. • Dehydration modelling should be more energy-oriented. • An integrated “nexus” modelling approach is needed to produce more energy-smart products. • Multi-objective process optimisation requires the development of more complete multiphysics models. - Abstract: Drying is one of the most complex and energy-consuming chemical unit operations. R&D efforts in drying technology have skyrocketed in the past decades, as new drivers emerged in this industry beyond procuring prime product quality and high throughput, namely reducing energy consumption and carbon footprint as well as improving food safety and security. Solutions are sought in optimising existing technologies or developing new ones that increase energy and resource efficiency, use renewable energy, recuperate waste heat and reduce product loss, and thus also the embodied energy therein. Novel tools are required to push such technological innovations and their subsequent implementation. Computer-aided drying process engineering in particular has large potential for developing next-generation drying technology, including more energy-smart and environmentally friendly products and dryer systems. This review paper deals with rapidly emerging advanced computational methods for modelling the dehydration of porous materials, particularly foods. Drying is approached as a combined multiphysics, multiscale and multiphase problem. These advanced methods include computational fluid dynamics, several multiphysics modelling methods (e.g. conjugate modelling), multiscale modelling, and modelling of material properties and the associated propagation of material property variability. Apart from the current challenges for each of these, future perspectives should be directed towards material property...

  19. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm

    Science.gov (United States)

    Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable performance improvement rate on the makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, parametrically measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, more suitable for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239

  20. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm.

    Science.gov (United States)

    Abdulhamid, Shafi'i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable performance improvement rate on the makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, parametrically measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, more suitable for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239
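
    GBLCA itself is a league-championship metaheuristic and is not reproduced here, but the MinMin baseline the two records above compare against is simple enough to sketch: repeatedly assign the task with the smallest achievable completion time to the machine that achieves it.

      task_len = [8, 4, 6, 2]    # invented task lengths
      machine_t = [0.0, 0.0]     # current finish time of each machine

      unassigned = set(range(len(task_len)))
      while unassigned:
          # Best (completion time, machine) for every unassigned task.
          best = {t: min((machine_t[m] + task_len[t], m)
                         for m in range(len(machine_t)))
                  for t in unassigned}
          # MinMin: schedule the task whose best completion time is smallest.
          t = min(best, key=lambda k: best[k][0])
          ct, m = best[t]
          machine_t[m] = ct
          unassigned.remove(t)
          print(f"task {t} -> machine {m}, completes at t={ct}")

      print("makespan:", max(machine_t))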

  1. Advanced intelligent computational technologies and decision support systems

    CERN Document Server

    Kountchev, Roumen

    2014-01-01

    This book offers a state-of-the-art collection covering themes related to advanced intelligent computational technologies and decision support systems, which can be applied to fields like healthcare to assist humans in solving problems. The book brings forward a wealth of ideas, algorithms and case studies on themes such as: intelligent predictive diagnosis; intelligent analysis of medical images; a new format for coding single images and sequences of medical images; medical decision support systems; diagnosis of Down’s syndrome; computational perspectives for electronic fetal monitoring; efficient compression of CT images; adaptive interpolation and halftoning for medical images; applications of artificial neural networks for solving real-life problems; the present state and perspectives of electronic healthcare record systems; adaptive approaches for noise reduction in sequences of CT images, etc.

  2. Operational Philosophy for the Advanced Test Reactor National Scientific User Facility

    Energy Technology Data Exchange (ETDEWEB)

    J. Benson; J. Cole; J. Jackson; F. Marshall; D. Ogden; J. Rempe; M. C. Thelen

    2013-02-01

    In 2007, the Department of Energy (DOE) designated the Advanced Test Reactor (ATR) as a National Scientific User Facility (NSUF). At its core, the ATR NSUF Program combines access to a portion of the available ATR radiation capability, the associated required examination and analysis facilities at the Idaho National Laboratory (INL), and INL staff expertise with novel ideas provided by external contributors (universities, laboratories, and industry). These collaborations define the cutting edge of nuclear technology research in high-temperature and radiation environments, contribute to improved industry performance of current and future light-water reactors (LWRs), and stimulate cooperative research between user groups conducting basic and applied research. To make possible the broadest access to key national capability, the ATR NSUF formed a partnership program that also makes available access to critical facilities outside of the INL. Finally, the ATR NSUF has established a sample library that allows access to pre-irradiated samples as needed by national research teams.

  3. The real-time learning mechanism of the Scientific Research Associates Advanced Robotic System (SRAARS)

    Science.gov (United States)

    Chen, Alexander Y.

    1990-01-01

    The Scientific Research Associates Advanced Robotic System (SRAARS) is an intelligent robotic system with autonomous learning capability in geometric reasoning. The system is equipped with one global intelligence center (GIC) and eight local intelligence centers (LICs). It controls sixteen links with fourteen active joints, which constitute two articulated arms, an extensible lower body, a vision system with two CCD cameras, and a mobile base. The on-board knowledge-based system supports the learning controller with model representations of both the robot and the working environment. Through consecutive verifying and planning procedures, hypothesis-and-test routines, and a learning-by-analogy paradigm, the system autonomously builds up its own understanding of the relationship between itself (i.e., the robot) and the focused environment for the purposes of collision avoidance, motion analysis, and object manipulation. The intelligence of SRAARS presents a valuable technical advantage for implementing robotic systems for space exploration and space station operations.

  4. The ACP (Advanced Computer Program) multiprocessor system at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Case, G.; Cook, A.; Fischler, M.; Gaines, I.; Hance, R.; Husby, D.

    1986-09-01

    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost effective for many high energy physics problems. The system is based on single board computers which cost under $2000 each to build, including 2 Mbytes of on-board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32-bit microprocessor, the other runs with AT&T's 32100; both include the corresponding floating point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53-processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing "nodes" sit are connected via a high speed "Branch Bus" to one or more MicroVAX computers which act as hosts, handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use and has been tested error free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed; it allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere.

  5. The ACP [Advanced Computer Program] multiprocessor system at Fermilab

    International Nuclear Information System (INIS)

    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost effective for many high energy physics problems. The system is based on single board computers which cost under $2000 each to build, including 2 Mbytes of on-board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32-bit microprocessor, the other runs with AT&T's 32100; both include the corresponding floating point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53-processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing "nodes" sit are connected via a high speed "Branch Bus" to one or more MicroVAX computers which act as hosts, handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use and has been tested error free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed; it allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere.

  6. Socioscientific Issues: A Path Towards Advanced Scientific Literacy and Improved Conceptual Understanding of Socially Controversial Scientific Theories

    Science.gov (United States)

    Pinzino, Dean William

    This thesis investigates the use of socioscientific issues (SSI) in the high school science classroom as an introduction to argumentation and socioscientific reasoning, with the goal of improving students' scientific literacy (SL). Current research is reviewed that supports the likelihood of students developing a greater conceptual understanding of scientific theories, as well as a deeper understanding of the nature of science (NOS), through participation in informal and formal forms of argumentation in the context of SSI. Significant gains in such understanding may improve a student's ability to recognize the rigor, legitimacy, and veracity of scientific claims and better discern science from pseudoscience. Furthermore, students who participate in significant SSI instruction by negotiating a range of science-related social issues can make significant gains in content knowledge and develop the life-long skills of argumentation and evidence-based reasoning, goals not attainable in traditional lecture-based science instruction. SSI-based instruction may therefore help students become responsible citizens. This synthesis also suggests that the improvements in science literacy and NOS understanding that develop from sustained engagement in SSI-based instruction will better prepare students to examine and scrutinize socially controversial scientific theories (i.e., evolution, global warming, and the Big Bang).

  7. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2015-01-01

    Full Text Available The aim of this study is to present an approach to the introduction to pipeline and parallel computing, using a model of a multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is an essential topic to include. At the same time, the topic is among the most motivating tasks due to its comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. It allows implementing an educational platform for a constructivist learning process, thus enabling learners’ experimentation with the provided programming models, developing learners’ competences in modern scientific research and computational thinking, and capturing the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The programming language C for developing programming models, and the message passing interface (MPI) and OpenMP parallelization tools, have been chosen for the implementation.
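
    The course builds its pipelines in C with MPI and OpenMP; as a language-neutral taste of the same staged data flow, the generator sketch below pushes items through two processing phases one at a time, like jobs through successive service stations of a multiphase queueing system.

      def source(n):
          for i in range(n):
              yield i

      def square(stream):        # phase 1 of the pipeline
          for x in stream:
              yield x * x

      def offset(stream, k):     # phase 2 of the pipeline
          for x in stream:
              yield x + k

      for item in offset(square(source(5)), k=100):
          print(item)            # 100, 101, 104, 109, 116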

  8. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
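
    The contrast at the heart of these conferences can be shown in a few lines: estimating the integral of x^2 over [0, 1] (exact value 1/3) with pseudo-random points versus a van der Corput low-discrepancy sequence, the simplest quasi-Monte Carlo point set. The sketch is self-contained and illustrative only.

      import random

      def van_der_corput(i, base=2):
          """i-th point of the base-`base` van der Corput low-discrepancy sequence."""
          q, denom = 0.0, 1.0
          while i:
              i, r = divmod(i, base)
              denom *= base
              q += r / denom
          return q

      N = 4096
      f = lambda x: x * x
      mc = sum(f(random.random()) for _ in range(N)) / N
      qmc = sum(f(van_der_corput(i + 1)) for i in range(N)) / N
      print(f"exact = 0.333333, MC = {mc:.6f}, QMC = {qmc:.6f}")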

  9. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D.; Sewell, Christopher (LANL); Childs, Hank (U of Oregon); Ma, Kwan-Liu (UC Davis); Geveci, Berk (Kitware); Meredith, Jeremy (ORNL)

    2016-05-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  10. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sewell, Christopher [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Childs, Hank [Univ. of Oregon, Eugene, OR (United States); Ma, Kwan-Liu [Univ. of California, Davis, CA (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States); Meredith, Jeremy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  11. International conference on Advances in Intelligent Control and Innovative Computing

    CERN Document Server

    Castillo, Oscar; Huang, Xu; Intelligent Control and Innovative Computing

    2012-01-01

    In the lightning-fast world of intelligent control and cutting-edge computing, it is vitally important to stay abreast of developments that seem to follow each other without pause. This publication features the very latest and some of the very best current research in the field, with 32 revised and extended research articles written by prominent researchers in the field. Culled from contributions to the key 2011 conference Advances in Intelligent Control and Innovative Computing, held in Hong Kong, the articles deal with a wealth of relevant topics, from the most recent work in artificial intelligence and decision-supporting systems, to automated planning, modelling and simulation, signal processing, and industrial applications. Not only does this work communicate the current state of the art in intelligent control and innovative computing, it is also an illuminating guide to up-to-date topics for researchers and graduate students in the field. The quality of the contents is absolutely assured by the high pro...

  12. Scientific Reasoning and Argumentation: Advancing an Interdisciplinary Research Agenda in Education

    Science.gov (United States)

    Fischer, Frank; Kollar, Ingo; Ufer, Stefan; Sodian, Beate; Hussmann, Heinrich; Pekrun, Reinhard; Neuhaus, Birgit; Dorner, Birgit; Pankofer, Sabine; Fischer, Martin; Strijbos, Jan-Willem; Heene, Moritz; Eberle, Julia

    2014-01-01

    Scientific reasoning and scientific argumentation are highly valued outcomes of K-12 and higher education. In this article, we first review main topics and key findings of three different strands of research, namely research on the development of scientific reasoning, research on scientific argumentation, and research on approaches to support…

  13. On the Performance of the Python Programming Language for Serial and Parallel Scientific Computations

    Directory of Open Access Journals (Sweden)

    Xing Cai

    2005-01-01

    Full Text Available This article addresses the performance of scientific applications that use the Python programming language. First, we investigate several techniques for improving the computational efficiency of serial Python codes. Then, we discuss the basic programming techniques in Python for parallelizing serial scientific applications. It is shown that an efficient implementation of the array-related operations is essential for achieving good parallel performance, as for the serial case. Once the array-related operations are efficiently implemented, probably using a mixed-language implementation, good serial and parallel performance become achievable. This is confirmed by a set of numerical experiments. Python is also shown to be well suited for writing high-level parallel programs.
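
    The article's central point can be compressed into a few lines: element-wise loops in the interpreter are slow, while the same reduction done through numpy's compiled array operations is orders of magnitude faster. The timing sketch below (not from the article) makes the contrast concrete.

      import time
      import numpy as np

      x = np.random.rand(1_000_000)

      t0 = time.perf_counter()
      s_loop = 0.0
      for v in x:                    # interpreted: one dispatch per element
          s_loop += v * v
      t_loop = time.perf_counter() - t0

      t0 = time.perf_counter()
      s_vec = float(np.dot(x, x))    # one call into compiled code
      t_vec = time.perf_counter() - t0

      print(f"loop {t_loop:.3f} s, numpy {t_vec:.4f} s, ~{t_loop / t_vec:.0f}x")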

  14. Smart Libraries: Best SQE Practices for Libraries with an Emphasis on Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Miller, M C; Reus, J F; Matzke, R P; Koziol, Q A; Cheng, A P

    2004-12-15

    As scientific computing applications grow in complexity, more and more functionality is being packaged in independently developed libraries. Worse, as the computing environments in which these applications run grow in complexity, it gets easier to make mistakes in building, installing and using libraries, as well as the applications that depend on them. Unfortunately, the SQA standards developed so far focus primarily on applications, not libraries. We show that SQA standards for libraries differ from those for applications in many respects. We introduce and describe a variety of practices aimed at minimizing the likelihood of making mistakes in using libraries and at maximizing users' ability to diagnose and correct them when they occur. We introduce the term Smart Library to refer to a library that is developed with these basic principles in mind. We draw upon specific examples from existing products we believe incorporate smart features: MPI, a parallel message passing library, and HDF5 and SAF, both of which are parallel I/O libraries supporting scientific computing applications. We conclude with a narrative of some real-world experiences in using smart libraries with Ale3d, VisIt and SAF.
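
    One smart-library practice of the kind advocated above is to record the build-time version inside the library and verify it when the application starts, so that mismatched installs fail loudly instead of corrupting results. HDF5 performs a check of this sort; the sketch below is a generic, invented illustration rather than HDF5's API.

      BUILT_WITH = (1, 10, 4)    # version baked into the library at build time

      def check_version(expected):
          """Refuse to run if the caller expects a different major.minor version."""
          if expected[:2] != BUILT_WITH[:2]:
              raise RuntimeError(
                  f"library built as {BUILT_WITH}, caller expects {expected}; "
                  "rebuild the application against the installed library"
              )

      check_version((1, 10, 7))      # passes: same major.minor
      try:
          check_version((1, 8, 0))   # incompatible: fails loudly
      except RuntimeError as err:
          print("caught:", err)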

  15. Advanced information processing system: Inter-computer communication services

    Science.gov (United States)

    Burkhardt, Laura; Masotto, Tom; Sims, J. Terry; Whittredge, Roy; Alger, Linda S.

    1991-01-01

    The purpose is to document the functional requirements and detailed specifications for the Inter-Computer Communications Services (ICCS) of the Advanced Information Processing System (AIPS). An introductory section is provided to outline the overall architecture and functional requirements of the AIPS and to present an overview of the ICCS. An overview of the AIPS architecture as well as a brief description of the AIPS software is given. The guarantees of the ICCS are provided, and the ICCS is described as a seven-layered International Standards Organization (ISO) Model. The ICCS functional requirements, functional design, and detailed specifications as well as each layer of the ICCS are also described. A summary of results and suggestions for future work are presented.

  16. Computational modeling, optimization and manufacturing simulation of advanced engineering materials

    CERN Document Server

    2016-01-01

    This volume presents recent research work focused in the development of adequate theoretical and numerical formulations to describe the behavior of advanced engineering materials.  Particular emphasis is devoted to applications in the fields of biological tissues, phase changing and porous materials, polymers and to micro/nano scale modeling. Sensitivity analysis, gradient and non-gradient based optimization procedures are involved in many of the chapters, aiming at the solution of constitutive inverse problems and parameter identification. All these relevant topics are exposed by experienced international and inter institutional research teams resulting in a high level compilation. The book is a valuable research reference for scientists, senior undergraduate and graduate students, as well as for engineers acting in the area of computational material modeling.

  17. Advances in Computational Techniques to Study GPCR-Ligand Recognition.

    Science.gov (United States)

    Ciancetta, Antonella; Sabbadin, Davide; Federico, Stephanie; Spalluto, Giampiero; Moro, Stefano

    2015-12-01

    G-protein-coupled receptors (GPCRs) are among the most intensely investigated drug targets. The recent revolutions in protein engineering and molecular modeling algorithms have overturned the research paradigm in the GPCR field. While the numerous ligand-bound X-ray structures determined have provided invaluable insights into GPCR structure and function, the development of algorithms exploiting graphics processing units (GPUs) has made the simulation of GPCRs in explicit lipid-water environments feasible within reasonable computation times. In this review we present a survey of the recent advances in structure-based drug design approaches with a particular emphasis on the elucidation of the ligand recognition process in class A GPCRs by means of membrane molecular dynamics (MD) simulations. PMID:26538318

  18. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    International Nuclear Information System (INIS)

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction. 10 refs., 7 figs

  19. Advanced Simulation and Computing FY10-11 Implementation Plan Volume 2, Rev. 0

    Energy Technology Data Exchange (ETDEWEB)

    Carnes, B

    2009-06-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  20. Advanced Simulation and Computing FY09-FY10 Implementation Plan, Volume 2, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, R; Hopson, J; Peery, J; McCoy, M

    2008-10-07

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  1. Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1

    Energy Technology Data Exchange (ETDEWEB)

    Kissel, L

    2009-04-01

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  2. Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2009-09-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  3. Advanced Simulation and Computing FY08-09 Implementation Plan, Volume 2, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Kusnezov, D; Bickel, T; McCoy, M; Hopson, J

    2007-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  4. Advanced Simulation and Computing FY08-09 Implementation Plan Volume 2 Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, M; Kusnezov, D; Bikkel, T; Hopson, J

    2007-04-25

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  5. Advanced Simulation and Computing FY07-08 Implementation Plan Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    Kusnezov, D; Hale, A; McCoy, M; Hopson, J

    2006-06-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  6. Reliability of an interactive computer program for advance care planning.

    Science.gov (United States)

    Schubart, Jane R; Levi, Benjamin H; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-06-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explains health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson Formula 20 [KR-20]=0.83-0.95 and 0.86-0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time. PMID:22512830
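
    For reference, the Kuder-Richardson Formula 20 statistic reported above can be computed from a matrix of dichotomous item responses as sketched below (a hedged illustration, not the study's code; the toy response matrix is hypothetical).

        # KR-20 = (k / (k - 1)) * (1 - sum(p_j * q_j) / var(total scores))
        import numpy as np

        def kr20(responses: np.ndarray) -> float:
            """responses: shape (n_respondents, n_items), entries 0 or 1."""
            k = responses.shape[1]                         # number of items
            p = responses.mean(axis=0)                     # proportion endorsing each item
            q = 1.0 - p
            total_var = responses.sum(axis=1).var(ddof=1)  # sample variance of total scores
            return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

        # Hypothetical toy data: 5 respondents x 4 dichotomous items.
        data = np.array([[1, 1, 0, 1],
                         [1, 0, 0, 1],
                         [0, 0, 0, 0],
                         [1, 1, 1, 1],
                         [1, 1, 0, 0]])
        print(round(kr20(data), 3))  # about 0.85 for this toy matrix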

  7. Reliability of an Interactive Computer Program for Advance Care Planning

    Science.gov (United States)

    Levi, Benjamin H.; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-01-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explains health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson Formula 20 [KR-20]=0.83–0.95 and 0.86–0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time. PMID:22512830

  8. Recent advances in computational structural reliability analysis methods

    Science.gov (United States)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
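
    As a minimal illustration of the probabilistic framing described above (an assumed example, not taken from the summary), a crude Monte Carlo estimate of the failure probability for a simple limit-state function g(R, S) = R - S is sketched below; the distributions and their parameters are hypothetical, and in practice advanced methods such as FORM/SORM or importance sampling are preferred when failure probabilities are very small.

        # Crude Monte Carlo estimate of P[g(R, S) < 0] for g = R - S.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 1_000_000
        R = rng.normal(loc=10.0, scale=1.0, size=n)   # hypothetical resistance
        S = rng.normal(loc=7.0, scale=1.5, size=n)    # hypothetical load effect
        pf = np.mean(R - S < 0.0)                     # estimated probability of failure
        print(f"estimated Pf = {pf:.2e}")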

  9. Advanced Test Reactor National Scientific User Facility (ATR NSUF) Monthly Report October 2014

    Energy Technology Data Exchange (ETDEWEB)

    Dan Ogden

    2014-10-01

    Advanced Test Reactor National Scientific User Facility (ATR NSUF) Monthly Report October 2014. Highlights:
    • Rory Kennedy, Dan Ogden and Brenden Heidrich traveled to Germantown October 6-7 for a review of the Infrastructure Management mission with Shane Johnson, Mike Worley, Bradley Williams and Alison Hahn from NE-4 and Mary McCune from NE-3. Heidrich briefed the group on the project progress from July to October 2014 as well as the planned path forward for FY15.
    • Jim Cole gave two invited university seminars at Ohio State University and the University of Florida, providing an overview of NSUF, including available capabilities and the process for accessing facilities through the peer-reviewed proposal process.
    • Jim Cole and Rory Kennedy co-chaired the NuMat meeting with Todd Allen. The meeting, sponsored by Elsevier publishing, was held in Clearwater, Florida, and is considered one of the premier nuclear fuels and materials conferences. Over 340 delegates attended, with 160 oral presentations and over 200 posters presented over 4 days.
    • Thirty-one pre-applications were submitted for NSUF access through the NE-4 Combined Innovative Nuclear Research Funding Opportunity Announcement.
    • Fourteen proposals were received for the NSUF Rapid Turnaround Experiment Summer 2014 call. Proposal evaluations are underway.
    • John Jackson and Rory Kennedy attended the Nuclear Fuels Industry Research meeting. Jackson presented an overview of ongoing NSUF industry research.

  10. The Ultrahigh Resolution IXS Beamline of NSLS-II: Recent Advances and Scientific Opportunities

    International Nuclear Information System (INIS)

    The ultrahigh resolution IXS beamline of NSLS-II is designed to probe a region of dynamic response that requires an ultrahigh energy and momentum resolution of up to 0.1 meV and −1 respectively, which is currently still beyond the reach of existing low and high frequency inelastic scattering probes. Recent advances at NSLS-II in developing the required x-ray optics and instrumentation based on the use of extremely asymmetric Bragg back reflections of Si have allowed us to achieve sub-meV energy resolution with sharp tails and high efficiency at a medium energy of around 9.1 keV, thereby validating the optical design of the beamline for the baseline scope and paving the way for further development towards the ultimate goal of 0.1 meV. The IXS beamline is expected to provide a broad range of scientific opportunities, particularly in areas of liquid, disordered and bio-molecular systems.

  11. Advances in computer technology: impact on the practice of medicine.

    Science.gov (United States)

    Groth-Vasselli, B; Singh, K; Farnsworth, P N

    1995-01-01

    Advances in computer technology provide a wide range of applications which are revolutionizing the practice of medicine. The development of new software for the office creates a web of communication among physicians, staff members, health care facilities and associated agencies. This provides the physician with the prospect of a paperless office. At the other end of the spectrum, the development of 3D work stations and software based on computational chemistry permits visualization of protein molecules involved in disease. Computer assisted molecular modeling has been used to construct working 3D models of lens alpha-crystallin. The 3D structure of alpha-crystallin is basic to our understanding of the molecular mechanisms involved in lens fiber cell maturation, stabilization of the inner nuclear region, the maintenance of lens transparency and cataractogenesis. The major component of the high molecular weight aggregates that occur during cataractogenesis is alpha-crystallin subunits. Subunits of alpha-crystallin occur in other tissues of the body. In the central nervous system accumulation of these subunits in the form of dense inclusion bodies occurs in pathological conditions such as Alzheimer's disease, Huntington's disease, multiple sclerosis and toxoplasmosis (Iwaki, Wisniewski et al., 1992), as well as neoplasms of astrocyte origin (Iwaki, Iwaki, et al., 1991). Also cardiac ischemia is associated with an increased alpha B synthesis (Chiesi, Longoni et al., 1990). On a more global level, the molecular structure of alpha-crystallin may provide information pertaining to the function of small heat shock proteins, hsp, in maintaining cell stability under the stress of disease. PMID:8721907

  12. Validation of physics and thermalhydraulic computer codes for advanced Candu reactor applications

    International Nuclear Information System (INIS)

    Atomic Energy of Canada Ltd. (AECL) is developing an Advanced Candu Reactor (ACR) that is an evolutionary advancement of the currently operating Candu 6 reactors. The ACR is being designed to produce electrical power for a capital cost and at a unit-energy cost significantly less than that of the current reactor designs. The ACR retains the modular Candu concept of horizontal fuel channels surrounded by a heavy water moderator. However, ACR uses slightly enriched uranium fuel compared to the natural uranium used in Candu 6. This achieves the twin goals of improved economics (via large reductions in the heavy water moderator volume and replacement of the heavy water coolant with light water coolant) and improved safety. AECL has developed and implemented a software quality assurance program to ensure that its analytical, scientific and design computer codes meet the required standards for software used in safety analyses. Since the basic design of the ACR is equivalent to that of the Candu 6, most of the key phenomena associated with the safety analyses of ACR are common, and the Candu industry standard tool-set of safety analysis codes can be applied to the analysis of the ACR. A systematic assessment of computer code applicability addressing the unique features of the ACR design was performed covering the important aspects of the computer code structure, models, constitutive correlations, and validation database. Arising from this assessment, limited additional requirements for code modifications and extensions to the validation databases have been identified. This paper provides an outline of the AECL software quality assurance program process for the validation of computer codes used to perform physics and thermal-hydraulics safety analyses of the ACR. It describes the additional validation work that has been identified for these codes and the planned, and ongoing, experimental programs to extend the code validation as required to address specific ACR design

  13. The Centre of High-Performance Scientific Computing, Geoverbund, ABC/J - Geosciences enabled by HPSC

    Science.gov (United States)

    Kollet, Stefan; Görgen, Klaus; Vereecken, Harry; Gasper, Fabian; Hendricks-Franssen, Harrie-Jan; Keune, Jessica; Kulkarni, Ketan; Kurtz, Wolfgang; Sharples, Wendy; Shrestha, Prabhakar; Simmer, Clemens; Sulis, Mauro; Vanderborght, Jan

    2016-04-01

    The Centre of High-Performance Scientific Computing (HPSC TerrSys) was founded in 2011 to establish a centre of competence in high-performance scientific computing in terrestrial systems and the geosciences, enabling fundamental and applied geoscientific research in the Geoverbund ABC/J (the geoscientific research alliance of the Universities of Aachen, Cologne and Bonn and the Research Centre Jülich, Germany). The specific goals of HPSC TerrSys are to achieve relevance at the national and international level in (i) the development and application of HPSC technologies in the geoscientific community; (ii) student education; (iii) HPSC services and support, also to the wider geoscientific community; and (iv) the industry and public sectors via, e.g., useful applications and data products. A key feature of HPSC TerrSys is the Simulation Laboratory Terrestrial Systems, which is located at the Jülich Supercomputing Centre (JSC) and provides extensive capabilities with respect to porting, profiling, tuning and performance monitoring of geoscientific software in JSC's supercomputing environment. We will present a summary of success stories of HPSC applications, including integrated terrestrial model development; parallel profiling and its application from watersheds to the continental scale; massively parallel data assimilation using physics-based models and ensemble methods; quasi-operational terrestrial water and energy monitoring; and convection-permitting climate simulations over Europe. The success stories stress the need for a formalized education of students in the application of HPSC technologies in the future.

  14. Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities

    International Nuclear Information System (INIS)

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.
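
    The observation about total read throughput decreasing with the number of client processes can be probed with a sketch like the one below (illustrative only, not the paper's benchmark; the file path, sizes and client counts are hypothetical, and operating-system caching would have to be controlled in a real measurement).

        # Measure aggregate read throughput versus number of concurrent readers.
        import os
        import time
        from multiprocessing import Pool

        TEST_FILE = "/tmp/storage_test.bin"   # hypothetical path on the file system under test
        CHUNK = 4 * 1024 * 1024               # 4 MiB per read

        def read_file(_):
            """Read the whole test file in CHUNK-sized pieces; return bytes read."""
            total = 0
            with open(TEST_FILE, "rb") as f:
                while True:
                    data = f.read(CHUNK)
                    if not data:
                        return total
                    total += len(data)

        if __name__ == "__main__":
            if not os.path.exists(TEST_FILE):
                # Create a 1 GiB test file once (256 blocks of 4 MiB).
                block = os.urandom(CHUNK)
                with open(TEST_FILE, "wb") as f:
                    for _ in range(256):
                        f.write(block)
            for nproc in (1, 2, 4, 8, 16):
                t0 = time.perf_counter()
                with Pool(nproc) as pool:
                    total = sum(pool.map(read_file, range(nproc)))
                dt = time.perf_counter() - t0
                print(f"{nproc:2d} clients: {total / dt / 2**20:.1f} MiB/s aggregate")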

  15. Availability measurement of grid services from the perspective of a scientific computing centre

    Science.gov (United States)

    Marten, H.; Koenig, T.

    2011-12-01

    The Karlsruhe Institute of Technology (KIT) is the merger of Forschungszentrum Karlsruhe and the Technical University Karlsruhe. The Steinbuch Centre for Computing (SCC) was one of the first new organizational units of KIT, combining the former Institute for Scientific Computing of Forschungszentrum Karlsruhe and the Computing Centre of the University. IT service management according to the worldwide de-facto-standard "IT Infrastructure Library (ITIL)" [1] was chosen by SCC as a strategic element to support the merging of the two existing computing centres located at a distance of about 10 km. The availability and reliability of IT services directly influence the customer satisfaction as well as the reputation of the service provider, and unscheduled loss of availability due to hardware or software failures may even result in severe consequences like data loss. Fault tolerant and error correcting design features are reducing the risk of IT component failures and help to improve the delivered availability. The ITIL process controlling the respective design is called Availability Management [1]. This paper discusses Availability Management regarding grid services delivered to WLCG and provides a few elementary guidelines for availability measurements and calculations of services consisting of arbitrary numbers of components.
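
    An elementary example of the kind of availability calculation mentioned above (the component availabilities are hypothetical numbers): a service built from components in series is available only if every component is, while redundant components in parallel fail only if all of them fail.

        from math import prod

        def availability_series(components):
            """Service fails if any component in the chain fails."""
            return prod(components)

        def availability_parallel(components):
            """Service fails only if every redundant component fails."""
            return 1.0 - prod(1.0 - a for a in components)

        # Hypothetical numbers: two redundant front-ends in front of one storage system.
        front_ends = availability_parallel([0.99, 0.99])      # 0.9999
        service = availability_series([front_ends, 0.995])    # about 0.9949
        print(f"{service:.4f}")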

  16. Availability measurement of grid services from the perspective of a scientific computing centre

    International Nuclear Information System (INIS)

    The Karlsruhe Institute of Technology (KIT) is the merger of Forschungszentrum Karlsruhe and the Technical University Karlsruhe. The Steinbuch Centre for Computing (SCC) was one of the first new organizational units of KIT, combining the former Institute for Scientific Computing of Forschungszentrum Karlsruhe and the Computing Centre of the University. IT service management according to the worldwide de-facto-standard 'IT Infrastructure Library (ITIL)' was chosen by SCC as a strategic element to support the merging of the two existing computing centres located at a distance of about 10 km. The availability and reliability of IT services directly influence the customer satisfaction as well as the reputation of the service provider, and unscheduled loss of availability due to hardware or software failures may even result in severe consequences like data loss. Fault tolerant and error correcting design features are reducing the risk of IT component failures and help to improve the delivered availability. The ITIL process controlling the respective design is called Availability Management. This paper discusses Availability Management regarding grid services delivered to WLCG and provides a few elementary guidelines for availability measurements and calculations of services consisting of arbitrary numbers of components.

  17. Advanced Test Reactor National Scientific User Facility (ATR NSUF) Monthly Report November 2014

    Energy Technology Data Exchange (ETDEWEB)

    Soelberg, Renae [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-11-01

    Advanced Test Reactor National Scientific User Facility (ATR NSUF) Monthly Report November 2014. Highlights:
    • Rory Kennedy and Sarah Robertson attended the American Nuclear Society Winter Meeting and Nuclear Technology Expo in Anaheim, California, Nov. 10-13. ATR NSUF exhibited at the technology expo, where hundreds of meeting participants had an opportunity to learn more about ATR NSUF. Dr. Kennedy briefed the Nuclear Engineering Department Heads Organization (NEDHO) on the workings of the ATR NSUF.
    • Rory Kennedy, James Cole and Dan Ogden participated in a reactor instrumentation discussion with Jean-Francois Villard and Christopher Destouches of CEA and several members of the INL staff.
    • ATR NSUF received approval from the NE-20 office to start planning the annual Users Meeting. The meeting will be held at INL, June 22-25.
    • Mike Worley, director of the Office of Innovative Nuclear Research (NE-42), visited INL Nov. 4-5.
    Milestones Completed:
    • Recommendations for the Summer Rapid Turnaround Experiment awards were submitted to DOE-HQ Nov. 12 (Level 2 milestone due Nov. 30).
    Major Accomplishments/Activities:
    • The University of California, Santa Barbara 2 experiment was unloaded from the GE-2000 at HFEF. The experiment specimen packs will be removed and shipped to ORNL for PIE.
    • The Terrani experiment, one of three FY 2014 new awards, was completed utilizing the Advanced Photon Source MRCAT beamline. The experiment investigated the chemical state of Ag and Pd in the SiC shell of irradiated TRISO particles via X-ray Absorption Fine Structure (XAFS) spectroscopy.
    Upcoming Meetings/Events:
    • The ATR NSUF program review meeting will be held Dec. 9-10 at L'Enfant Plaza. In addition to NSUF staff and users, NE-4, NE-5 and NE-7 representatives will attend the meeting.
    Awarded Research Projects:
    • Boise State University Rapid Turnaround Experiments (14-485 and 14-486): Nanoindentation and TEM work on the T91, HT9, HCM12A and 9Cr ODS specimens has been completed at

  18. A New Approach in Advance Network Reservation and Provisioning for High-Performance Scientific Data Transfers

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-01-28

    Scientific applications already generate many terabytes and even petabytes of data from supercomputer runs and large-scale experiments. The need for transferring data chunks of ever-increasing sizes through the network shows no sign of abating. Hence, we need high-bandwidth, high-speed networks such as ESnet (Energy Sciences Network). Network reservation systems such as ESnet's OSCARS (On-demand Secure Circuits and Advance Reservation System) establish secure virtual circuits with guaranteed bandwidth at a certain time, for a certain bandwidth and length of time. OSCARS checks network availability and capacity for the specified period of time, and allocates the requested bandwidth for that user if it is available. If the requested reservation cannot be granted, no further suggestion is returned to the user, and the user has no way to make an optimal choice. We report a new algorithm, where the user specifies the total volume that needs to be transferred, a maximum bandwidth that he/she can use, and a desired time period within which the transfer should be done. The algorithm can find alternate allocation possibilities, including the earliest time for completion or the shortest transfer duration, leaving the choice to the user. We present a novel approach for path finding in time-dependent networks, and a new polynomial algorithm to find possible reservation options according to given constraints. We have implemented our algorithm for testing and incorporation into a future version of ESnet's OSCARS. Our approach provides a basis for provisioning end-to-end high performance data transfers over storage and network resources.
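
    A greatly simplified sketch of the idea (not the OSCARS algorithm itself; the slot length, bandwidth profile and numbers are hypothetical): given the bandwidth available per time slot inside the user's window, a total volume, and a client bandwidth cap, the earliest completion time can be found by accumulating the usable rate slot by slot.

        # Find the earliest slot by which the requested volume can be moved.
        def earliest_completion(avail_gbps, volume_gb, max_gbps, slot_seconds=300):
            """avail_gbps: available bandwidth (Gb/s) per time slot inside the window."""
            remaining = volume_gb * 8.0              # gigabits still to move
            for slot, avail in enumerate(avail_gbps):
                rate = min(avail, max_gbps)          # cannot exceed the user's cap
                remaining -= rate * slot_seconds
                if remaining <= 0.0:
                    return (slot + 1) * slot_seconds  # seconds until completion
            return None                               # not feasible inside the window

        # Hypothetical availability profile (Gb/s) for six 5-minute slots.
        profile = [2.0, 2.0, 0.5, 3.0, 3.0, 3.0]
        print(earliest_completion(profile, volume_gb=400, max_gbps=2.5))  # 1800 seconds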

  19. The Effects of Inquiry-Based Computer Simulation with Cooperative Learning on Scientific Thinking and Conceptual Understanding of Gas Laws

    Science.gov (United States)

    Abdullah, Sopiah; Shariff, Adilah

    2008-01-01

    The purpose of the study was to investigate the effects of inquiry-based computer simulation with heterogeneous-ability cooperative learning (HACL) and inquiry-based computer simulation with friendship cooperative learning (FCL) on (a) scientific reasoning (SR) and (b) conceptual understanding (CU) among Form Four students in Malaysian Smart…

  20. Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing

    Energy Technology Data Exchange (ETDEWEB)

    Fletcher, James H. [University of North Florida; Cox, Philip [University of North Florida; Harrington, William J [University of North Florida; Campbell, Joseph L [University of North Florida

    2013-09-03

    ABSTRACT Project Title: Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing PROJECT OBJECTIVE The objective of the project was to advance portable fuel cell system technology towards the commercial targets of power density, energy density and lifetime. These targets were laid out in the DOE’s R&D roadmap to develop an advanced direct methanol fuel cell power supply that meets commercial entry requirements. Such a power supply will enable mobile computers to operate non-stop, unplugged from the wall power outlet, by using the high energy density of methanol fuel contained in a replaceable fuel cartridge. Specifically this project focused on balance-of-plant component integration and miniaturization, as well as extensive component, subassembly and integrated system durability and validation testing. This design has resulted in a pre-production power supply design and a prototype that meet the rigorous demands of consumer electronic applications. PROJECT TASKS The proposed work plan was designed to meet the project objectives, which corresponded directly with the objectives outlined in the Funding Opportunity Announcement: To engineer the fuel cell balance-of-plant and packaging to meet the needs of consumer electronic systems, specifically at power levels required for mobile computing. UNF used existing balance-of-plant component technologies developed under its current US Army CERDEC project, as well as a previous DOE project completed by PolyFuel, to further refine them to both miniaturize and integrate their functionality to increase the system power density and energy density. Benefits of UNF’s novel passive water recycling MEA (membrane electrode assembly) and the simplified system architecture it enabled formed the foundation of the design approach. The package design was hardened to address orientation independence, shock, vibration, and environmental requirements. Fuel cartridge and fuel subsystems were improved to ensure effective fuel

  1. Development of advanced nodal diffusion methods for modern computer architectures

    International Nuclear Information System (INIS)

    A family of highly efficient multidimensional multigroup advanced neutron-diffusion nodal methods, ILLICO, was implemented on sequential, vector, and vector-concurrent computers. Three-dimensional realistic benchmark problems can be solved in vectorized mode in less than 0.73 s (33.86 Mflops) on a Cray X-MP/48. Vector-concurrent implementations yield speedups as high as 9.19 on an Alliant FX/8. These results show that the ILLICO method preserves essentially all of its speed advantage over finite-difference methods. A self-consistent higher-order nodal diffusion method was developed and implemented. Nodal methods for global nuclear reactor multigroup diffusion calculations which account explicitly for heterogeneities in the assembly nuclear properties were developed and evaluated. A systematic analysis of the zero-order variable cross section nodal method was conducted. Analyzing the KWU PWR depletion benchmark problem, it is shown that when burnup heterogeneities arise, ordinary nodal methods, which do not explicitly treat the heterogeneities, suffer a significant systematic error that accumulates. A nodal method that treats explicitly the space dependence of diffusion coefficients was developed and implemented. A consistent burnup-correction method for nodal microscopic depletion analysis was developed

  2. An intercomparison of single-tasking and multi-tasking systems for clinical scientific computing

    International Nuclear Information System (INIS)

    Various characteristics of the multi-tasking system used for clinical scientific computing, as implemented at UCH, have been investigated. It was found, for example, that both the individual CPU (Central Processing Unit) usage of tasks and the overall CPU usage when many users are simultaneously active are relatively low. This indicates, firstly, that multi-tasking systems are likely to be cost-effective (since there is potential for sharing resources) and, secondly, that disk access is probably more important in limiting the performance of such systems than is often realised. Considering both replacement cost and a simple model of single-user/multi-user systems, cost benefits of the order of 10-50% appear to have been realised when at least four separate users exist. This benefit is expressed as the cost saving achieved in providing computer power in that manner, and ignores the question of whether that computer power is used effectively. Considerations of reliability and extendibility have given impetus to the use of multiple CPUs, that is, creating a network. Guidelines for a loosely-connected network using Dijkstra's concept of levels have been developed.

  3. PARA'04 Workshop on State-of-the-art in Scientific Computing, June 20-23, 2004: Complementary Proceedings

    DEFF Research Database (Denmark)

    Dongarra, Jack; Madsen, Kaj; Wasniewski, Jerzy

    2004-01-01

    The PARA workshops in the past have been devoted to parallel computing methods in science and technology. There have been seven PARA meetings to date: PARA'94, PARA'95 and PARA'96 in Lyngby, Denmark, PARA'98 in Umeå, Sweden, PARA'2000 in Bergen, Norway, PARA'02 in Espoo, Finland, and PARA'04 again in Lyngby, Denmark. The first six meetings featured lectures in modern numerical algorithms, computer science, engineering, and industrial applications, all in the context of scientific parallel computing. This meeting in the series, the PARA'04 Workshop with the title State of the Art in Scientific

  4. Deadline aware virtual machine scheduler for scientific grids and cloud computing

    CERN Document Server

    Khalid, Omer; Anthony, Richard; Petridis, Miltos; Parrot, Kevin; Schulz, Markus; 10.1109/WAINA.2010.107

    2010-01-01

    Virtualization technology has enabled applications to be decoupled from the underlying hardware, providing the benefits of portability, better control over the execution environment, and isolation. It has been widely adopted in scientific grids and commercial clouds. Virtualization, however, incurs a performance penalty despite its benefits, and this penalty can be significant for systems dealing with uncertainty, such as High Performance Computing (HPC) applications where jobs have tight deadlines and depend on other jobs before they can run. The major obstacle lies in bridging the gap between the performance requirements of a job and the performance offered by the virtualization technology if the jobs were to be executed in virtual machines. In this paper, we present a novel approach to optimize job deadlines when run in virtual machines by developing a deadline-aware algorithm that responds to job execution delays in real time, and dynamically optimizes jobs to meet their deadline obligations. Our approaches borrowed co...
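
    A hedged sketch of the general idea (not the paper's algorithm; the linear projection and the slack parameter are assumptions): periodically compare a job's observed progress in its virtual machine against the rate needed to meet its deadline, and flag it for optimization when the projected finish time slips past the deadline.

        import time

        def behind_schedule(fraction_done, start, deadline, now=None, slack=60.0):
            """True if the job's naively projected finish exceeds its deadline minus slack (s)."""
            now = time.time() if now is None else now
            if fraction_done <= 0.0:
                return True                               # no progress yet: treat as at risk
            projected_finish = start + (now - start) / fraction_done  # linear projection
            return projected_finish > deadline - slack

        # Hypothetical use: a job 30% done after 20 minutes with a 1-hour deadline
        # projects a 4000 s finish, so it is flagged as behind schedule.
        print(behind_schedule(0.30, start=0.0, deadline=3600.0, now=1200.0))  # True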

  5. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA, at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  6. Instruction-Level Characterization of Scientific Computing Applications Using Hardware Performance Counters

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y.; Cameron, K.W.

    1998-11-24

    Workload characterization has been proven an essential tool to architecture design and performance evaluation in both scientific and commercial computing areas. Traditional workload characterization techniques include FLOPS rate, cache miss ratios, CPI (cycles per instruction or IPC, instructions per cycle) etc. With the complexity of sophisticated modern superscalar microprocessors, these traditional characterization techniques are not powerful enough to pinpoint the performance bottleneck of an application on a specific microprocessor. They are also incapable of immediately demonstrating the potential performance benefit of any architectural or functional improvement in a new processor design. To solve these problems, many people rely on simulators, which have substantial constraints especially on large-scale scientific computing applications. This paper presents a new technique of characterizing applications at the instruction level using hardware performance counters. It has the advantage of collecting instruction-level characteristics in a few runs virtually without overhead or slowdown. A variety of instruction counts can be utilized to calculate some average abstract workload parameters corresponding to microprocessor pipelines or functional units. Based on the microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. In particular, the analysis results can provide some insight to the problem that only a small percentage of processor peak performance can be achieved even for many very cache-friendly codes. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvement for certain workloads. Eventually, these abstract parameters can lead to the creation of an analytical microprocessor pipeline model and memory hierarchy model.
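
    A minimal sketch of how such abstract workload parameters can be derived from raw counter values (the counter names and the sample numbers are assumed for illustration; this is not the paper's tool or data).

        def derive_parameters(counters):
            """counters: dict of raw event counts collected for one application run."""
            cycles = counters["cycles"]
            instrs = counters["instructions"]
            flops = counters.get("fp_ops", 0)
            loads = counters.get("loads", 0)
            return {
                "CPI": cycles / instrs,           # cycles per instruction
                "IPC": instrs / cycles,           # instructions per cycle
                "fp_fraction": flops / instrs,    # share of floating-point work
                "load_fraction": loads / instrs,  # share of memory loads
            }

        # Hypothetical counts for one run.
        sample = {"cycles": 9.6e9, "instructions": 6.4e9, "fp_ops": 1.1e9, "loads": 2.0e9}
        print(derive_parameters(sample))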

  7. Testing framework for GRASS GIS: ensuring reproducibility of scientific geospatial computing

    Science.gov (United States)

    Petras, V.; Gebbert, S.

    2014-12-01

    GRASS GIS, a free and open source GIS, is used by many scientists directly or through other projects such as R or QGIS to perform geoprocessing tasks. A large number of scientific geospatial computations therefore depend on the quality and correct functionality of GRASS GIS, and automated functionality testing is necessary to ensure software reliability. Here we present a testing framework for GRASS GIS which addresses the different needs of GRASS GIS and geospatial software in general. It allows testing of GRASS tools (referred to as GRASS modules) and examination of their outputs, including large raster and vector maps as well as temporal datasets. Furthermore, it enables testing of all levels of the GRASS GIS architecture, including the C and Python application programming interfaces and GRASS modules invoked as subprocesses. Since GRASS GIS is used as a platform for the development of geospatial algorithms and models, the testing framework allows testing not only of GRASS GIS core functionality but also of tools developed by scientists as part of their research. Using the testing framework, we can test GRASS GIS and related tools automatically and repeatedly and thus detect errors caused by code changes and new developments. Tools and code are then easier to maintain, which helps preserve the reproducibility of scientific results over time. As with the open source code, the test results are publicly accessible, so that all current and potential users can see them. The usage of the testing framework is presented using the example of a test suite for the r.slope.aspect module, a tool for computing terrain slope, aspect, curvatures and other terrain characteristics.
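
    A small test in the style of the framework described above (class and assertion names are assumed from the GRASS gunittest API as documented; the 'elevation' raster name is assumed to come from a sample dataset): it runs r.slope.aspect and checks that the computed slope stays within 0-90 degrees.

        # Sketch of a gunittest-style test case for r.slope.aspect.
        from grass.gunittest.case import TestCase
        from grass.gunittest.main import test

        class TestSlopeAspect(TestCase):
            def test_slope_range(self):
                # Run the module on a sample elevation raster, producing 'slope'.
                self.assertModule("r.slope.aspect", elevation="elevation", slope="slope")
                # Slope in degrees must lie between 0 and 90.
                self.assertRasterMinMax(map="slope", refmin=0, refmax=90)

        if __name__ == "__main__":
            test()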

  8. An Advanced Survey on Cloud Computing and State-of-the-art Research Issues

    Directory of Open Access Journals (Sweden)

    Mohiuddin Ahmed

    2012-01-01

    Full Text Available Cloud computing is considered one of the emerging arenas of computer science in recent times. It provides excellent facilities to business entrepreneurs through flexible infrastructure. Although cloud computing is facilitating the Information Technology industry, research and development in this arena has yet to become satisfactory. Our contribution in this paper is an advanced survey focusing on the cloud computing concept and the most advanced research issues. This paper provides a better understanding of cloud computing and identifies important research issues in this burgeoning area of computer science.

  9. Current Advances in the Computational Simulation of the Formation of Low-Mass Stars

    Energy Technology Data Exchange (ETDEWEB)

    Klein, R I; Inutsuka, S; Padoan, P; Tomisaka, K

    2005-10-24

    Developing a theory of low-mass star formation (≈0.1 to 3 M⊙) remains one of the most elusive and important goals of theoretical astrophysics. The star-formation process is the outcome of the complex dynamics of interstellar gas involving non-linear interactions of turbulence, gravity, magnetic field and radiation. The evolution of protostellar condensations, from the moment they are assembled by turbulent flows to the time they reach stellar densities, spans an enormous range of scales, resulting in a major computational challenge for simulations. Since the previous Protostars and Planets conference, dramatic advances in the development of new numerical algorithmic techniques have been successfully implemented on large scale parallel supercomputers. Among such techniques, Adaptive Mesh Refinement and Smooth Particle Hydrodynamics have provided frameworks to simulate the process of low-mass star formation with a very large dynamic range. It is now feasible to explore the turbulent fragmentation of molecular clouds and the gravitational collapse of cores into stars self-consistently within the same calculation. The increased sophistication of these powerful methods comes with substantial caveats associated with the use of the techniques and the interpretation of the numerical results. In this review, we examine what has been accomplished in the field and present a critique of both numerical methods and scientific results. We stress that computational simulations should obey the available observational constraints and demonstrate numerical convergence. Failing this, results of large scale simulations do not advance our understanding of low-mass star formation.

  10. Nationwide Buildings Energy Research enabled through an integrated Data Intensive Scientific Workflow and Advanced Analysis Environment

    Energy Technology Data Exchange (ETDEWEB)

    Kleese van Dam, Kerstin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lansing, Carina S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elsethagen, Todd O. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hathaway, John E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Guillen, Zoe C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Dirks, James A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Skorski, Daniel C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Stephan, Eric G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gorrissen, Willy J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gorton, Ian [Carnegie Mellon Univ., Pittsburgh, PA (United States); Liu, Yan [Concordia Univ., Montreal, QC (Canada)

    2014-01-28

    Modern workflow systems enable scientists to run ensemble simulations at unprecedented scales and levels of complexity, allowing them to study system sizes that were previously impossible to achieve due to the inherent resource requirements of the modeling work. However, as a result of these new capabilities, the science teams suddenly face unprecedented data volumes that they are unable to analyze with their existing tools and methodologies in a timely fashion. In this paper we describe the ongoing development work to create an integrated data intensive scientific workflow and analysis environment that offers researchers the ability to easily create and execute complex simulation studies and provides them with different scalable methods to analyze the resulting data volumes. The integration of simulation and analysis environments is not only a question of ease of use; it also supports fundamental functions in the correlated analysis of simulation input, execution details and derived results for multi-variant, complex studies. To this end the team extended and integrated the existing capabilities of the Velo data management and analysis infrastructure and the MeDICi data intensive workflow system with RHIPE, the R-for-Hadoop version of the well-known statistics package, and developed a new visual analytics interface for result exploitation by multi-domain users. The capabilities of the new environment are demonstrated on a use case focused on the Pacific Northwest National Laboratory (PNNL) building energy team, showing how they were able to take their previously local-scale simulations to a nationwide level by utilizing data intensive computing techniques not only for their modeling work, but also for the subsequent analysis of their modeling results. As part of the PNNL research initiative PRIMA (Platform for Regional Integrated Modeling and Analysis) the team performed an initial three-year study of building energy demands for the US Eastern

  11. Multithreaded transactions in scientific computing. The Growth06_v2 program

    Science.gov (United States)

    Daniluk, Andrzej

    2009-07-01

    efficient than the previous ones [3]. Summary of revisions: The design pattern (see Fig. 2 of Ref. [3]) has been modified according to the scheme shown in Fig. 1. The graphical user interface (GUI) of the program has been reconstructed. Fig. 2 presents a hybrid diagram of a GUI that shows how onscreen objects connect to use cases. The program has been compiled with English/USA regional and language options. Note: The figures mentioned above are contained in the program distribution file. Unusual features: The program is distributed in the form of the source project GROWTH06_v2.dpr with associated files, and should be compiled using Borland Delphi compilers version 6 or later (including Borland Developer Studio 2006 and CodeGear compilers for Delphi). Additional comments: Two figures are included in the program distribution file. These are captioned "Static classes model for Transaction design pattern" and "A model of a window that shows how onscreen objects connect to use cases". Running time: The typical running time is machine and user-parameter dependent. References: [1] A. Daniluk, Comput. Phys. Comm. 170 (2005) 265. [2] W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes in Pascal: The Art of Scientific Computing, first ed., Cambridge University Press, 1989. [3] M. Brzuszek, A. Daniluk, Comput. Phys. Comm. 175 (2006) 678.

  12. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-11

    Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advanced computational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR.

  13. Modern Trends Of Computation, Simulation, and Communication, And Their Impacts On The Progress Of Scientific And Engineering Research, Development, And Education

    International Nuclear Information System (INIS)

    A short report on the modern trends of computation, simulation, and communication in the 1990s is presented, along with their impacts on the progress of scientific and engineering research, development, and education. A full description of this giant issue is certainly a "mission impossible" for the author. Nevertheless, it is the author's hope that it will at least give an overall view of what is going on in this very dynamic field in the advanced countries. After thinking "globally" through reading this report, we should then decide on what and how to act "locally" to respond to these global trends. The main source of information reported here was the computational science and engineering journals and books issued during the 1990s, as listed in the references below.

  14. Computer architectures for computational physics work done by Computational Research and Technology Branch and Advanced Computational Concepts Group

    Science.gov (United States)

    1985-01-01

    Slides are reproduced that describe the importance of having high performance number crunching and graphics capability. They also indicate the types of research and development underway at Ames Research Center to ensure that, in the near term, Ames is a smart buyer and user, and that, in the long term, Ames knows the best possible solutions for number crunching and graphics needs. The drivers for this research are real computational physics applications of interest to Ames and NASA. They are concerned with how to map the applications, and how to maximize the physics learned from the results of the calculations. The computer graphics activities are aimed at getting maximum information from the three-dimensional calculations by using the real-time manipulation of three-dimensional data on the Silicon Graphics workstation. Work is underway on new algorithms that will permit the display of experimental results that are sparse and random, the same way that the dense and regular computed results are displayed.

  15. Advances in Numerical Methods

    CERN Document Server

    Mastorakis, Nikos E

    2009-01-01

    Features contributions focused on significant aspects of current numerical methods and computational mathematics. The book contains chapters presenting advanced methods and variations on known techniques that can solve difficult scientific problems efficiently.

  16. A document-driven method for certifying scientific computing software for use in nuclear safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Smith, W. Spencer; Koothoor, Mimitha [Computing and Software Department, McMaster University, Hamilton (Canada)

    2016-04-15

    This paper presents a documentation and development method to facilitate the certification of scientific computing software used in the safety analysis of nuclear facilities. To study the problems faced during quality assurance and certification activities, a case study was performed on legacy software used for the thermal analysis of a fuel pin in a nuclear reactor. Although no errors were uncovered in the code, 27 issues of incompleteness and inconsistency were found in the documentation. This work proposes that software documentation follow a rational process, which includes a software requirements specification following a template that is reusable, maintainable, and understandable. For developing the design and implementation, this paper suggests literate programming as an alternative to traditional structured programming. Literate programming allows numerical algorithms and code to be documented together in what is termed the literate programmer's manual. This manual is developed with explicit traceability to the software requirements specification. The traceability between the theory, numerical algorithms, and implementation facilitates achieving completeness and consistency, and simplifies the process of verification and the associated certification.
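
    To give a flavor of the literate programmer's manual idea, the toy fragment below keeps code, numerical model, and requirement traceability together; the requirement IDs and the fuel-pin relation are illustrative stand-ins, not material from the paper's case study.

```python
# Toy fragment in the spirit of a literate programmer's manual:
# the documentation carries explicit traceability to (hypothetical)
# requirements R1/R2 of a software requirements specification (SRS).
from math import pi


def fuel_pin_centre_temperature(q_lin, k_fuel, t_surface):
    """Steady-state centreline temperature of a cylindrical fuel pin.

    Traceability:
      R1 (SRS): compute T_centre from the linear heat rate q' [W/m].
      R2 (SRS): assume a constant fuel thermal conductivity k [W/(m*K)].

    Numerical model (classical 1-D radial heat conduction solution):
      T_centre = T_surface + q' / (4 * pi * k)
    """
    return t_surface + q_lin / (4.0 * pi * k_fuel)


# illustrative check: q' = 20 kW/m, k = 3 W/(m K), surface at 600 K
print(fuel_pin_centre_temperature(20e3, 3.0, 600.0))  # ~1130.5 K
```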

  17. III - Template Metaprogramming for massively parallel scientific computing - Templates for Iteration; Thread-level Parallelism

    CERN Document Server

    CERN. Geneva

    2016-01-01

    Large-scale scientific computing raises questions on different levels, ranging from the formulation of the problems to the choice of the best algorithms and their implementation for a specific platform. There are similarities in these different topics that can be exploited by modern-style C++ template metaprogramming techniques to produce readable, maintainable and generic code. Traditional low-level code tends to be fast but platform-dependent, and it obfuscates the meaning of the algorithm. On the other hand, the object-oriented approach is nice to read, but may come with an inherent performance penalty. These lectures aim to present the basics of the Expression Template (ET) idiom, which allows us to keep the object-oriented approach without sacrificing performance. We will in particular show how to enhance ET to include SIMD vectorization. We will then introduce techniques for abstracting iteration, and introduce thread-level parallelism for use in heavy data-centric loads. We will show how to apply these methods i...
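
    The Expression Template idiom is native to C++, but its core mechanism, operator overloading that builds a lightweight expression tree which is then evaluated lazily in a single loop without intermediate temporaries, can be sketched in any language; below is a purely illustrative Python rendering (all names are ours, not from the lectures).

```python
# Illustrative sketch of the deferred-evaluation idea behind expression
# templates: a + b + c builds an expression tree; evaluation happens
# element by element in one loop, with no temporary vectors.
import array


class Expr:
    def __add__(self, other):
        return Add(self, other)


class Vec(Expr):
    def __init__(self, data):
        self.data = array.array("d", data)

    def __getitem__(self, i):
        return self.data[i]

    def __len__(self):
        return len(self.data)

    def assign(self, expr):
        # single loop: no intermediate vectors are materialized
        for i in range(len(self)):
            self.data[i] = expr[i]


class Add(Expr):
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __getitem__(self, i):
        return self.a[i] + self.b[i]

    def __len__(self):
        return len(self.a)


a, b, c = Vec([1, 2]), Vec([3, 4]), Vec([5, 6])
r = Vec([0, 0])
r.assign(a + b + c)   # builds Add(Add(a, b), c), then evaluates in one loop
print(list(r.data))   # [9.0, 12.0]
```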

  18. II - Template Metaprogramming for Massively Parallel Scientific Computing - Vectorization with Expression Templates

    CERN Document Server

    CERN. Geneva

    2016-01-01

    Large-scale scientific computing raises questions on different levels, ranging from the formulation of the problems to the choice of the best algorithms and their implementation for a specific platform. There are similarities in these different topics that can be exploited by modern-style C++ template metaprogramming techniques to produce readable, maintainable and generic code. Traditional low-level code tends to be fast but platform-dependent, and it obfuscates the meaning of the algorithm. On the other hand, the object-oriented approach is nice to read, but may come with an inherent performance penalty. These lectures aim to present the basics of the Expression Template (ET) idiom, which allows us to keep the object-oriented approach without sacrificing performance. We will in particular show how to enhance ET to include SIMD vectorization. We will then introduce techniques for abstracting iteration, and introduce thread-level parallelism for use in heavy data-centric loads. We will show how to apply these methods i...

  19. The Advance of Computing from the Ground to the Cloud

    Science.gov (United States)

    Breeding, Marshall

    2009-01-01

    A trend toward the abstraction of computing platforms that has been developing in the broader IT arena over the last few years is just beginning to make inroads into the library technology scene. Cloud computing offers for libraries many interesting possibilities that may help reduce technology costs and increase capacity, reliability, and…

  20. Advances in soft computing, intelligent robotics and control

    CERN Document Server

    Fullér, Robert

    2014-01-01

    Soft computing, intelligent robotics and control are at the core of contemporary engineering interest. Essential characteristics of soft computing methods are the ability to handle vague information, to apply human-like reasoning, their learning capability, and ease of application. Soft computing techniques are widely applied in the control of dynamic systems, including mobile robots. The present volume is a collection of 20 chapters written by respected experts in the fields, addressing various theoretical and practical aspects of soft computing, intelligent robotics and control. The first part of the book deals with issues of intelligent robotics, including robust fixed point transformation design, experimental verification of the input-output feedback linearization of a differentially driven mobile robot, and applying kinematic synthesis to micro-electro-mechanical systems design. The second part of the book is devoted to fundamental aspects of soft computing. This includes practical aspects of fuzzy rule ...

  1. Advancing Scientific Reasoning in Upper Elementary Classrooms: Direct Instruction Versus Task Structuring

    NARCIS (Netherlands)

    Lazonder, A.W.; Wiskerke-Drost, Sjanou

    2015-01-01

    Several studies found that direct instruction and task structuring can effectively promote children’s ability to design unconfounded experiments. The present study examined whether the impact of these interventions extends to other scientific reasoning skills by comparing the inquiry activities of 5

  2. 2015 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  3. 2014 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  4. Tutorial on Computing: Technological Advances, Social Implications, Ethical and Legal Issues

    OpenAIRE

    Debnath, Narayan

    2012-01-01

    Computing and information technology have made significant advances. The use of computing and technology is a major aspect of our lives, and this use will only continue to increase in our lifetime. Electronic digital computers and high performance communication networks are central to contemporary information technology. The computing applications in a wide range of areas including business, communications, medical research, transportation, entertainments, and education are transforming lo...

  5. Advances in Physarum machines sensing and computing with Slime mould

    CERN Document Server

    2016-01-01

    This book is devoted to the slime mould Physarum polycephalum, a large single cell capable of distributed sensing, concurrent information processing, parallel computation and decentralized actuation. The ease of culturing and experimenting with Physarum makes this slime mould an ideal substrate for real-world implementations of unconventional sensing and computing devices. The book is a treatise of theoretical and experimental laboratory studies on the sensing and computing properties of slime mould, and on the development of mathematical and logical theories of Physarum behavior. It is shown how to make logical gates and circuits, electronic devices (memristors, diodes, transistors, wires, chemical and tactile sensors) with the slime mould. The book demonstrates how to modify properties of Physarum computing circuits with functional nano-particles and polymers, to interface the slime mould with field-programmable arrays, and to use Physarum as a controller of microbial fuel cells. A unique multi-agent model...

  6. Building an Advanced Computing Environment with SAN Support

    Institute of Scientific and Technical Information of China (English)

    Dajian YANG; Mei MA; et al.

    2001-01-01

    The current computing environment of our Computing Center at IHEP uses a SAS (Server Attached Storage) architecture, attaching all the storage devices directly to the machines. This kind of storage strategy cannot properly meet the requirements of our BEPC II/BESIII project. Thus we designed and implemented a SAN-based computing environment, which consists of several computing farms, a three-level storage pool, a set of storage management software, and a web-based data management system. The features of our system include cross-platform data sharing, fast data access, high scalability, convenient storage management and data management.

  7. Center for Advanced Energy Studies: Computer Assisted Virtual Environment (CAVE)

    Data.gov (United States)

    Federal Laboratory Consortium — The laboratory contains a four-walled 3D computer assisted virtual environment - or CAVE TM — that allows scientists and engineers to literally walk into their data...

  8. Advances in computational design and analysis of airbreathing propulsion systems

    Science.gov (United States)

    Klineberg, John M.

    1989-01-01

    The development of commercial and military aircraft depends, to a large extent, on engine manufacturers being able to achieve significant increases in propulsion capability through improved component aerodynamics, materials, and structures. The recent history of propulsion has been marked by efforts to develop computational techniques that can speed up the propulsion design process and produce superior designs. The availability of powerful supercomputers, such as the NASA Numerical Aerodynamic Simulator, and the potential for even higher performance offered by parallel computer architectures, have opened the door to the use of multi-dimensional simulations to study complex physical phenomena in propulsion systems that have previously defied analysis or experimental observation. An overview of several NASA Lewis research efforts is provided that are contributing toward the long-range goal of a numerical test-cell for the integrated, multidisciplinary design, analysis, and optimization of propulsion systems. Specific examples in Internal Computational Fluid Mechanics, Computational Structural Mechanics, Computational Materials Science, and High Performance Computing are cited and described in terms of current capabilities, technical challenges, and future research directions.

  9. Advancements in Violin-Related Human-Computer Interaction

    DEFF Research Database (Denmark)

    Overholt, Daniel

    2014-01-01

    of human intelligence and emotion is at the core of the Musical Interface Technology Design Space, MITDS. This is a framework that endeavors to retain and enhance such traits of traditional instruments in the design of interactive live performance interfaces. Utilizing the MITDS, advanced Human...

  10. Advanced Modulation Techniques for High-Performance Computing Optical Interconnects

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko;

    2013-01-01

    optical shared memory supercomputer interconnect system switch fabric. In particular, we investigate the resilience of the aforementioned advanced modulation formats to the nonlinearities of semiconductor optical amplifiers, used as ON/OFF gates in the supercomputer optical switch fabric under study. In...

  11. New Sensors for In-Pile Temperature Detection at the Advanced Test Reactor National Scientific User Facility

    Energy Technology Data Exchange (ETDEWEB)

    J. L. Rempe; D. L. Knudson; J. E. Daw; K. G. Condie; S. Curtis Wilkins

    2009-09-01

    The Department of Energy (DOE) designated the Advanced Test Reactor (ATR) as a National Scientific User Facility (NSUF) in April 2007 to support U.S. leadership in nuclear science and technology. As a user facility, the ATR is supporting new users from universities, laboratories, and industry, as they conduct basic and applied nuclear research and development to advance the nation’s energy security needs. A key component of the ATR NSUF effort is to develop and evaluate new in-pile instrumentation techniques that are capable of providing measurements of key parameters during irradiation. This paper describes the strategy for determining what instrumentation is needed and the program for developing new or enhanced sensors that can address these needs. Accomplishments from this program are illustrated by describing new sensors now available and under development for in-pile detection of temperature at various irradiation locations in the ATR.

  12. New Sensors for In-Pile Temperature Detection at the Advanced Test Reactor National Scientific User Facility

    International Nuclear Information System (INIS)

    The Department of Energy (DOE) designated the Advanced Test Reactor (ATR) as a National Scientific User Facility (NSUF) in April 2007 to support U.S. leadership in nuclear science and technology. As a user facility, the ATR is supporting new users from universities, laboratories, and industry, as they conduct basic and applied nuclear research and development to advance the nation's energy security needs. A key component of the ATR NSUF effort is to develop and evaluate new in-pile instrumentation techniques that are capable of providing measurements of key parameters during irradiation. This paper describes the strategy for determining what instrumentation is needed and the program for developing new or enhanced sensors that can address these needs. Accomplishments from this program are illustrated by describing new sensors now available and under development for in-pile detection of temperature at various irradiation locations in the ATR.

  13. The use of advanced computer simulation in structural design

    Energy Technology Data Exchange (ETDEWEB)

    Field, C.J.; Mole, A. [Arup, San Fransisco, CA (United States); Arkinstall, M. [Arup, Sydney (Australia)

    2005-07-01

    The benefits that can be gained from the application of advanced numerical simulation in building design were discussed. A review of current practices in structural engineering was presented along with an illustration of a range of international project case studies. Structural engineers use analytical methods to evaluate both static and dynamic loads. Structural design is prescribed by a range of building codes, depending on location, building type and loading, but buildings often do not fit well within the codes, particularly if one wants to take advantage of new technologies and developments in design that are not covered by the code. Advanced simulation refers to the application of mathematical modeling to complex problems, allowing a wider range of building types and conditions to be designed reliably than standard practices permit. Advanced simulation is used to address virtual testing and prototyping, verifying innovative design ideas, forensic engineering, and design optimization. The benefits of advanced simulation include enhanced creativity, improved performance, cost savings, risk management, sustainable design solutions, and better communication. The following five case studies illustrated the value gained by using advanced simulation as an integral part of the design process: the earthquake-resistant Maison Hermes in Tokyo; the seismic-resistant braces known as the Unbonded Brace for use in the United States; a simulation of the existing Disney Museum to evaluate its capacity to resist earthquakes; a simulation of the MIT Brain and Cognitive Science Project to evaluate the effect of different foundation types on the vibration entering the building; and the Beijing Aquatic Center, whose design was streamlined by optimized structural analysis. It was suggested that industry should encourage the transfer of technology from other professions and should try to collaborate towards a global building model to construct buildings in a more efficient manner. 7 refs

  14. Lost in Translation: The Gap in Scientific Advancements and Clinical Application

    OpenAIRE

    Fernandez-Moure, Joseph S.

    2016-01-01

    The evolution of medicine and medical technology hinges on the successful translation of basic science research from the bench to clinical implementation at the bedside. Out of the increasing need to facilitate the transfer of scientific knowledge to patients, translational research has emerged. Significant leaps in improving global health, such as antibiotics, vaccinations, and cancer therapies, have all seen successes under this paradigm, yet today, it has become increasingly difficult to r...

  15. Lost in Translation: The Gap in Scientific Advancements and Clinical Application

    OpenAIRE

    Joseph eFernandez-Moure

    2016-01-01

    The evolution of medicine and medical technology hinges on the successful translation of basic science research from the bench to clinical implementation at the bedside. Born out of the increasing need to facilitate the transfer of scientific knowledge to patients, translational research has emerged. Significant leaps in improving global health such as antibiotics, vaccinations, and cancer therapies have all seen successes under this paradigm yet today it has become increasingly difficult to ...

  16. A Computationally Based Approach to Homogenizing Advanced Alloys

    Energy Technology Data Exchange (ETDEWEB)

    Jablonski, P D; Cowen, C J

    2011-02-27

    We have developed a computationally based approach to optimizing the homogenization heat treatment of complex alloys. The Scheil module within the Thermo-Calc software is used to predict the as-cast segregation present within alloys, and DICTRA (Diffusion Controlled TRAnsformations) is used to model the homogenization kinetics as a function of time, temperature and microstructural scale. We discuss this approach as it is applied to Ni-based superalloys as well as the computationally more complex case of alloys that solidify with more than one matrix phase as a result of segregation, as is typically observed in martensitic steels. With these alloys it is doubly important to homogenize correctly, especially at the laboratory scale, since they are austenitic at high temperature and thus constituent elements will diffuse slowly. The computationally designed heat treatment and the subsequent verification on real castings are presented.
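
    A back-of-the-envelope preview of the kinetics that DICTRA models rigorously: the time to erase segregation over a microstructural length L scales as L²/D, with an Arrhenius diffusivity; all material parameters below are hypothetical, not values from the paper.

```python
# Characteristic homogenization time t ~ L^2 / D with Arrhenius D(T).
# The point of the sketch is the strong sensitivity to T and L.
from math import exp

R = 8.314  # gas constant, J/(mol K)


def diffusivity(d0, q, temperature):
    """Arrhenius diffusivity D = D0 * exp(-Q / (R T))."""
    return d0 * exp(-q / (R * temperature))


def homogenization_time(length_scale, d0, q, temperature):
    """Characteristic time (s) to erase segregation over length_scale (m)."""
    return length_scale ** 2 / diffusivity(d0, q, temperature)


# hypothetical solute in an Ni matrix: D0 = 1e-4 m^2/s, Q = 280 kJ/mol,
# secondary dendrite arm spacing of 50 micrometres, anneal at 1473 K
t = homogenization_time(50e-6, 1e-4, 280e3, 1473.0)
print(f"~{t / 3600:.0f} h")  # on the order of tens of hours
```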

  17. Advances in a computer aided bilateral manipulator system

    International Nuclear Information System (INIS)

    This paper relates developments and experiments carried out at Saclay within the framework of the ARA^b program by the computer aided teleoperation (CAT) group. The goal is to improve the efficiency and operational safety of remote operations using computers and sensors. These make it possible to substitute for the operator(s) in time sharing and/or in parallel, and to augment the amount and/or quality of sensory feedback. After describing the test facility at Saclay, the developments of the various participants are described. The results of this work will be commercially available with the MA23M and the future MAE 200 from La Calhene (France, UK, Japan).

  18. Recent advances in computer modelling of granular systems

    OpenAIRE

    Jullien, R.; Meakin, P.; Pavlovitch, A.

    1993-01-01

    We present simple computer algorithms able to build random packings of spheres using the ballistic deposition model, and we show how they can be used to investigate several size-segregation phenomena occurring in granular systems: 1) penetration of a small sphere into a packing of large ones, 2) size segregation during the formation of a heap or when pouring a silo, 3) size segregation by shaking. In the last case, the computer simulation provides a very simple geometrical explanation of the phenome...
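
    As a flavor of the kind of algorithm described, here is a minimal (1+1)-dimensional ballistic deposition sketch in Python (a standard textbook variant; the paper's three-dimensional sphere-packing algorithms are more involved).

```python
# Minimal (1+1)-D ballistic deposition: particles fall along random
# columns and stick at the level of the tallest nearest neighbour,
# or one above their own column, whichever is higher.
import random


def ballistic_deposition(width, n_particles, seed=0):
    random.seed(seed)
    height = [0] * width
    for _ in range(n_particles):
        i = random.randrange(width)
        left = height[i - 1] if i > 0 else 0
        right = height[i + 1] if i < width - 1 else 0
        height[i] = max(left, right, height[i] + 1)
    return height


print(ballistic_deposition(20, 200))  # final surface profile
```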

  19. Advanced Micro Optics Characterization Using Computer Generated Holograms

    Energy Technology Data Exchange (ETDEWEB)

    Arnold, S.; Maxey, L.C.; Moreshead, W.; Nogues, J.L.

    1998-11-01

    This CRADA has enabled the validation of Computer Generated Holograms (CGH) testing for certain classes of micro optics. It has also identified certain issues that are significant when considering the use of CGHs in this application. Both contributions are advantageous in the pursuit of better manufacturing and testing technologies for these important optical components.

  20. Computing support for advanced medical data analysis and imaging

    CERN Document Server

    Wiślicki, W; Białas, P; Czerwiński, E; Kapłon, Ł; Kochanowski, A; Korcyl, G; Kowal, J; Kowalski, P; Kozik, T; Krzemień, W; Molenda, M; Moskal, P; Niedźwiecki, S; Pałka, M; Pawlik, M; Raczyński, L; Rudy, Z; Salabura, P; Sharma, N G; Silarski, M; Słomski, A; Smyrski, J; Strzelecki, A; Wieczorek, A; Zieliński, M; Zoń, N

    2014-01-01

    We discuss computing issues for the data analysis and image reconstruction of the PET-TOF medical scanner and other medical scanning devices producing large volumes of data. A service architecture based on the grid and cloud concepts for distributed processing is proposed and critically discussed.

  1. Advanced Simulation and Computing Co-Design Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Ang, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hoang, Thuc T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); McPherson, Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Neely, Rob [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    This ASC Co-design Strategy lays out the full continuum and components of the co-design process, based on what we have experienced thus far and what we wish to do more in the future to meet the program’s mission of providing high performance computing (HPC) and simulation capabilities for NNSA to carry out its stockpile stewardship responsibility.

  2. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohr, Bernd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pasccci, Valerio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunst, Holger [Dresden Univ. of Technology (Germany)

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  3. ADVANCED FILE BASED SECURITY MECHANISM IN CLOUD COMPUTING: A REVIEW

    Directory of Open Access Journals (Sweden)

    Nisha Nisha

    2015-04-01

    Full Text Available Cloud computing is a broad solution that delivers IT as a service. Cloud computing uses the internet and central remote servers to support different data and applications. It is an internet-based technology that permits users to access their personal files from any computer with internet access. The flexibility of cloud computing is a function of the allocation of resources on request. It abstracts all the complexities of the network, which may include everything from cables, routers, servers and data centers to all other such devices. Cloud-based systems save data of multiple organizations on shared hardware systems. In this paper we attempt to secure data from unauthorized access. The method of data security is the AES algorithm, which provides security by encrypting the given data. AES is based on a design principle known as a substitution-permutation network and is fast in both software and hardware. The operations used in AES are simple enough to be implemented using cheap processors and a minimum amount of memory; the encrypted data can then be decrypted only by an authorized person using his private key.
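
    The review names AES but no concrete implementation; the sketch below uses AES-GCM from the Python "cryptography" package (our choice of library and mode, not the paper's) to show symmetric encryption and decryption of a small file payload. In the hybrid scheme the review alludes to, the random AES key would additionally be wrapped with the recipient's public key so that only the holder of the private key can recover it.

```python
# Minimal AES-GCM sketch for protecting a cloud-stored file payload.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # symmetric data key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # never reuse a nonce per key

ciphertext = aesgcm.encrypt(nonce, b"cloud file contents", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"cloud file contents"
```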

  4. Workshop on Advancing Experimental Rock Deformation Research: Scientific and Technical Needs

    Energy Technology Data Exchange (ETDEWEB)

    Tullis, Terry E. [Brown Univ., Providence, RI (United States)

    2016-05-31

    A workshop for the experimental rock deformation community was held in Boston on August 16-19, 2012, following some similar but smaller preliminary meetings. It was sponsored primarily by the NSF, with additional support from the DOE, the SCEC, and in-kind support by the USGS. A white paper summarizing the active discussions at the workshop and the outcomes is available (https://brownbox.brown.edu/download.php?hash=0b854d11). Those attending included practitioners of experimental rock deformation, i.e., those who conduct laboratory experiments, as well as users of the data provided by practitioners, namely field geologists, seismologists, geodynamicists, earthquake modelers, and scientists from the oil and gas industry. A considerable fraction of those attending were early-career scientists. The discussion initially focused on identifying the most important unsolved scientific problems in all of the research areas represented by the users that experiments would help solve. This initial session was followed by wide-ranging discussions of the most critical problems faced by practitioners, particularly by early-career scientists. The discussion also focused on the need for designing and building the next generation of experimental rock deformation equipment required to meet the identified scientific challenges. The workshop participants concluded that creation of an experimental rock deformation community organization is needed to address many of the scientific, technical, and demographic problems faced by this community. A decision was made to hold an organizational meeting of this new organization in San Francisco on December 1-2, 2012, just prior to the Fall Meeting of the AGU. The community has decided to name this new organization “Deformation Experimentation at the Frontier Of Rock and Mineral research” or DEFORM. As of May 1, 2013, 64 institutions have asked to be members of DEFORM.

  5. Final Scientific Report: A Scalable Development Environment for Peta-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Karbach, Carsten; Frings, Wolfgang

    2013-02-20

    This document is the final scientific report of the project DE-SC000120 (A Scalable Development Environment for Peta-Scale Computing). The objective of this project is to extend the Parallel Tools Platform (PTP) so that it can be applied to peta-scale systems. PTP is an integrated development environment for parallel applications comprising code analysis, performance tuning, parallel debugging and system monitoring. The contribution of the Juelich Supercomputing Centre (JSC) aims to provide a scalable solution for system monitoring of supercomputers. This includes the development of a new communication protocol for exchanging status data between the target remote system and the client running PTP; the communication has to work even under high latency. PTP needs to be implemented robustly and should hide the complexity of the supercomputer's architecture in order to provide transparent access to various remote systems via a uniform user interface. This simplifies the porting of applications to different systems, because PTP functions as an abstraction layer between the parallel application developer and the compute resources. The common requirement for all PTP components is that they have to interact with the remote supercomputer. For example, applications are built remotely, performance tools are attached to job submissions, and their output data resides on the remote system. Status data has to be collected by evaluating the outputs of the remote job scheduler, and the parallel debugger needs to control an application executed on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real time. The client-server architecture of the established monitoring application LLview, developed by the JSC, can be applied to PTP's system monitoring. LLview provides a well-arranged overview of the supercomputer's current status. A set of statistics, a list of running and queued jobs as well as a node display mapping running jobs to their compute

  6. Toward a common component architecture for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, R; Gannon, D; Geist, A; Katarzyna, K; Kohn, S; McInnes, L; Parker, S; Smolinski, B

    1999-06-09

    This paper describes work in progress to develop a standard for interoperability among high-performance scientific components. This research stems from growing recognition that the scientific community must better manage the complexity of multidisciplinary simulations and better address scalable performance issues on parallel and distributed architectures. Driving forces are the need for fast connections among components that perform numerically intensive work and parallel collective interactions among components that use multiple processes or threads. This paper focuses on the areas we believe are most crucial for such interactions, namely an interface definition language that supports scientific abstractions for specifying component interfaces and a ports connection model for specifying component interactions.
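
    The "ports" connection model the paper describes can be pictured with a toy sketch; the real standard specifies component interfaces through a scientific interface definition language rather than Python classes, and all names below are ours.

```python
# Toy provides/uses ports model (names hypothetical): a component offers
# an interface (a 'provides' port), another declares a dependency on it
# (a 'uses' port), and a framework wires the two together.
from abc import ABC, abstractmethod


class LinearSolverPort(ABC):
    """A 'provides' port: an interface a component offers to others."""
    @abstractmethod
    def solve(self, a, b):
        """Return x such that a * x = b (scalar toy problem)."""


class SolverComponent(LinearSolverPort):
    def solve(self, a, b):
        return b / a


class PhysicsComponent:
    """Declares a 'uses' port that the framework must connect."""
    def __init__(self):
        self.solver = None          # filled in by the framework

    def run(self):
        return self.solver.solve(2.0, 10.0)


# A minimal 'framework' simply connects uses-ports to provides-ports.
physics = PhysicsComponent()
physics.solver = SolverComponent()  # connect the port
print(physics.run())                # 5.0
```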

  7. Integrated Computer Aided Planning and Manufacture of Advanced Technology Jet Engines

    Directory of Open Access Journals (Sweden)

    B. K. Subhas

    1987-10-01

    Full Text Available This paper highlights an attempt at evolving a computer aided manufacturing system on a personal computer. A case study of an advanced technology jet engine component is included to illustrate various outputs from the system. The proposed system could be an alternate solution to sophisticated and expensive CAD/CAM workstations.

  8. Teaching Advanced Concepts in Computer Networks: VNUML-UM Virtualization Tool

    Science.gov (United States)

    Ruiz-Martinez, A.; Pereniguez-Garcia, F.; Marin-Lopez, R.; Ruiz-Martinez, P. M.; Skarmeta-Gomez, A. F.

    2013-01-01

    In the teaching of computer networks the main problem that arises is the high price and limited number of network devices the students can work with in the laboratories. Nowadays, with virtualization we can overcome this limitation. In this paper, we present a methodology that allows students to learn advanced computer network concepts through…

  9. Advances in bio-inspired computing for combinatorial optimization problems

    CERN Document Server

    Pintea, Camelia-Mihaela

    2013-01-01

    'Advances in Bio-inspired Combinatorial Optimization Problems' illustrates several recent bio-inspired efficient algorithms for solving NP-hard problems. Theoretical bio-inspired concepts and models, in particular for agents, ants and virtual robots, are described. Large-scale optimization problems, for example the Generalized Traveling Salesman Problem and the Railway Traveling Salesman Problem, are solved and their results are discussed. Some of the main concepts and models described in this book are: inner rule to guide ant search - a recent model in ant optimization, heterogeneous sensitive a

  10. Do scientific advancements lean on the shoulders of giants? A bibliometric investigation of the Ortega hypothesis.

    Directory of Open Access Journals (Sweden)

    Lutz Bornmann

    Full Text Available BACKGROUND: In contrast to Newton's well-known aphorism that he had been able "to see further only by standing on the shoulders of giants," one attributes to the Spanish philosopher Ortega y Gasset the hypothesis that top-level research cannot be successful without a mass of medium researchers on which the top rests, comparable to an iceberg. METHODOLOGY/PRINCIPAL FINDINGS: The Ortega hypothesis predicts that highly-cited papers and medium-cited (or lowly-cited) papers would refer equally to papers with a medium impact. The Newton hypothesis would be supported if top-level research cites previously highly-cited work more frequently than medium-level research does. Our analysis is based on (i) all articles and proceedings papers which were published in 2003 in the life sciences, health sciences, physical sciences, and social sciences, and (ii) all articles and proceedings papers which were cited within these publications. The results show that highly-cited work in all scientific fields cites previously highly-cited papers more frequently than medium-cited work does. CONCLUSIONS/SIGNIFICANCE: We demonstrate that papers contributing to scientific progress in a field lean to a larger extent on previously important contributions than papers contributing little. These findings support the Newton hypothesis and call into question the Ortega hypothesis (given our usage of citation counts as a proxy for impact).

  11. Scientific Advances in the Diagnosis of Psychopathology: Introduction to the Special Section

    OpenAIRE

    Smith, Gregory T.; Oltmanns, Thomas F.

    2009-01-01

    Work is currently underway on the fifth edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM). Each new version of the manual reflects, in part, the progress in the understanding of psychopathology that has been accomplished since the previous version. This special section gathers summaries of several relevant advances of particular relevance for the Diagnostic and Statistical Manual of Mental Disorders revision process and, more general...

  12. Recent advances in swarm intelligence and evolutionary computation

    CERN Document Server

    2015-01-01

    This timely review volume summarizes the state-of-the-art developments in nature-inspired algorithms and applications with the emphasis on swarm intelligence and bio-inspired computation. Topics include the analysis and overview of swarm intelligence and evolutionary computation, hybrid metaheuristic algorithms, bat algorithm, discrete cuckoo search, firefly algorithm, particle swarm optimization, and harmony search as well as convergent hybridization. Application case studies have focused on the dehydration of fruits and vegetables by the firefly algorithm and goal programming, feature selection by the binary flower pollination algorithm, job shop scheduling, single row facility layout optimization, training of feed-forward neural networks, damage and stiffness identification, synthesis of cross-ambiguity functions by the bat algorithm, web document clustering, truss analysis, water distribution networks, sustainable building designs and others. As a timely review, this book can serve as an ideal reference f...

  13. Towards Advanced Data Analysis by Combining Soft Computing and Statistics

    CERN Document Server

    Gil, María; Sousa, João; Verleysen, Michel

    2013-01-01

    Soft computing, as an engineering science, and statistics, as a classical branch of mathematics, emphasize different aspects of data analysis. Soft computing focuses on obtaining working solutions quickly, accepting approximations and unconventional approaches. Its strength lies in its flexibility to create models that suit the needs arising in applications. In addition, it emphasizes the need for intuitive and interpretable models, which are tolerant to imprecision and uncertainty. Statistics is more rigorous and focuses on establishing objective conclusions based on experimental data by analyzing the possible situations and their (relative) likelihood. It emphasizes the need for mathematical methods and tools to assess solutions and guarantee performance. Combining the two fields enhances the robustness and generalizability of data analysis methods, while preserving the flexibility to solve real-world problems efficiently and intuitively.

  14. Advances in Computer Science and Information Engineering Volume 2

    CERN Document Server

    Lin, Sally

    2012-01-01

    CSIE2012 is an integrated conference concentrating on Computer Science and Information Engineering. In the proceedings, you can learn much about Computer Science and Information Engineering from researchers all around the world. The main role of the proceedings is to serve as an exchange pillar for researchers working in the mentioned fields. In order to meet the high quality standards of Springer's AISC series, the organization committee made the following efforts. Firstly, poor-quality papers were rejected after review by anonymous expert referees. Secondly, periodic review meetings were held with the reviewers about five times to exchange reviewing suggestions. Finally, the conference organizers held several preliminary sessions before the conference. Through the efforts of different people and departments, the conference will be successful and fruitful.

  15. Advances in Computer Science and Information Engineering Volume 1

    CERN Document Server

    Lin, Sally

    2012-01-01

    CSIE2012 is an integrated conference concentrating on Computer Science and Information Engineering. In the proceedings, you can learn much about Computer Science and Information Engineering from researchers all around the world. The main role of the proceedings is to serve as an exchange pillar for researchers working in the mentioned fields. In order to meet the high quality standards of Springer's AISC series, the organization committee made the following efforts. Firstly, poor-quality papers were rejected after review by anonymous expert referees. Secondly, periodic review meetings were held with the reviewers about five times to exchange reviewing suggestions. Finally, the conference organizers held several preliminary sessions before the conference. Through the efforts of different people and departments, the conference will be successful and fruitful.

  16. The ergonomics of computer aided design within advanced manufacturing technology.

    Science.gov (United States)

    John, P A

    1988-03-01

    Many manufacturing companies have now awakened to the significance of computer aided design (CAD), although the majority of them have only been able to purchase computerised draughting systems, of which only a subset produce direct manufacturing data. Such companies are moving steadily towards the concept of computer integrated manufacture (CIM), and this demands that CAD address more than draughting. CAD architects are thus having to rethink the basic specification of such systems, although they typically suffer from an insufficient understanding of the design task and have consequently been working with inadequate specifications. It is at this fundamental level that ergonomics has much to offer, making its contribution by encouraging user-centred design. The discussion considers the relationships between CAD and: the design task; the organisation and people; creativity; and artificial intelligence. It finishes with a summary of the contribution of ergonomics. PMID:15676646

  17. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    OpenAIRE

    Vladimiras Dolgopolovas; Valentina Dagienė; Saulius Minkevičius; Leonidas Sakalauskas

    2015-01-01

    The aim of this study is to present an approach to introducing pipeline and parallel computing, using a model of a multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is an essential topic to be included in the curriculum. At the same ti...
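
    A minimal illustration of the pipeline concept discussed above, using Python generators as stages (the stages themselves are hypothetical placeholders, not taken from the study): each item flows through all stages one at a time, which is the defining property of a software pipeline.

```python
# Three pipeline stages connected in series via generators.
def read_numbers(n):
    for i in range(n):
        yield i                      # stage 1: produce


def square(stream):
    for x in stream:
        yield x * x                  # stage 2: transform


def running_sum(stream):
    total = 0
    for x in stream:
        total += x                   # stage 3: aggregate
        yield total


# items flow through all three stages one at a time
for value in running_sum(square(read_numbers(5))):
    print(value)                     # 0, 1, 5, 14, 30
```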

  18. Foreword: Advanced Science Letters (ASL), Special Issue on Computational Astrophysics

    OpenAIRE

    Mayer, Lucio

    2009-01-01

    Computational astrophysics has undergone unprecedented development over the last decade, becoming a field of its own. The challenge ahead of us will involve increasingly complex multi-scale simulations. These will bridge the gap between areas of astrophysics such as star and planet formation, or star formation and galaxy formation, that have evolved separately until today. A global knowledge of the physics and modeling techniques of astrophysical simulations is thus an important asset for the...

  19. Identification of Enhancers In Human: Advances In Computational Studies

    KAUST Repository

    Kleftogiannis, Dimitrios A.

    2016-03-24

    Roughly 50% of the human genome contains noncoding sequences serving as regulatory elements responsible for the diverse gene expression of the cells in the body. One very well studied category of regulatory elements is enhancers. Enhancers increase the transcriptional output in cells through chromatin remodeling or recruitment of complexes of binding proteins. Identification of enhancers using computational techniques is an interesting area of research, and up to now several approaches have been proposed. However, the current state-of-the-art methods face limitations since the function of enhancers is clarified, but their mechanism of action is not well understood. This PhD thesis presents a bioinformatics/computer science study that focuses on the problem of identifying enhancers in different human cells using computational techniques. The dissertation is decomposed into four main tasks that we present in different chapters. First, since many of the enhancers' functions are not well understood, we study the basic biological models by which enhancers trigger transcriptional functions and we comprehensively survey over 30 bioinformatics approaches for identifying enhancers. Next, we elaborate on the availability of enhancer data as produced by different enhancer identification methods and experimental procedures. In particular, we analyze the advantages and disadvantages of existing solutions and we report obstacles that require further consideration. To mitigate these problems we developed the Database of Integrated Human Enhancers (DENdb), a centralized online repository that archives enhancer data from 16 ENCODE cell-lines. The integrated enhancer data are also combined with many other experimental data that can be used to interpret the enhancers' content and generate a novel enhancer annotation that complements the existing integrative annotation proposed by the ENCODE consortium. Next, we propose the first deep-learning computational

  20. International Space Station Accomplishments Update: Scientific Discovery, Advancing Future Exploration, and Benefits Brought Home to Earth

    Science.gov (United States)

    Thumm, Tracy; Robinson, Julie A.; Alleyne, Camille; Hasbrook, Pete; Mayo, Susan; Johnson-Green, Perry; Buckley, Nicole; Karabadzhak, George; Kamigaichi, Shigeki; Umemura, Sayaka; Sorokin, Igor V.; Zell, Martin; Istasse, Eric; Sabbagh, Jean; Pignataro, Salvatore

    2013-01-01

    Throughout the history of the International Space Station (ISS), crews on board have conducted a variety of scientific research and educational activities. Well into the second year of full utilization of the ISS laboratory, the trend of scientific accomplishments and educational opportunities continues to grow. More than 1500 investigations have been conducted on the ISS since the first module launched in 1998, with over 700 scientific publications. The ISS provides a unique environment for research, international collaboration and educational activities that benefit humankind. This paper will provide an up-to-date summary of key investigations, facilities, publications, and benefits from ISS research that have developed over the past year. Discoveries in human physiology and nutrition have enabled astronauts to return from ISS with little bone loss, even as scientists seek to better understand the new puzzle of "ocular syndrome" affecting the vision of up to half of astronauts. The geneLAB campaign will unify life sciences investigations to seek a genomic, proteomic, and metabolomic understanding of the effect of microgravity on life as a whole. Combustion scientists identified a new "cold flame" phenomenon that has the potential to improve models of efficient combustion back on Earth. A significant number of instruments in Earth remote sensing and astrophysics are providing new access to data or nearing completion for launch, making ISS a significant platform for understanding of the Earth system and the universe. In addition to multidisciplinary research, the ISS partnership conducts a myriad of student-led research investigations and educational activities aimed at increasing student interest in science, technology, engineering and mathematics (STEM). Over the past year, the ISS partnership compiled new statistics of the educational impact of the ISS on students around the world. More than 43 million students, from kindergarten to graduate school, with more than 28 million

  1. An Overview of Advances of Pattern Recognition Systems in Computer Vision

    OpenAIRE

    Kpalma, Kidiyo; Ronsin, Joseph

    2007-01-01

    As mentioned before, pattern recognition is not a new problem. Many studies have been performed in this scientific field and much work is currently under way. Pattern recognition is a wide topic in machine learning. It aims to classify a pattern into one of a number of classes. It appears in various fields like psychology, agriculture, computer vision, robotics, biometrics... With technological improvements and the growing performance of computer science, its application f...
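
    The classification task stated above can be made concrete with a toy example; the nearest-centroid rule and the data below are illustrative placeholders, not methods surveyed in the paper.

```python
# Toy pattern recognition: assign a feature vector to one of several
# classes with a nearest-centroid rule. Data and labels are made up.
def centroid(points):
    dims = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dims)]


def classify(x, centroids):
    # pick the class whose centroid is closest in squared Euclidean distance
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))


training = {
    "class_A": [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8]],
    "class_B": [[4.0, 4.2], [3.8, 4.1], [4.1, 3.9]],
}
centroids = {label: centroid(pts) for label, pts in training.items()}
print(classify([1.1, 0.9], centroids))  # -> class_A
```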

  2. Vision 20/20: Automation and advanced computing in clinical radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Kevin L., E-mail: kevinmoore@ucsd.edu; Moiseenko, Vitali [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92093 (United States); Kagadis, George C. [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 26504 (Greece); McNutt, Todd R. [Department of Radiation Oncology and Molecular Radiation Science, School of Medicine, Johns Hopkins University, Baltimore, Maryland 21231 (United States); Mutic, Sasa [Department of Radiation Oncology, Washington University in St. Louis, St. Louis, Missouri 63110 (United States)

    2014-01-15

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy.

  3. Vision 20/20: Automation and advanced computing in clinical radiation oncology

    International Nuclear Information System (INIS)

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy

  4. More Than Pretty Pictures: How Translating Science Concepts into Pictures Advances Scientific Thinking

    Science.gov (United States)

    Frankel, Felice

    2010-02-01

    The judgment and decision-making required to render science visual clarifies thinking. One must decide on a hierarchy of information--what must be included and what might be left out? What is the main point of the visual? Just as in writing an article or responding to an essay question, we must understand and then plan what we want to "say" in a drawing or other forms of representation. And since a visual representation of a scientific concept (or data) is a re-presentation, and not the thing itself, interpretation or translation is involved. The process tends to transcend barriers of linguistic facility and educational background; it attracts and engages students and teachers of all backgrounds where other methods intimidate. The rendered images are, in essence, More Than Pretty Pictures.

  5. Advanced Computational Models for Accelerator-Driven Systems

    International Nuclear Information System (INIS)

    In the nuclear engineering scientific community, Accelerator Driven Systems (ADSs) have been proposed and investigated for the transmutation of nuclear waste, especially plutonium and minor actinides. These fuels have a quite low effective delayed neutron fraction relative to uranium fuel, therefore the subcriticality of the core offers a unique safety feature with respect to critical reactors. The intrinsic safety of ADS allows the elimination of the operational control rods, hence the reactivity excess during burnup can be managed by the intensity of the proton beam, fuel shuffling, and eventually by burnable poisons. However, the intrinsic safety of a subcritical system does not guarantee that ADSs are immune from severe accidents (core melting), since the decay heat of an ADS is very similar to that of a critical system. Normally, ADSs operate with an effective multiplication factor between 0.98 and 0.92, which means that the spallation neutron source contributes little to the neutron population. In addition, for 1 GeV incident protons and a lead-bismuth target, about 50% of the spallation neutrons have energies below 1 MeV and only 15% have energies above 3 MeV. In the light of these remarks, the transmutation performances of ADS are very close to those of critical reactors.
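
    The quoted k_eff range can be turned into a rough feel for how much the external source matters. The snippet below is an illustrative calculation, not part of the record, using the standard estimate that each source neutron gives rise to roughly 1/(1 - k_eff) neutrons in total.

```python
# Illustrative estimate (not from the record): in a source-driven subcritical core,
# each spallation neutron starts a finite fission chain, so the total number of
# neutrons per source neutron is roughly 1 / (1 - k_eff).
for k_eff in (0.98, 0.95, 0.92):
    neutrons_per_source = 1.0 / (1.0 - k_eff)
    print(f"k_eff = {k_eff:.2f}: ~{neutrons_per_source:.0f} neutrons per source neutron, "
          f"so only ~{100 / neutrons_per_source:.0f}% of the population comes from the source itself")
```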

  6. NPP advanced pipework technologies: recent backfitting projects and computational analyses

    International Nuclear Information System (INIS)

    Some recent international NPP projects involving advanced engineering services for existing installations, such as replacement of valves, containment penetrations and pipework as well as design and installation of pipe supports and dynamic restraints, are summarized. Examples of thermomechanical analyses of operational phenomena performed as part of comprehensive plant design, licensing, and commissioning support activities are presented. Turnkey project management with a system function warranty resulted in the most effective use of all resources, drawn together from several highly qualified subcontractors and international equipment manufacturers working under the supervision of an in-house team. Experience collected to date in backfitting various plants of different design and age provides a strong knowledge basis. It is available for evaluating any plant currently in operation or under construction in order to check the need for modifications and recommend the appropriate scheduling and level of effort. (authors)

  7. Advances in Computational Social Science and Social Simulation

    OpenAIRE

    Miguel Quesada, Francisco J.; Amblard, Frédéric; Juan A. Barceló; Madella, Marco; Aguirre, Cristián; Ahrweiler, Petra; Aldred, Rachel; Ali Abbas, Syed Muhammad; Lopez Rojas, Edgar Alonso; Alonso Betanzos, Amparo; Alvarez Galvez, Javier; Andrighetto, Giulia; Antunes, Luis; Araghi, Yashar; Asatani, Kimitaka

    2014-01-01

    This conference is the joint celebration of the 10th Artificial Economics Conference (AE), the 10th Conference of the European Social Simulation Association (ESSA) and the 1st Simulating the Past to Understand Human History (SPUHH) conference. It was organized by the Laboratory for Socio-Historical Dynamics Simulation (LSDS-UAB) of the Universitat Autònoma de Barcelona. Readers will find results of recent research on computational social science and social simulation economics, management, so...

  8. Advanced and intelligent computations in diagnosis and control

    CERN Document Server

    2016-01-01

    This book is devoted to the demands of research and industrial centers for diagnostics, monitoring and decision making systems that result from the increasing complexity of automation and systems, the need to ensure the highest level of reliability and safety, and continuing research and the development of innovative approaches to fault diagnosis. The contributions combine domains of engineering knowledge for diagnosis, including detection, isolation, localization, identification, reconfiguration and fault-tolerant control. The book is divided into six parts:  (I) Fault Detection and Isolation; (II) Estimation and Identification; (III) Robust and Fault Tolerant Control; (IV) Industrial and Medical Diagnostics; (V) Artificial Intelligence; (VI) Expert and Computer Systems.

  9. New Sensors for the Advanced Test Reactor National Scientific User Facility

    International Nuclear Information System (INIS)

    A key component of the ATR NSUF effort is to develop and evaluate new in-pile instrumentation techniques that are capable of providing real-time measurements of key parameters during irradiation. This paper describes the strategy for selecting the instrumentation that is needed, and the program established for developing new or enhanced sensors that can address these needs. Accomplishments from this program are illustrated by describing new sensors now available to users of the ATR NSUF, together with data from irradiation tests using these sensors. In addition, progress is reported on current research efforts to provide users with advanced methods for detecting temperature, fuel thermal conductivity, and changes in sample geometry.

  10. Advanced separation and transmutation, long dated behavior of vitrified wastes: 15 years of scientific researches

    International Nuclear Information System (INIS)

    This report presents the results of 15 years of research at the CEA concerning the separation and transmutation of radioactive wastes and the conditioning and long-term surface storage of wastes. This research was requested in the framework of the Bataille law. The first part, devoted to the transmutation and separation of long-lived radioactive elements, presents the challenges, advanced separation, transmutation, and the evaluation of the research. The second part, devoted to long-term storage, discusses the vitrification of high-activity wastes, the behavior of vitrified waste packages after thousands of years, international research, and the evaluation of the research. (A.L.B.)

  11. N286.7-99, A Canadian standard specifying software quality management system requirements for analytical, scientific, and design computer programs and its implementation at AECL

    International Nuclear Information System (INIS)

    Analytical, scientific, and design computer programs (referred to in this paper as 'scientific computer programs') are developed for use in a large number of ways by the user-engineer to support and prove engineering calculations and assumptions. These computer programs are subject to frequent modifications inherent in their application and are often used for critical calculations and analysis relative to safety and functionality of equipment and systems. N286.7-99(4) was developed to establish appropriate quality management system requirements to deal with the development, modification, and application of scientific computer programs. N286.7-99 provides particular guidance regarding the treatment of legacy codes

  12. Computational Efforts in Support of Advanced Coal Research

    Energy Technology Data Exchange (ETDEWEB)

    Suljo Linic

    2006-08-17

    The focus in this project was to employ first principles computational methods to study the underlying molecular elementary processes that govern hydrogen diffusion through Pd membranes as well as the elementary processes that govern the CO- and S-poisoning of these membranes. Our computational methodology integrated a multiscale hierarchical modeling approach, wherein a molecular understanding of the interactions between various species is gained from ab-initio quantum chemical Density Functional Theory (DFT) calculations, while a mesoscopic statistical mechanical model like Kinetic Monte Carlo is employed to predict the key macroscopic membrane properties such as permeability. The key developments are: (1) We have systematically coupled the ab initio calculations with Kinetic Monte Carlo (KMC) simulations to model hydrogen diffusion through the Pd-based membranes. The predicted tracer diffusivity of hydrogen atoms through the bulk of the Pd lattice from KMC simulations is in excellent agreement with experiments. (2) The KMC simulations of dissociative adsorption of H2 over the Pd(111) surface indicate that for thin membranes (less than 10 µm thick), the diffusion of hydrogen from the surface to the first subsurface layer is rate limiting. (3) Sulfur poisons the Pd surface by altering the electronic structure of the Pd atoms in the vicinity of the S atom. The KMC simulations indicate that increasing sulfur coverage drastically reduces the hydrogen coverage on the Pd surface and hence the driving force for diffusion through the membrane.
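
    The record does not include the project's codes; the sketch below is a minimal lattice kinetic Monte Carlo of the kind used to extract a tracer diffusivity from hop events, with a made-up hop rate, jump distance and square-lattice geometry standing in for the DFT-derived inputs.

```python
import numpy as np

# Hypothetical parameters, for illustration only (not the DFT-derived values).
HOP_RATE = 1.0e10      # hops per second toward each neighbouring site
LATTICE_A = 2.75e-10   # jump distance in metres
N_WALKERS = 2000
N_HOPS = 500

rng = np.random.default_rng(0)
moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])  # square-lattice neighbours
total_rate = 4 * HOP_RATE                             # four equivalent hops per site

pos = np.zeros((N_WALKERS, 2))
t = np.zeros(N_WALKERS)

for _ in range(N_HOPS):
    # KMC step: advance each walker's clock by an exponentially distributed
    # residence time, then carry out one of the equally likely hops.
    t += rng.exponential(1.0 / total_rate, size=N_WALKERS)
    pos += moves[rng.integers(0, 4, size=N_WALKERS)]

msd = np.mean(np.sum((pos * LATTICE_A) ** 2, axis=1))  # mean squared displacement
D = msd / (4 * np.mean(t))                             # Einstein relation in 2D: MSD = 4 D t
print(f"estimated tracer diffusivity ~ {D:.3e} m^2/s")
```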

  13. An application of computer algebra system CADABRA to scientific problems of physics

    International Nuclear Information System (INIS)

    In this article we present two examples solved in the new problem-oriented computer algebra system Cadabra. Solving the same examples in the widespread general-purpose computer algebra system Maple turns out to be more difficult.

  14. Advances in x-ray computed microtomography at the NSLS

    International Nuclear Information System (INIS)

    The X-Ray Computed Microtomography workstation at beamline X27A at the NSLS has been utilized by scientists from a broad range of disciplines from industrial materials processing to environmental science. The most recent applications are presented here as well as a description of the facility that has evolved to accommodate a wide variety of materials and sample sizes. One of the most exciting new developments reported here resulted from a pursuit of faster reconstruction techniques. A Fast Filtered Back Transform (FFBT) reconstruction program has been developed and implemented, that is based on a refinement of the gridding algorithm first developed for use with radio astronomical data. This program has reduced the reconstruction time to 8.5 sec for a 929 x 929 pixel² slice on an R10,000 CPU, more than 8x reduction compared with the Filtered Back-Projection method.
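
    The FFBT code itself is not reproduced in this record; for orientation, the sketch below implements the conventional filtered back-projection baseline against which the 8x speed-up is quoted, using a simple ramp filter and nearest-neighbour back-projection in pure NumPy. Array sizes, angles and the test sinogram are illustrative only.

```python
import numpy as np

def filtered_back_projection(sinogram, angles_deg):
    """Reconstruct a square image from parallel-beam projections.

    sinogram has shape (n_angles, n_detectors). This is the plain filtered
    back-projection baseline, not the faster gridding-based FFBT described above.
    """
    n_angles, n_det = sinogram.shape

    # Ramp (Ram-Lak) filter applied to each projection in the Fourier domain.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Back-project: each pixel accumulates the filtered projection value at the
    # detector coordinate it maps to under every view angle.
    centre = (n_det - 1) / 2
    coords = np.arange(n_det) - centre
    X, Y = np.meshgrid(coords, coords, indexing="ij")
    image = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(theta) + Y * np.sin(theta) + centre
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        image += proj[idx]
    return image * np.pi / (2 * n_angles)

# Smoke test: a synthetic sinogram of a single point at the centre of the field.
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = np.zeros((angles.size, 129))
sino[:, 64] = 1.0                      # every view sees the central spike
recon = filtered_back_projection(sino, angles)
print("reconstructed peak is at the centre:",
      np.unravel_index(recon.argmax(), recon.shape) == (64, 64))
```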

  15. Advances in neural networks computational and theoretical issues

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2015-01-01

    This book collects research works that exploit neural networks and machine learning techniques from a multidisciplinary perspective. Subjects covered include theoretical, methodological and computational topics which are grouped together into chapters devoted to the discussion of novelties and innovations related to the field of Artificial Neural Networks as well as the use of neural networks for applications, pattern recognition, signal processing, and special topics such as the detection and recognition of multimodal emotional expressions and daily cognitive functions, and  bio-inspired memristor-based networks.  Providing insights into the latest research interest from a pool of international experts coming from different research fields, the volume becomes valuable to all those with any interest in a holistic approach to implement believable, autonomous, adaptive, and context-aware Information Communication Technologies.

  16. ADVANCES IN X-RAY COMPUTED MICROTOMOGRAPHY AT THE NSLS.

    Energy Technology Data Exchange (ETDEWEB)

    DOWD,B.A.

    1998-08-07

    The X-Ray Computed Microtomography workstation at beamline X27A at the NSLS has been utilized by scientists from a broad range of disciplines from industrial materials processing to environmental science. The most recent applications are presented here as well as a description of the facility that has evolved to accommodate a wide variety of materials and sample sizes. One of the most exciting new developments reported here resulted from a pursuit of faster reconstruction techniques. A Fast Filtered Back Transform (FFBT) reconstruction program has been developed and implemented, that is based on a refinement of the "gridding" algorithm first developed for use with radio astronomical data. This program has reduced the reconstruction time to 8.5 sec for a 929 x 929 pixel² slice on an R10,000 CPU, more than 8x reduction compared with the Filtered Back-Projection method.

  17. Advances in x-ray computed microtomography at the NSLS

    Energy Technology Data Exchange (ETDEWEB)

    Dowd, B.A.; Andrews, A.B.; Marr, R.B.; Siddons, D.P.; Jones, K.W.; Peskin, A.M.

    1998-08-01

    The X-Ray Computed Microtomography workstation at beamline X27A at the NSLS has been utilized by scientists from a broad range of disciplines from industrial materials processing to environmental science. The most recent applications are presented here as well as a description of the facility that has evolved to accommodate a wide variety of materials and sample sizes. One of the most exciting new developments reported here resulted from a pursuit of faster reconstruction techniques. A Fast Filtered Back Transform (FFBT) reconstruction program has been developed and implemented, that is based on a refinement of the gridding algorithm first developed for use with radio astronomical data. This program has reduced the reconstruction time to 8.5 sec for a 929 x 929 pixel² slice on an R10,000 CPU, more than 8x reduction compared with the Filtered Back-Projection method.

  18. Recent advances in computational intelligence in defense and security

    CERN Document Server

    Falcon, Rafael; Zincir-Heywood, Nur; Abbass, Hussein

    2016-01-01

    This volume is an initiative undertaken by the IEEE Computational Intelligence Society’s Task Force on Security, Surveillance and Defense to consolidate and disseminate the role of CI techniques in the design, development and deployment of security and defense solutions. Applications range from the detection of buried explosive hazards in a battlefield to the control of unmanned underwater vehicles, the delivery of superior video analytics for protecting critical infrastructures or the development of stronger intrusion detection systems and the design of military surveillance networks. Defense scientists, industry experts, academicians and practitioners alike will all benefit from the wide spectrum of successful applications compiled in this volume. Senior undergraduate or graduate students may also discover uncharted territory for their own research endeavors.

  19. Experimental and computing strategies in advanced material characterization problems

    International Nuclear Information System (INIS)

    The mechanical characterization of materials relies more and more often on sophisticated experimental methods that make it possible to acquire a large amount of data and, at the same time, to reduce the invasiveness of the tests. This evolution accompanies the growing demand for non-destructive diagnostic tools that assess the safety level of components in use in structures and infrastructures, for instance in the strategic energy sector. Advanced material systems and properties that are not amenable to traditional techniques, for instance thin layered structures and their adhesion to the relevant substrates, can also be characterized by means of combined experimental-numerical tools that elaborate data acquired by full-field measurement techniques. In this context, parameter identification procedures involve the repeated simulation of the laboratory or in situ tests by sophisticated and usually expensive non-linear analyses while, in some situations, reliable and accurate results would be required in real time. The effectiveness and the filtering capabilities of reduced models based on decomposition and interpolation techniques can be profitably used to meet these conflicting requirements. This communication intends to summarize some results recently achieved in this field by the author and her co-workers. The aim is to foster further interaction between the engineering and mathematical communities.

  20. Experimental and computing strategies in advanced material characterization problems

    Science.gov (United States)

    Bolzon, G.

    2015-10-01

    The mechanical characterization of materials relies more and more often on sophisticated experimental methods that make it possible to acquire a large amount of data and, at the same time, to reduce the invasiveness of the tests. This evolution accompanies the growing demand for non-destructive diagnostic tools that assess the safety level of components in use in structures and infrastructures, for instance in the strategic energy sector. Advanced material systems and properties that are not amenable to traditional techniques, for instance thin layered structures and their adhesion to the relevant substrates, can also be characterized by means of combined experimental-numerical tools that elaborate data acquired by full-field measurement techniques. In this context, parameter identification procedures involve the repeated simulation of the laboratory or in situ tests by sophisticated and usually expensive non-linear analyses while, in some situations, reliable and accurate results would be required in real time. The effectiveness and the filtering capabilities of reduced models based on decomposition and interpolation techniques can be profitably used to meet these conflicting requirements. This communication intends to summarize some results recently achieved in this field by the author and her co-workers. The aim is to foster further interaction between the engineering and mathematical communities.

  1. Experimental and computing strategies in advanced material characterization problems

    Energy Technology Data Exchange (ETDEWEB)

    Bolzon, G. [Department of Civil and Environmental Engineering, Politecnico di Milano, piazza Leonardo da Vinci 32, 20133 Milano, Italy gabriella.bolzon@polimi.it (Italy)

    2015-10-28

    The mechanical characterization of materials relies more and more often on sophisticated experimental methods that make it possible to acquire a large amount of data and, at the same time, to reduce the invasiveness of the tests. This evolution accompanies the growing demand for non-destructive diagnostic tools that assess the safety level of components in use in structures and infrastructures, for instance in the strategic energy sector. Advanced material systems and properties that are not amenable to traditional techniques, for instance thin layered structures and their adhesion to the relevant substrates, can also be characterized by means of combined experimental-numerical tools that elaborate data acquired by full-field measurement techniques. In this context, parameter identification procedures involve the repeated simulation of the laboratory or in situ tests by sophisticated and usually expensive non-linear analyses while, in some situations, reliable and accurate results would be required in real time. The effectiveness and the filtering capabilities of reduced models based on decomposition and interpolation techniques can be profitably used to meet these conflicting requirements. This communication intends to summarize some results recently achieved in this field by the author and her co-workers. The aim is to foster further interaction between the engineering and mathematical communities.

  2. Scientific Advances with Aspergillus Species that Are Used for Food and Biotech Applications.

    Science.gov (United States)

    Biesebeke, Rob Te; Record, Erik

    2008-01-01

    Yeast and filamentous fungi have been used for centuries in diverse biotechnological processes. Fungal fermentation technology is traditionally used in relation to food production, such as for bread, beer, cheese, sake and soy sauce. Last century, the industrial application of yeast and filamentous fungi expanded rapidly, with excellent examples such as purified enzymes and secondary metabolites (e.g. antibiotics), which are used in a wide range of food as well as non-food industries. Research on protein and/or metabolite secretion by fungal species has focused on identifying bottlenecks in (post-) transcriptional regulation of protein production, metabolic rerouting, morphology and the transit of proteins through the secretion pathway. In past years, genome sequencing of some fungi (e.g. Aspergillus oryzae, Aspergillus niger) has been completed. The available genome sequences have enabled identification of genes and functionally important regions of the genome. This has directed research to focus on a post-genomics era in which transcriptomics, proteomics and metabolomics methodologies will help to explore the scientific relevance and industrial application of fungal genome sequences. PMID:21558706

  3. The Center for Technology for Advanced Scientific Component Software (TASCS) Lawrence Livermore National Laboratory - Site Status Update

    Energy Technology Data Exchange (ETDEWEB)

    Epperly, T W

    2008-12-03

    This report summarizes LLNL's progress for the period April through September of 2008 for the Center for Technology for Advanced Scientific Component Software (TASCS) SciDAC. The TASCS project is organized into four major thrust areas: CCA Environment (72%), Component Technology Initiatives (16%), CCA Toolkit (8%), and User and Application Outreach & Support (4%). The percentage of LLNL's effort allocation is shown in parenthesis for each thrust area. Major thrust areas are further broken down into activity areas, LLNL's effort directed to each activity is shown in Figure 1. Enhancements, Core Tools, and Usability are all part of CCA Environment, and Software Quality is part of Component Technology Initiatives. The balance of this report will cover our accomplishments in each of these activity areas.

  4. Presentation an approach for scientific workflow distribution on cloud computing data centers servers to optimization usage of computational resource

    Directory of Open Access Journals (Sweden)

    Ahmad Faraahi

    2012-01-01

    Full Text Available In a distributed system, scheduling and mapping prioritized tasks onto processors is one of the issues that has attracted the most attention. The problem consists of mapping a DAG with a set of tasks onto a number of parallel processors; its purpose is to allocate tasks to the available processors so as to satisfy the precedence and dependency constraints among the tasks and to minimize the total execution time of the graph. A cloud computing system is one of the distributed systems that provides suitable conditions for executing these kinds of applications, but because payment in such a system is based on the rate of resource usage, reducing computation costs must also be considered as a further objective. In this article, we present an approach to reduce computation costs and to increase the utilization of computational power in a cloud computing system. Simulations show that, to a considerable extent, the presented approach decreases the costs of using computational resources.
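
    The scheduling approach itself is not spelled out in the abstract; the sketch below illustrates the generic list-scheduling baseline such work builds on: order the DAG topologically, place each task on the machine that finishes it earliest, and bill the used hours. The workflow, VM count and hourly price are invented for the example.

```python
from collections import defaultdict

# Hypothetical workflow: task -> (runtime in hours, list of predecessor tasks).
workflow = {
    "ingest":   (1.0, []),
    "clean":    (2.0, ["ingest"]),
    "simulate": (4.0, ["clean"]),
    "render":   (1.5, ["clean"]),
    "report":   (0.5, ["simulate", "render"]),
}
N_VMS = 2
PRICE_PER_HOUR = 0.10  # made-up pay-per-use rate

# Topological order: repeatedly pick tasks whose predecessors are already scheduled.
scheduled, order = set(), []
while len(order) < len(workflow):
    for task, (_, preds) in workflow.items():
        if task not in scheduled and all(p in scheduled for p in preds):
            order.append(task)
            scheduled.add(task)

vm_free = [0.0] * N_VMS          # time at which each VM becomes idle
finish = {}                      # task -> finish time
billed = defaultdict(float)      # vm -> total billed hours

for task in order:
    runtime, preds = workflow[task]
    ready = max((finish[p] for p in preds), default=0.0)
    # Earliest-finish-time heuristic: place the task on the VM that lets it finish first.
    vm = min(range(N_VMS), key=lambda i: max(vm_free[i], ready) + runtime)
    start = max(vm_free[vm], ready)
    finish[task] = start + runtime
    vm_free[vm] = finish[task]
    billed[vm] += runtime

makespan = max(finish.values())
cost = sum(billed.values()) * PRICE_PER_HOUR
print(f"makespan = {makespan:.1f} h, usage cost = ${cost:.2f}")
```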

  5. 5th Conference on Advanced Mathematical and Computational Tools in Metrology

    CERN Document Server

    Cox, M G; Filipe, E; Pavese, F; Richter, D

    2001-01-01

    Advances in metrology depend on improvements in scientific and technical knowledge and in instrumentation quality, as well as on better use of advanced mathematical tools and development of new ones. In this volume, scientists from both the mathematical and the metrological fields exchange their experiences. Industrial sectors, such as instrumentation and software, will benefit from this exchange, since metrology has a high impact on the overall quality of industrial products, and applied mathematics is becoming more and more important in industrial processes.This book is of interest to people

  6. The National Center for Biomedical Ontology: Advancing Biomedicinethrough Structured Organization of Scientific Knowledge

    Energy Technology Data Exchange (ETDEWEB)

    Rubin, Daniel L.; Lewis, Suzanna E.; Mungall, Chris J.; Misra,Sima; Westerfield, Monte; Ashburner, Michael; Sim, Ida; Chute,Christopher G.; Solbrig, Harold; Storey, Margaret-Anne; Smith, Barry; Day-Richter, John; Noy, Natalya F.; Musen, Mark A.

    2006-01-23

    The National Center for Biomedical Ontology (http://bioontology.org) is a consortium that comprises leading informaticians, biologists, clinicians, and ontologists funded by the NIH Roadmap to develop innovative technology and methods that allow scientists to record, manage, and disseminate biomedical information and knowledge in machine-processable form. The goals of the Center are: (1) to help unify the divergent and isolated efforts in ontology development by promoting high quality open-source, standards-based tools to create, manage, and use ontologies, (2) to create new software tools so that scientists can use ontologies to annotate and analyze biomedical data, (3) to provide a national resource for the ongoing evaluation, integration, and evolution of biomedical ontologies and associated tools and theories in the context of driving biomedical projects (DBPs), and (4) to disseminate the tools and resources of the Center and to identify, evaluate, and communicate best practices of ontology development to the biomedical community. The Center is working toward these objectives by providing tools to develop ontologies and to annotate experimental data, and by developing resources to integrate and relate existing ontologies as well as by creating repositories of biomedical data that are annotated using those ontologies. The Center is providing training workshops in ontology design, development, and usage, and is also pursuing research in ontology evaluation, quality, and use of ontologies to promote scientific discovery. Through the research activities within the Center, collaborations with the DBPs, and interactions with the biomedical community, our goal is to help scientists to work more effectively in the e-science paradigm, enhancing experiment design, experiment execution, data analysis, information synthesis, hypothesis generation and testing, and understand human disease.

  7. Recent developments in the INAA advance prediction computer program

    International Nuclear Information System (INIS)

    Full text: The INAA APCP developed at U.C. Irvine and tested experimentally with the UCI 250 kW TRIGA Mark I reactor has been very successful and useful. Commencing several years ago, copies of it have been distributed to many laboratories that requested them, quite a few of whom have put the program to good use. The BASIC-PLUS program (run earlier at UCI on a PDP-11/45) includes all (n, γ) products, for both thermal-and-epithermal-neutron fluxes. The more extensive FORTRAN-IV program (run earlier at UCI on a DEC-10) - also includes fission-spectrum and 14 MeV-neutron fast-neutron products. The most recent extension of the INAA APCP, to be discussed, is an (n, γ) program for high-intensity (e.g., 1000 MW) TRIGA pulses. At present, both the steady-state and pulsing (n, γ) programs are being rewritten for use with an IBM (or IBM-compatible) Personal Computer. Once the PC version of the steady-state (both thermal and epithermal) APCP is operational, processing of a large number of reference materials of interest to activation analysts (NGS SRM's, IAEA and USGS reference materials, etc.) will be continued and the compiled results made available. (author)

  8. Block sparse Cholesky algorithms on advanced uniprocessor computers

    Energy Technology Data Exchange (ETDEWEB)

    Ng, E.G.; Peyton, B.W.

    1991-12-01

    As with many other linear algebra algorithms, devising a portable implementation of sparse Cholesky factorization that performs well on the broad range of computer architectures currently available is a formidable challenge. Even after limiting our attention to machines with only one processor, as we have done in this report, there are still several interesting issues to consider. For dense matrices, it is well known that block factorization algorithms are the best means of achieving this goal. We take this approach for sparse factorization as well. This paper has two primary goals. First, we examine two sparse Cholesky factorization algorithms, the multifrontal method and a blocked left-looking sparse Cholesky method, in a systematic and consistent fashion, both to illustrate the strengths of the blocking techniques in general and to obtain a fair evaluation of the two approaches. Second, we assess the impact of various implementation techniques on time and storage efficiency, paying particularly close attention to the work-storage requirement of the two methods and their variants.
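
    Both sparse methods discussed in the report ultimately spend their time in dense blocked kernels. The sketch below shows a dense right-looking blocked Cholesky factorization, which illustrates the blocking idea only; it is not the report's sparse multifrontal or left-looking code, and the block size is arbitrary.

```python
import numpy as np

def blocked_cholesky(A, nb=64):
    """Lower-triangular Cholesky factor of a symmetric positive-definite matrix.

    Dense, right-looking, blocked variant: factor a diagonal block, solve the
    panel below it, then apply a rank-nb update to the trailing submatrix.
    This is the dense kernel that supernodal/multifrontal sparse codes apply
    to their dense blocks; it is not a sparse algorithm itself.
    """
    A = np.array(A, dtype=float)        # work on a copy
    n = A.shape[0]
    for k in range(0, n, nb):
        end = min(k + nb, n)
        # 1. Factor the diagonal block in place.
        A[k:end, k:end] = np.linalg.cholesky(A[k:end, k:end])
        if end < n:
            L11 = A[k:end, k:end]
            # 2. Panel: L21 = A21 * L11^{-T} (a production code would use a triangular solve).
            A[end:, k:end] = np.linalg.solve(L11, A[end:, k:end].T).T
            # 3. Trailing update: A22 := A22 - L21 * L21^T.
            L21 = A[end:, k:end]
            A[end:, end:] -= L21 @ L21.T
    return np.tril(A)

# Smoke test against NumPy's unblocked factorization.
rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
spd = M @ M.T + 200 * np.eye(200)
assert np.allclose(blocked_cholesky(spd, nb=48), np.linalg.cholesky(spd))
```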

  9. Scientific Inquiry, Digital Literacy, and Mobile Computing in Informal Learning Environments

    Science.gov (United States)

    Marty, Paul F.; Alemanne, Nicole D.; Mendenhall, Anne; Maurya, Manisha; Southerland, Sherry A.; Sampson, Victor; Douglas, Ian; Kazmer, Michelle M.; Clark, Amanda; Schellinger, Jennifer

    2013-01-01

    Understanding the connections between scientific inquiry and digital literacy in informal learning environments is essential to furthering students' critical thinking and technology skills. The Habitat Tracker project combines a standards-based curriculum focused on the nature of science with an integrated system of online and mobile…

  10. Computational thinking and thinking about computing

    OpenAIRE

    Wing, Jeannette M.

    2008-01-01

    Computational thinking will influence everyone in every field of endeavour. This vision poses a new educational challenge for our society, especially for our children. In thinking about computing, we need to be attuned to the three drivers of our field: science, technology and society. Accelerating technological advances and monumental societal demands force us to revisit the most basic scientific questions of computing.

  11. Translating scientific advances to improved outcomes for children with sickle cell disease: a timely opportunity.

    Science.gov (United States)

    Raphael, Jean L; Kavanagh, Patricia L; Wang, C Jason; Mueller, Brigitta U; Zuckerman, Barry

    2011-07-01

    Despite the recent advances made in the care of children with sickle cell disease (SCD), premature mortality, especially among older children and young adults, remains a hallmark of this disease. The lack of survival gains highlights the translational gap of implementing innovations found efficacious in the controlled trial setting into routine clinical practice. Health services research (HSR) examines the most effective ways to finance, organize, and deliver high quality care in an equitable manner. To date, HSR has been underutilized as a means to improve the outcomes for children with SCD. Emerging national priorities in health care delivery, new sources of funding, and evolving electronic data collection systems for patients with SCD have provided a unique opportunity to overcome the translational gap in pediatric SCD. The purpose of this article is to provide a comprehensive HSR agenda to create patient-specific evidence of clinical effectiveness for interventions used in the routine care setting, understand the barriers faced by clinicians to providing high quality care, assess and improve the interactions of patients with the health care system, and measure the quality of care delivered to increase survival for all children and young adults with SCD. PMID:21488152

  12. The Geography of Scientific Productivity: Scaling in U.S. Computer Science

    OpenAIRE

    Carvalho, Rui; Batty, Michael

    2006-01-01

    Here we extract the geographical addresses of authors in the Citeseer database of computer science papers. We show that the productivity of research centres in the United States follows a power-law regime, apart from the most productive centres for which we do not have enough data to reach definite conclusions. To investigate the spatial distribution of computer science research centres in the United States, we compute the two-point correlation function of the spatial point process and show t...

  13. ADVANCED COMPUTATIONAL MODEL FOR THREE-PHASE SLURRY REACTORS

    International Nuclear Information System (INIS)

    In this project, an Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column was developed. The approach used an Eulerian analysis of liquid flows in the bubble column, and made use of the Lagrangian trajectory analysis for the bubble and particle motions. Bubble-bubble and particle-particle collisions are included in the model. The model predictions were compared with the experimental data and good agreement was found. An experimental setup for studying two-dimensional bubble columns was developed. The multiphase flow conditions in the bubble column were measured using optical image processing and Particle Image Velocimetry (PIV) techniques. A simple shear flow device for bubble motion in a constant shear flow field was also developed. The flow conditions in the simple shear flow device were studied using the PIV method. The concentration and velocity of particles of different sizes near a wall in a duct flow were also measured. The technique of Phase-Doppler anemometry was used in these studies. An Eulerian volume of fluid (VOF) computational model for the flow conditions in the two-dimensional bubble column was also developed. The liquid and bubble motions were analyzed and the results were compared with observed flow patterns in the experimental setup. Solid-fluid mixture flows in ducts and passages at different angles of orientation were also analyzed. The model predictions were compared with the experimental data and good agreement was found. Gravity chute flows of solid-liquid mixtures were also studied. The simulation results were compared with the experimental data and discussed. A thermodynamically consistent model for multiphase slurry flows with and without chemical reaction in a state of turbulent motion was developed. The balance laws were obtained and the constitutive laws established.
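
    As a minimal illustration of the Lagrangian half of such a formulation, the sketch below tracks a single particle through a prescribed liquid velocity field using Stokes drag and buoyancy-corrected gravity; all material properties and the flow field are invented, and the one-way coupling ignores the collision and turbulence modelling described above.

```python
import numpy as np

# Illustrative only: one-way-coupled Lagrangian tracking of a single particle in a
# prescribed liquid velocity field. The project's model is far richer (Eulerian liquid
# solve, bubble/particle collisions, turbulence); every number here is made up.
RHO_P, RHO_F = 2500.0, 1000.0        # particle and liquid densities, kg/m^3
D_P = 200e-6                         # particle diameter, m
MU = 1.0e-3                          # liquid viscosity, Pa s
G = np.array([0.0, -9.81])           # gravity, m/s^2
TAU_P = RHO_P * D_P**2 / (18 * MU)   # Stokes relaxation time

def liquid_velocity(x):
    """Placeholder liquid field: a gentle recirculating pattern."""
    return np.array([0.05 * np.sin(np.pi * x[1]), 0.05 * np.cos(np.pi * x[0])])

x = np.array([0.1, 0.5])             # initial position, m
v = np.zeros(2)                      # initial velocity, m/s
dt = 1e-4
for _ in range(20000):               # 2 s of physical time, explicit Euler
    u = liquid_velocity(x)
    # Equation of motion: drag toward the local liquid velocity plus net gravity/buoyancy.
    a = (u - v) / TAU_P + G * (1.0 - RHO_F / RHO_P)
    v = v + dt * a
    x = x + dt * v
print("final position:", x, "final velocity:", v)
```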

  14. ADVANCED COMPUTATIONAL MODEL FOR THREE-PHASE SLURRY REACTORS

    Energy Technology Data Exchange (ETDEWEB)

    Goodarz Ahmadi

    2004-10-01

    In this project, an Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column was developed. The approach used an Eulerian analysis of liquid flows in the bubble column, and made use of the Lagrangian trajectory analysis for the bubble and particle motions. Bubble-bubble and particle-particle collisions are included in the model. The model predictions were compared with the experimental data and good agreement was found. An experimental setup for studying two-dimensional bubble columns was developed. The multiphase flow conditions in the bubble column were measured using optical image processing and Particle Image Velocimetry (PIV) techniques. A simple shear flow device for bubble motion in a constant shear flow field was also developed. The flow conditions in the simple shear flow device were studied using the PIV method. The concentration and velocity of particles of different sizes near a wall in a duct flow were also measured. The technique of Phase-Doppler anemometry was used in these studies. An Eulerian volume of fluid (VOF) computational model for the flow conditions in the two-dimensional bubble column was also developed. The liquid and bubble motions were analyzed and the results were compared with observed flow patterns in the experimental setup. Solid-fluid mixture flows in ducts and passages at different angles of orientation were also analyzed. The model predictions were compared with the experimental data and good agreement was found. Gravity chute flows of solid-liquid mixtures were also studied. The simulation results were compared with the experimental data and discussed. A thermodynamically consistent model for multiphase slurry flows with and without chemical reaction in a state of turbulent motion was developed. The balance laws were obtained and the constitutive laws established.

  15. ADVANCED METHODS FOR THE COMPUTATION OF PARTICLE BEAM TRANSPORT AND THE COMPUTATION OF ELECTROMAGNETIC FIELDS AND MULTIPARTICLE PHENOMENA

    Energy Technology Data Exchange (ETDEWEB)

    Alex J. Dragt

    2012-08-31

    Since 1980, under the grant DEFG02-96ER40949, the Department of Energy has supported the educational and research work of the University of Maryland Dynamical Systems and Accelerator Theory (DSAT) Group. The primary focus of this educational/research group has been on the computation and analysis of charged-particle beam transport using Lie algebraic methods, and on advanced methods for the computation of electromagnetic fields and multiparticle phenomena. This Final Report summarizes the accomplishments of the DSAT Group from its inception in 1980 through its end in 2011.

  16. Scientific applications of language methods

    CERN Document Server

    Martin-Vide, Carlos

    2010-01-01

    Presenting interdisciplinary research at the forefront of present advances in information technologies and their foundations, "Scientific Applications of Language Methods" is a multi-author volume containing pieces of work (either original research or surveys) exemplifying the application of formal language tools in several fields, including logic and discrete mathematics, natural language processing, artificial intelligence, natural computing and bioinformatics.

  17. Computer sciences

    Science.gov (United States)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  18. Creating science-driven computer architecture: A new path to scientific leadership

    OpenAIRE

    McCurdy, C. William; Stevens, Rick; Simon, Horst; Kramer, William; Bailey, David; Johnston, William; Catlett, Charlie; Lusk, Rusty; Morgan, Thomas; Meza, Juan; Banda, Michael; Leighton, James; Hules, John

    2002-01-01

    This document proposes a multi-site strategy for creating a new class of computing capability for the U.S. by undertaking the research and development necessary to build supercomputers optimized for science in partnership with the American computer industry.

  19. Creating science-driven computer architecture: A new path to scientific leadership

    Energy Technology Data Exchange (ETDEWEB)

    McCurdy, C. William; Stevens, Rick; Simon, Horst; Kramer, William; Bailey, David; Johnston, William; Catlett, Charlie; Lusk, Rusty; Morgan, Thomas; Meza, Juan; Banda, Michael; Leighton, James; Hules, John

    2002-10-14

    This document proposes a multi-site strategy for creating a new class of computing capability for the U.S. by undertaking the research and development necessary to build supercomputers optimized for science in partnership with the American computer industry.

  20. NATO Advanced Study Institute on Advances in the Computer Simulations of Liquid Crystals

    CERN Document Server

    Zannoni, Claudio

    2000-01-01

    Computer simulations provide an essential set of tools for understanding the macroscopic properties of liquid crystals and of their phase transitions in terms of molecular models. While simulations of liquid crystals are based on the same general Monte Carlo and molecular dynamics techniques as are used for other fluids, they present a number of specific problems and peculiarities connected to the intrinsic properties of these mesophases. The field of computer simulations of anisotropic fluids is interdisciplinary and is evolving very rapidly. The present volume covers a variety of techniques and model systems, from lattices to hard particle and Gay-Berne to atomistic, for thermotropics, lyotropics, and some biologically interesting liquid crystals. Contributions are written by an excellent panel of international lecturers and provide a timely account of the techniques and problems in the field.

  1. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    Energy Technology Data Exchange (ETDEWEB)

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  2. Parallel scientific computing theory, algorithms, and applications of mesh based and meshless methods

    CERN Document Server

    Trobec, Roman

    2015-01-01

    This book is concentrated on the synergy between computer science and numerical analysis. It is written to provide a firm understanding of the described approaches to computer scientists, engineers or other experts who have to solve real problems. The meshless solution approach is described in more detail, with a description of the required algorithms and the methods that are needed for the design of an efficient computer program. Most of the details are demonstrated on solutions of practical problems, from basic to more complicated ones. This book will be a useful tool for any reader interes

  3. Advanced computational methods for the assessment of reactor core behaviour during reactivity initiated accidents. Final report

    International Nuclear Information System (INIS)

    The document at hand serves as the final report for the reactor safety research project RS1183 "Advanced Computational Methods for the Assessment of Reactor Core Behavior During Reactivity-Initiated Accidents". The work performed in the framework of this project was dedicated to the development, validation and application of advanced computational methods for the simulation of transients and accidents in nuclear installations. These simulation tools describe in particular the behavior of the reactor core (with respect to neutronics, thermal-hydraulics and thermal mechanics) at a very high level of detail. The overall goal of this project was the deployment of a modern nuclear computational chain which provides, besides advanced 3D tools for coupled neutronics/thermal-hydraulics full-core calculations, also appropriate tools for the generation of multi-group cross sections and Monte Carlo models for the verification of the individual calculational steps. This computational chain is primarily intended for light water reactors (LWR), but should also be applicable beyond that to innovative reactor concepts. Thus, validation on computational benchmarks and critical experiments was of paramount importance. Finally, appropriate methods for uncertainty and sensitivity analysis were to be integrated into the computational framework, in order to assess and quantify the uncertainties due to insufficient knowledge of data, as well as due to methodological aspects.

  4. Multithreaded transactions in scientific computing: New versions of a computer program for kinematical calculations of RHEED intensity oscillations

    Science.gov (United States)

    Brzuszek, Marcin; Daniluk, Andrzej

    2006-11-01

    Writing a concurrent program can be more difficult than writing a sequential program. The programmer needs to think about synchronisation, race conditions and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction which allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents multithreaded versions of the GROWTH program, which allow the layer coverages during the growth of thin epitaxial films, and the corresponding RHEED intensities, to be calculated according to the kinematical approximation. The presented programs also contain graphical user interfaces, which enable program data to be displayed at run time.

    New version program summary:
    Titles of programs: GROWTHGr, GROWTH06
    Catalogue identifier: ADVL_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Catalogue identifier of previous version: ADVL
    Does the new version supersede the original program: No
    Computer for which the new version is designed and others on which it has been tested: Pentium-based PC
    Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT
    Programming language used: Object Pascal
    Memory required to execute with typical data: More than 1 MB
    Number of bits in a word: 64 bits
    Number of processors used: 1
    No. of lines in distributed program, including test data, etc.: 20 931
    Number of bytes in distributed program, including test data, etc.: 1 311 268
    Distribution format: tar.gz
    Nature of physical problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the use of kinematical diffraction theory [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222. [1
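
    The kinematical picture behind such programs can be illustrated in a few lines. The sketch below is not the GROWTH code (which is written in Object Pascal); it evolves layer coverages with a deliberately simplified mean-field filling law, an assumption made here for brevity, and evaluates a kinematical intensity at the out-of-phase (anti-Bragg) diffraction condition. Layer count, flux and time step are arbitrary.

```python
import numpy as np

# Schematic only: mean-field layer filling plus a kinematical intensity at the
# anti-Bragg condition. The growth law (each layer fills in proportion to the
# exposed area of the layer beneath it) is an illustrative assumption.
N_LAYERS = 30
FLUX = 1.0                    # monolayers per unit time
DT = 1e-3
STEPS = 5000                  # roughly 5 monolayers of deposition

theta = np.zeros(N_LAYERS)    # theta[n] = coverage of layer n (substrate below is full)
phase = np.exp(1j * np.pi * np.arange(N_LAYERS + 1))  # anti-Bragg phase factors

intensities = []
for _ in range(STEPS):
    below = np.concatenate(([1.0], theta[:-1]))        # coverage of the layer underneath
    theta = np.clip(theta + DT * FLUX * (below - theta), 0.0, 1.0)
    # Exposed fraction of each level (substrate included), then the kinematical sum.
    cov = np.concatenate(([1.0], theta))
    exposed = cov - np.concatenate((theta, [0.0]))
    amplitude = np.sum(exposed * phase)
    intensities.append(abs(amplitude) ** 2)

print("intensity at the start and after ~1 and ~2 deposited monolayers:",
      [round(intensities[i], 3) for i in (0, 999, 1999)])
```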

  5. Recent Advances in Computational Simulation of Macro-, Meso-, and Micro-Scale Biomimetics Related Fluid Flow Problems

    Institute of Scientific and Technical Information of China (English)

    Y. Y. Yan

    2007-01-01

    Over the last decade, computational methods have been intensively applied to a variety of scientific research problems and engineering designs. Although the computational fluid dynamics (CFD) method has played a dominant role in studying and simulating transport phenomena involving fluid flow and heat and mass transfer, in recent years other numerical methods for simulations at meso- and micro-scales have also been actively applied to solve the physics of complex flow and fluid-interface interactions. This paper presents a review of recent advances in multi-scale computational simulation of biomimetics-related fluid flow problems. State-of-the-art numerical techniques, such as the lattice Boltzmann method (LBM), molecular dynamics (MD), and conventional CFD, applied to different problems such as fish flow, the electro-osmosis effect in earthworm motion, and self-cleaning hydrophobic surfaces, are introduced. The new challenge of modelling biomimetic problems in developing the physical conditions of self-cleaning hydrophobic surfaces is discussed.
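
    Of the techniques listed, the lattice Boltzmann method is the most compact to illustrate. The sketch below is a bare-bones D2Q9 BGK stream-and-collide loop on a periodic box, unrelated to the biomimetic solvers reviewed in the paper; grid size, relaxation time and the initial density bump are arbitrary.

```python
import numpy as np

# Bare-bones D2Q9 BGK lattice Boltzmann loop on a periodic box -- an illustration of
# the "stream and collide" structure only, not one of the solvers reviewed above.
NX, NY, TAU = 64, 64, 0.8
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])      # D2Q9 lattice velocities
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)            # matching lattice weights

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * (ux**2 + uy**2))

x, y = np.meshgrid(np.arange(NX), np.arange(NY), indexing="ij")
rho = 1.0 + 0.05 * np.exp(-((x - NX / 2)**2 + (y - NY / 2)**2) / 20.0)
f = equilibrium(rho, np.zeros((NX, NY)), np.zeros((NX, NY)))
mass0 = f.sum()

for _ in range(200):
    # Streaming: each population moves one lattice site along its velocity (periodic box).
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    # Collision: BGK relaxation toward the local equilibrium distribution.
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f -= (f - equilibrium(rho, ux, uy)) / TAU

print("mass conserved:", np.isclose(f.sum(), mass0))
```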

  6. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    Science.gov (United States)

    Lores, M. E.

    1978-01-01

    Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs by performing computations using Navier-Stokes equation solution algorithms and permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  7. Computer simulation: A new scientific approach to the study of language evolution

    OpenAIRE

    Cangelosi, Angelo; Parisi, Domenico

    2002-01-01

    (summary of the whole book) This volume provides a comprehensive survey of computational models and methodologies used for studying the origin and evolution of language and communication. With contributions from the most influential figures in the field, Simulating the Evolution of Language presents and summarises current computational approaches to language evolution and highlights new lines of development. Among the main discussion points are: · Analysis of emerging linguisti...

  8. Advances in multi-physics and high performance computing in support of nuclear reactor power systems modeling and simulation

    International Nuclear Information System (INIS)

    Significant advances in computational performance have occurred over the past two decades, achieved not only by the introduction of more powerful processors but also by the incorporation of parallelism in computer hardware at all levels. Simultaneous with these hardware and associated system software advances have been advances in modeling physical phenomena and in the numerical algorithms that allow their use in simulation. This paper presents a review of the advances in computer performance, discusses the modeling and simulation capabilities required to address the multi-physics and multi-scale phenomena applicable to a nuclear reactor core simulator, and presents examples of the performance of relevant physics simulation codes on high performance computers.

  9. Advanced Communication and Control for Distributed Energy Resource Integration: Phase 2 Scientific Report

    Energy Technology Data Exchange (ETDEWEB)

    BPL Global

    2008-09-30

    The objective of this research project is to demonstrate sensing, communication, information and control technologies to achieve a seamless integration of multivendor distributed energy resource (DER) units at aggregation levels that meet individual user requirements for facility operations (residential, commercial, industrial, manufacturing, etc.) and further serve as resource options for electric and natural gas utilities. The fully demonstrated DER aggregation system with embodiment of communication and control technologies will lead to real-time, interactive, customer-managed service networks to achieve greater customer value. Work on this Advanced Communication and Control Project (ACCP) consists of a two-phase approach for an integrated demonstration of communication and control technologies to achieve a seamless integration of DER units to reach progressive levels of aggregated power output. Phase I involved design and proof-of-design, and Phase II involves real-world demonstration of the Phase I design architecture. The scope of work for Phase II of this ACCP involves demonstrating the Phase I design architecture in large scale real-world settings while integrating with the operations of one or more electricity supplier feeder lines. The communication and control architectures for integrated demonstration shall encompass combinations of software and hardware components, including: sensors, data acquisition and communication systems, remote monitoring systems, metering (interval revenue, real-time), local and wide area networks, Web-based systems, smart controls, energy management/information systems with control and automation of building energy loads, and demand-response management with integration of real-time market pricing. For Phase II, BPL Global shall demonstrate the Phase I design for integrating and controlling the operation of more than 10 DER units, dispersed at various locations in one or more Independent System Operator (ISO) Control Areas, at

  10. High performance computing and communications: Advancing the frontiers of information technology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  11. Proceedings of the national conference on advanced communication and computing techniques

    International Nuclear Information System (INIS)

    The objective of the conference was to take stock of technological innovation aimed at the improvement and development of humanity. The main areas discussed at the conference were: advanced computer architecture, next-generation networking, optical wireless communication, wireless networking, embedded systems, etc. Papers relevant to INIS are indexed separately.

  12. Modelling and Computing the Quality of Scientific Information on the Web of Data

    OpenAIRE

    Gamble, Matthew Philip

    2014-01-01

    The Web is being transformed into an open data commons, and is now the dominant point of access for information seeking scientists. In parallel the scientific community has been required to manage the challenges of "Big Data" - characterized by its large-scale, distributed, and diverse nature. The Web of Linked Data has emerged as a platform through which the sciences can meet this challenge, allowing them to publish and reuse data in a machine readable manner. The openness of the Web of Dat...
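
    To make the idea of machine-readable quality information concrete, here is a minimal sketch (not taken from the thesis) that uses the Python rdflib library to publish a dataset description together with a simple quality score; the example.org namespace and the property names are assumptions made for illustration.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical vocabulary for quality statements (illustrative only)
EX = Namespace("http://example.org/quality#")

g = Graph()
dataset = URIRef("http://example.org/dataset/gene-expression-42")

# Publish the dataset and a simple machine-readable quality assessment
g.add((dataset, RDF.type, EX.Dataset))
g.add((dataset, EX.completenessScore, Literal(0.87, datatype=XSD.decimal)))
g.add((dataset, EX.assessedBy, URIRef("http://example.org/agent/quality-service")))

print(g.serialize(format="turtle"))
```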

  13. ScalaLab and GroovyLab: Comparing Scala and Groovy for Scientific Computing

    OpenAIRE

    2015-01-01

    ScalaLab and GroovyLab are both MATLAB-like environments for the Java Virtual Machine. ScalaLab is based on the Scala programming language and GroovyLab is based on the Groovy programming language. They present similar user interfaces and functionality to the user. They also share the same set of Java scientific libraries and native code libraries. From the programmer's point of view, though, they have significant differences. This paper compares some aspects of the two environments and hig...

  14. Science gateways for distributed computing infrastructures development framework and exploitation by scientific user communities

    CERN Document Server

    Kacsuk, Péter

    2014-01-01

    The book describes the science gateway building technology developed in the SCI-BUS European project and its adoption and customization method, by which user communities, such as biologists, chemists, and astrophysicists, can build customized, domain-specific science gateways. Many aspects of the core technology are explained in detail, including its workflow capability, job submission mechanism to various grids and clouds, and its data transfer mechanisms among several distributed infrastructures. The book will be useful for scientific researchers and IT professionals engaged in the develop
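
    As a rough, hypothetical sketch of what a gateway hides from its users, a domain portal might accept a small job description and forward it to a grid or cloud back end over HTTP; the endpoint, payload fields, and workflow name below are invented for illustration and are not the SCI-BUS API.

```python
import requests  # common third-party HTTP client

# Hypothetical gateway endpoint and job description; every field name is illustrative
GATEWAY_URL = "https://gateway.example.org/api/jobs"

job = {
    "workflow": "protein-docking-v2",       # a workflow pre-registered on the portal
    "inputs": {"ligand": "lig_001.pdbqt"},  # input files already staged by the user
    "target": "cloud",                      # back end the gateway should submit to
}

response = requests.post(GATEWAY_URL, json=job, timeout=30)
response.raise_for_status()
print("Submitted job id:", response.json().get("id"))
```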

  15. Advanced computational tools and methods for nuclear analyses of fusion technology systems

    International Nuclear Information System (INIS)

    An overview is presented of advanced computational tools and methods developed recently for nuclear analyses of Fusion Technology systems such as the experimental device ITER ('International Thermonuclear Experimental Reactor') and the intense neutron source IFMIF ('International Fusion Material Irradiation Facility'). These include Monte Carlo based computational schemes for the calculation of three-dimensional shut-down dose rate distributions, methods, codes and interfaces for the use of CAD geometry models in Monte Carlo transport calculations, algorithms for Monte Carlo based sensitivity/uncertainty calculations, as well as computational techniques and data for IFMIF neutronics and activation calculations. (author)
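
    The production tools referred to above are far more elaborate, but a toy Monte Carlo sketch in Python can illustrate the underlying transport idea: sampling free-flight distances to estimate how many particles cross a shielding slab without interacting. The cross-section and thickness values are arbitrary assumptions for the example.

```python
import math
import random

def transmitted_fraction(sigma_total, thickness_cm, n_particles=100_000, seed=1):
    """Toy Monte Carlo: fraction of mono-directional particles that cross a slab
    without interacting, for a single total macroscopic cross section (1/cm)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        # Sample the free-flight distance from an exponential distribution
        distance = -math.log(1.0 - rng.random()) / sigma_total
        if distance > thickness_cm:
            transmitted += 1
    return transmitted / n_particles

# Arbitrary illustrative values; the analytic answer is exp(-sigma * t)
print(transmitted_fraction(sigma_total=0.2, thickness_cm=10.0))
print(math.exp(-0.2 * 10.0))
```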

  16. A first attempt to bring computational biology into advanced high school biology classrooms.

    Directory of Open Access Journals (Sweden)

    Suzanne Renick Gallagher

    2011-10-01

    Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element on genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.
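
    The authors' own materials are available at the link above; purely to illustrate the kind of exercise such a genetic-evolution unit might contain (this example is not taken from their curriculum), a few lines of Python suffice to simulate genetic drift in a finite population.

```python
import random

def wright_fisher(pop_size, freq_a, generations, seed=0):
    """Allele frequency change under pure genetic drift: each generation,
    the next population is a random (binomial) sample of the current one."""
    rng = random.Random(seed)
    trajectory = [freq_a]
    for _ in range(generations):
        count_a = sum(rng.random() < freq_a for _ in range(pop_size))
        freq_a = count_a / pop_size
        trajectory.append(freq_a)
    return trajectory

# A small population drifts quickly; a large one holds its frequency longer
print(wright_fisher(pop_size=20, freq_a=0.5, generations=10))
print(wright_fisher(pop_size=2000, freq_a=0.5, generations=10))
```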

  17. FY05-FY06 Advanced Simulation and Computing Implementation Plan, Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    Baron, A L

    2004-07-19

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapon design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile life extension programs and the resolution of significant finding investigations (SFIs). This requires a balanced system of technical staff, hardware, simulation software, and computer science solutions.

  18. Computer-aided software understanding systems to enhance confidence of scientific codes

    International Nuclear Information System (INIS)

    A unique characteristic of nuclear waste disposal is the very long time span over which the combined engineered and natural containment system must remain effective: hundreds of thousands of years. Since there is no precedent in human history for such an endeavour, simulation with the use of computers is the only means we have of forecasting possible future outcomes quantitatively. The need for reliable models and software to make such forecasts so far into the future is obvious. One of the critical elements necessary to ensure reliability is the degree of reviewability of the computer program. Among others, there are two very important reasons for this. Firstly, if there is to be any chance at all of validating the conceptual models as implemented by the computer code, peer reviewers must be able to see and understand what the program is doing. It is all but impossible to achieve this understanding just by looking at the code, owing to possible unfamiliarity with the language and, often, to the length and complexity of the code. Secondly, a thorough understanding of the code is also necessary to carry out code maintenance activities, which include, among others, error detection, error correction, and code modification to enhance the code's performance or functionality or to adapt it to a changed environment. The emerging concepts of computer-aided software understanding and reverse engineering can answer precisely these needs. This paper will discuss the role they can play in enhancing the confidence one has in computer codes, and several examples will be provided. Finally, a brief discussion of combining state-of-the-art forward engineering systems with reverse engineering systems will show how powerfully they can contribute to the overall quality assurance of a computer program. (13 refs., 7 figs.)
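
    As a small, generic illustration of computer-aided software understanding (not one of the systems discussed in the paper), Python's standard ast module can recover a function-level call map from source code, giving a reviewer a structural overview without reading every line:

```python
import ast

SOURCE = """
def load(path):
    return open(path).read()

def run(path):
    data = load(path)
    return len(data)
"""

def call_map(source):
    """Map each top-level function to the names it calls -- a tiny structural view."""
    tree = ast.parse(source)
    result = {}
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            result[node.name] = [
                child.func.id
                for child in ast.walk(node)
                if isinstance(child, ast.Call) and isinstance(child.func, ast.Name)
            ]
    return result

print(call_map(SOURCE))  # {'load': ['open'], 'run': ['load', 'len']}
```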

  19. The Impact of Misspelled Words on Automated Computer Scoring: A Case Study of Scientific Explanations

    Science.gov (United States)

    Ha, Minsu; Nehm, Ross H.

    2016-01-01

    Automated computerized scoring systems (ACSSs) are being increasingly used to analyze text in many educational settings. Nevertheless, the impact of misspelled words (MSW) on scoring accuracy remains to be investigated in many domains, particularly jargon-rich disciplines such as the life sciences. Empirical studies confirm that MSW are a pervasive feature of human-generated text and that despite improvements, spell-check and auto-replace programs continue to be characterized by significant errors. Our study explored four research questions relating to MSW and text-based computer assessments: (1) Do English language learners (ELLs) produce equivalent magnitudes and types of spelling errors as non-ELLs? (2) To what degree do MSW impact concept-specific computer scoring rules? (3) What impact do MSW have on computer scoring accuracy? and (4) Are MSW more likely to impact false-positive or false-negative feedback to students? We found that although ELLs produced twice as many MSW as non-ELLs, MSW were relatively uncommon in our corpora. The MSW in the corpora were found to be important features of the computer scoring models. Although MSW did not significantly or meaningfully impact computer scoring efficacy across nine different computer scoring models, MSW had a greater impact on the scoring algorithms for naïve ideas than key concepts. Linguistic and concept redundancy in student responses explains the weak connection between MSW and scoring accuracy. Lastly, we found that MSW tend to have a greater impact on false-positive feedback. We discuss the implications of these findings for the development of next-generation science assessments.
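
    As a toy illustration of why misspellings matter for rule-based scoring (this is not one of the scoring models evaluated in the study), a keyword rule that looks for "mutation" silently misses the misspelled "mutashun" and returns a false negative:

```python
import re

def score_explanation(text):
    """Toy concept-detection rule: credit a response that mentions key concepts."""
    key_concepts = {
        "variation": r"\bvariation\b",
        "mutation": r"\bmutations?\b",
        "selection": r"\bselection\b",
    }
    return {concept: bool(re.search(pattern, text.lower()))
            for concept, pattern in key_concepts.items()}

correct = "Random mutation produces variation, and natural selection acts on it."
misspelled = "Random mutashun produces variation, and natural selection acts on it."

print(score_explanation(correct))     # all three concepts detected
print(score_explanation(misspelled))  # 'mutation' missed -> false negative
```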
