WorldWideScience

Sample records for scientific computing mcqmc98

  1. International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics

    CERN Document Server

    DEVELOPMENTS IN RELIABLE COMPUTING

    1999-01-01

    The SCAN conference, the International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, takes place biannually under the joint auspices of GAMM (Gesellschaft für Angewandte Mathematik und Mechanik) and IMACS (International Association for Mathematics and Computers in Simulation). SCAN-98 attracted more than 100 participants from 21 countries all over the world. During the four days from September 22 to 25, nine highlighted plenary lectures and over 70 contributed talks were given. These figures indicate a large participation, which was partly due to the attraction of the organizing country, Hungary, but the effective support system also contributed to the success. The conference was substantially supported by the Hungarian Research Fund OTKA, GAMM, the National Technology Development Board OMFB and by the József Attila University. Due to this funding, it was possible to subsidize the participation of over 20 scientists, mainly from Eastern European countries. I...

  2. Scientific computer simulation review

    International Nuclear Information System (INIS)

    Kaizer, Joshua S.; Heller, A. Kevin; Oberkampf, William L.

    2015-01-01

    Before the results of a scientific computer simulation are used for any purpose, it should be determined if those results can be trusted. Answering that question of trust is the domain of scientific computer simulation review. There is limited literature that focuses on simulation review, and most is specific to the review of a particular type of simulation. This work is intended to provide a foundation for a common understanding of simulation review. This is accomplished through three contributions. First, scientific computer simulation review is formally defined. This definition identifies the scope of simulation review and provides the boundaries of the review process. Second, maturity assessment theory is developed. This development clarifies the concepts of maturity criteria, maturity assessment sets, and maturity assessment frameworks, which are essential for performing simulation review. Finally, simulation review is described as the application of a maturity assessment framework. This is illustrated through evaluating a simulation review performed by the U.S. Nuclear Regulatory Commission. In making these contributions, this work provides a means for a more objective assessment of a simulation’s trustworthiness and takes the next step in establishing scientific computer simulation review as its own field. - Highlights: • We define scientific computer simulation review. • We develop maturity assessment theory. • We formally define a maturity assessment framework. • We describe simulation review as the application of a maturity framework. • We provide an example of a simulation review using a maturity framework

  3. Pascal-SC a computer language for scientific computation

    CERN Document Server

    Bohlender, Gerd; von Gudenberg, Jürgen Wolff; Rheinboldt, Werner; Siewiorek, Daniel

    1987-01-01

    Perspectives in Computing, Vol. 17: Pascal-SC: A Computer Language for Scientific Computation focuses on the application of Pascal-SC, a programming language developed as an extension of standard Pascal, in scientific computation. The publication first elaborates on the introduction to Pascal-SC, a review of standard Pascal, and real floating-point arithmetic. Discussions focus on optimal scalar product, standard functions, real expressions, program structure, simple extensions, real floating-point arithmetic, vector and matrix arithmetic, and dynamic arrays. The text then examines functions a

  4. Problems of Chernobyl. Materials of International scientific and practical conference 'Shelter-98'

    International Nuclear Information System (INIS)

    Klyuchnikov, O.O.

    1999-01-01

    These transactions contain materials of the International Scientific and Practical Conference 'Shelter-98', which was held 27-30 November 1998 in Slavutich. They describe the results of research work by specialists from Ukraine and from neighbouring and more distant foreign countries, targeted at solving the problems of converting the Shelter Object into an ecologically safe state.

  5. High-End Scientific Computing

    Science.gov (United States)

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  6. Practical scientific computing

    CERN Document Server

    Muhammad, A

    2011-01-01

    Scientific computing is about developing mathematical models, numerical methods and computer implementations to study and solve real problems in science, engineering, business and even social sciences. Mathematical modelling requires deep understanding of classical numerical methods. This essential guide provides the reader with sufficient foundations in these areas to venture into more advanced texts. The first section of the book presents numEclipse, an open source tool for numerical computing based on the notion of MATLAB®. numEclipse is implemented as a plug-in for Eclipse, a leading integ

  7. Problems of Chernobyl. Materials of International scientific and practical conference 'Shelter-98'; Problemi Chornobilya. Materyiali Myizhnarodnoyi Naukovo-Praktichnoyi Konferentsyiyi 'Ukrittya-98'

    Energy Technology Data Exchange (ETDEWEB)

    Klyuchnikov, O.O. [ed.]

    1999-07-01

    These transactions contain materials of the International Scientific and Practical Conference 'Shelter-98', which was held 27-30 November 1998 in Slavutich. They describe the results of research work by specialists from Ukraine and from neighbouring and more distant foreign countries, targeted at solving the problems of converting the Shelter Object into an ecologically safe state.

  8. Exploring HPCS languages in scientific computing

    International Nuclear Information System (INIS)

    Barrett, R F; Alam, S R; Almeida, V F d; Bernholdt, D E; Elwasif, W R; Kuehn, J A; Poole, S W; Shet, A G

    2008-01-01

    As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and exhibit increased heterogeneity, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features, and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else.
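
    The listing above contains no sample code; as a very loose Python analogue (illustrative, assumed names only, not HPCS-language syntax) of the data-parallel constructs that Chapel, Fortress, and X10 provide as language features, one might write:

      # Rough analogue of a data-parallel "forall": apply a kernel to every
      # element using a pool of worker processes. Illustrative sketch only.
      from concurrent.futures import ProcessPoolExecutor

      def kernel(x):
          # Placeholder per-element work; real simulations do far more here.
          return 0.5 * x + 1.0

      def parallel_map(values, workers=4):
          with ProcessPoolExecutor(max_workers=workers) as pool:
              return list(pool.map(kernel, values, chunksize=1024))

      if __name__ == "__main__":
          data = [float(i) for i in range(10_000)]
          print(parallel_map(data)[:3])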

  9. Exploring HPCS languages in scientific computing

    Science.gov (United States)

    Barrett, R. F.; Alam, S. R.; Almeida, V. F. d.; Bernholdt, D. E.; Elwasif, W. R.; Kuehn, J. A.; Poole, S. W.; Shet, A. G.

    2008-07-01

    As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and exhibit increased heterogeneity, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features, and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else.

  10. Computers and Computation. Readings from Scientific American.

    Science.gov (United States)

    Fenichel, Robert R.; Weizenbaum, Joseph

    A collection of articles from "Scientific American" magazine has been put together at this time because the current period in computer science is one of consolidation rather than innovation. A few years ago, computer science was moving so swiftly that even the professional journals were more archival than informative; but today it is…

  11. Scientific computing

    CERN Document Server

    Trangenstein, John A

    2017-01-01

    This is the third of three volumes providing a comprehensive presentation of the fundamentals of scientific computing. This volume discusses topics that depend more on calculus than linear algebra, in order to prepare the reader for solving differential equations. This book and its companions show how to determine the quality of computational results, and how to measure the relative efficiency of competing methods. Readers learn how to determine the maximum attainable accuracy of algorithms, and how to select the best method for computing problems. This book also discusses programming in several languages, including C++, Fortran and MATLAB. There are 90 examples, 200 exercises, 36 algorithms, 40 interactive JavaScript programs, 91 references to software programs and 1 case study. Topics are introduced with goals, literature references and links to public software. There are descriptions of the current algorithms in GSLIB and MATLAB. This book could be used for a second course in numerical methods, for either ...

  12. XVIS: Visualization for the Extreme-Scale Scientific-Computation Ecosystem Final Scientific/Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States); Maynard, Robert [Kitware, Inc., Clifton Park, NY (United States)

    2017-10-27

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from the predominant DOE projects for visualization on accelerators, combining their respective features into a new visualization toolkit called VTK-m.

  13. RXY/DRXY-a postprocessing graphical system for scientific computation

    International Nuclear Information System (INIS)

    Jin Qijie

    1990-01-01

    Scientific computing requires computer graphics capabilities for visualizing its results. The design objectives and functions of a postprocessing graphical system for scientific computation are described, and its implementation is briefly outlined.
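
    The record gives no details of RXY/DRXY itself; as a generic illustration of what a postprocessing graphical step amounts to, the following Python/matplotlib sketch plots simulation output, where the file name and two-column layout are assumptions made only for this example.

      # Generic post-processing sketch: read tabular output and plot it.
      # Nothing here is specific to RXY/DRXY; "results.dat" is assumed to
      # hold two columns (x and a computed quantity).
      import numpy as np
      import matplotlib.pyplot as plt

      data = np.loadtxt("results.dat")
      x, y = data[:, 0], data[:, 1]

      plt.plot(x, y, marker="o", linestyle="-")
      plt.xlabel("x")
      plt.ylabel("computed quantity")
      plt.title("Post-processing of scientific computation output")
      plt.savefig("results.png", dpi=150)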

  14. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  15. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  16. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  17. Highly parallel machines and future of scientific computing

    International Nuclear Information System (INIS)

    Singh, G.S.

    1992-01-01

    The computing requirements of large-scale scientific computing have always been ahead of what state-of-the-art hardware could supply in the form of the supercomputers of the day, and for any single-processor system the limit to the increase in computing power was recognized a few years back. Now, with the advent of parallel computing systems, the availability of machines with the required computing power seems a reality. In this paper the author tries to visualize the future of large-scale scientific computing in the penultimate decade of the present century. The author summarizes trends in parallel computers and emphasizes the need for a better programming environment and software tools for optimal performance. The author concludes this paper with a critique of parallel architectures, software tools and algorithms. (author). 10 refs., 2 tabs

  18. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    Full Text Available There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with the equational programming language (EPL). Our approach is based on program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  19. Scientific computing an introduction using Maple and Matlab

    CERN Document Server

    Gander, Walter; Kwok, Felix

    2014-01-01

    Scientific computing is the study of how to use computers effectively to solve problems that arise from the mathematical modeling of phenomena in science and engineering. It is based on mathematics, numerical and symbolic/algebraic computations and visualization. This book serves as an introduction to both the theory and practice of scientific computing, with each chapter presenting the basic algorithms that serve as the workhorses of many scientific codes; we explain both the theory behind these algorithms and how they must be implemented in order to work reliably in finite-precision arithmetic. The book includes many programs written in Matlab and Maple – Maple is often used to derive numerical algorithms, whereas Matlab is used to implement them. The theory is developed in such a way that students can learn by themselves as they work through the text. Each chapter contains numerous examples and problems to help readers understand the material “hands-on”.
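
    The book's emphasis on implementing algorithms reliably in finite-precision arithmetic can be illustrated with a standard textbook example (not taken from the book itself): naive summation loses accuracy when many small terms are added, while Kahan compensated summation recovers most of it.

      # Finite-precision illustration: naive vs. Kahan (compensated) summation.
      def naive_sum(values):
          s = 0.0
          for v in values:
              s += v
          return s

      def kahan_sum(values):
          s, c = 0.0, 0.0            # running sum and compensation term
          for v in values:
              y = v - c
              t = s + y
              c = (t - s) - y        # low-order bits lost when forming t
              s = t
          return s

      values = [0.1] * 10_000_000
      print(naive_sum(values))       # drifts noticeably from 1.0e6
      print(kahan_sum(values))       # much closer to 1.0e6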

  20. Introduction to the LaRC central scientific computing complex

    Science.gov (United States)

    Shoosmith, John N.

    1993-01-01

    The computers and associated equipment that make up the Central Scientific Computing Complex of the Langley Research Center are briefly described. The electronic networks that provide access to the various components of the complex and a number of areas that can be used by Langley and contractor staff for special applications (scientific visualization, image processing, software engineering, and grid generation) are also described. Flight simulation facilities that use the central computers are described. Management of the complex, procedures for its use, and available services and resources are discussed. This document is intended for new users of the complex, for current users who wish to keep apprised of changes, and for visitors who need to understand the role of central scientific computers at Langley.

  1. Scientific Discovery through Advanced Computing in Plasma Science

    Science.gov (United States)

    Tang, William

    2005-03-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's ``Scientific Discovery through Advanced Computing'' (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in super-computing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations

  2. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  3. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  4. Scientific applications of symbolic computation

    International Nuclear Information System (INIS)

    Hearn, A.C.

    1976-02-01

    The use of symbolic computation systems for problem solving in scientific research is reviewed. The nature of the field is described, and particular examples are considered from celestial mechanics, quantum electrodynamics and general relativity. Symbolic integration and some more recent applications of algebra systems are also discussed.

  5. FPS scientific and supercomputers computers in chemistry

    International Nuclear Information System (INIS)

    Curington, I.J.

    1987-01-01

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown

  6. HPCToolkit: performance tools for scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M [Department of Computer Science, Rice University, Houston, TX 77005 (United States)

    2008-07-15

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei.

  7. HPCToolkit: performance tools for scientific computing

    International Nuclear Information System (INIS)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M

    2008-01-01

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei

  8. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    Energy Technology Data Exchange (ETDEWEB)

    Hules, J. [ed.]

    1996-11-01

    National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  9. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
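
    The SCC toolset itself is not reproduced in this record; as a rough sketch of the lowest-level step behind "automatic creation of virtual clusters" on EC2, the following Python/boto3 snippet launches a group of identical worker instances, where the AMI ID, key pair, and instance type are placeholders and the real toolset does considerably more (networking, monitoring, I/O utilities).

      # Minimal sketch: launch several identical EC2 instances as cluster nodes.
      # ImageId, KeyName, and InstanceType below are placeholders only.
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")
      response = ec2.run_instances(
          ImageId="ami-0123456789abcdef0",   # placeholder scientific VM image
          InstanceType="c5.xlarge",          # placeholder instance type
          KeyName="my-keypair",              # placeholder SSH key pair
          MinCount=4,
          MaxCount=4,
      )
      instance_ids = [i["InstanceId"] for i in response["Instances"]]
      print("Launched cluster nodes:", instance_ids)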

  10. Scientific Computing in Electrical Engineering

    CERN Document Server

    Amrhein, Wolfgang; Zulehner, Walter

    2018-01-01

    This collection of selected papers presented at the 11th International Conference on Scientific Computing in Electrical Engineering (SCEE), held in St. Wolfgang, Austria, in 2016, showcases the state of the art in SCEE. The aim of the SCEE 2016 conference was to bring together scientists from academia and industry, mathematicians, electrical engineers, computer scientists, and physicists, and to promote intensive discussions on industrially relevant mathematical problems, with an emphasis on the modeling and numerical simulation of electronic circuits and devices, electromagnetic fields, and coupled problems. The focus in methodology was on model order reduction and uncertainty quantification. This extensive reference work is divided into six parts: Computational Electromagnetics, Circuit and Device Modeling and Simulation, Coupled Problems and Multi‐Scale Approaches in Space and Time, Mathematical and Computational Methods Including Uncertainty Quantification, Model Order Reduction, and Industrial Applicat...

  11. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of weft connecting each realm of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was prepared in serial form. This is the fourth issue, showing the overview of scientific computational methods with the introduction of continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed based on processes such as binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method and dislocation dynamics. (T. Tanaka)
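
    Of the simulation methods listed, molecular dynamics is the simplest to sketch; the velocity-Verlet loop below, for a one-dimensional harmonic oscillator in Python, is a generic illustration rather than material from the lecture series, and is far simpler than radiation-damage simulations in practice.

      # Minimal molecular-dynamics illustration: velocity-Verlet integration
      # of a 1D harmonic oscillator.
      def force(x, k=1.0):
          return -k * x                      # harmonic restoring force

      def velocity_verlet(x, v, dt=0.01, steps=1000, m=1.0):
          a = force(x) / m
          for _ in range(steps):
              x += v * dt + 0.5 * a * dt * dt
              a_new = force(x) / m
              v += 0.5 * (a + a_new) * dt
              a = a_new
          return x, v

      print(velocity_verlet(1.0, 0.0))       # position and velocity after t = 10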

  12. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  13. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly

  14. Software Defects, Scientific Computation and the Scientific Method

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    Computation has rapidly grown in the last 50 years so that in many scientific areas it is the dominant partner in the practice of science. Unfortunately, unlike the experimental sciences, it does not adhere well to the principles of the scientific method as espoused by, for example, the philosopher Karl Popper. Such principles are built around the notions of deniability and reproducibility. Although much research effort has been spent on measuring the density of software defects, much less has been spent on the more difficult problem of measuring their effect on the output of a program. This talk explores these issues with numerous examples suggesting how this situation might be improved to match the demands of modern science. Finally it develops a theoretical model based on an amalgam of statistical mechanics and Hartley/Shannon information theory which suggests that software systems have strong implementation independent behaviour and supports the widely observed phenomenon that defects clust...

  15. Software Engineering for Scientific Computer Simulations

    Science.gov (United States)

    Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.

    2004-11-01

    Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.

  16. A Computing Environment to Support Repeatable Scientific Big Data Experimentation of World-Wide Scientific Literature

    Energy Technology Data Exchange (ETDEWEB)

    Schlicher, Bob G [ORNL; Kulesz, James J [ORNL; Abercrombie, Robert K [ORNL; Kruse, Kara L [ORNL

    2015-01-01

    A principal tenet of the scientific method is that experiments must be repeatable, relying on ceteris paribus (i.e., all other things being equal). As a scientific community involved in data sciences, we must investigate ways to establish an environment where experiments can be repeated. We can no longer allude to where the data comes from; we must add rigor to the data collection and management process from which our analysis is conducted. This paper describes a computing environment to support repeatable scientific big data experimentation of world-wide scientific literature, and recommends a system that is housed at the Oak Ridge National Laboratory in order to provide value to investigators from government agencies, academic institutions, and industry entities. The described computing environment also adheres to the recently instituted digital data management plan mandated by multiple US government agencies, which involves all stages of the digital data life cycle including capture, analysis, sharing, and preservation. It particularly focuses on the sharing and preservation of digital research data. The details of this computing environment are explained within the context of cloud services by the three-layer classification of Software as a Service, Platform as a Service, and Infrastructure as a Service.

  17. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment, which is being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  18. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Peisert, Sean [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States); Potok, Thomas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jones, Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-03

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long term (10 to +20 year) cybersecurity fundamental basic research and development challenges, strategies and roadmap facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the

  19. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Subcommittee Report on Scientific and Technical Information

    Energy Technology Data Exchange (ETDEWEB)

    Hey, Tony [eScience Institute, University of Washington; Agarwal, Deborah [Lawrence Berkeley National Laboratory; Borgman, Christine [University of California, Los Angeles; Cartaro, Concetta [SLAC National Accelerator Laboratory; Crivelli, Silvia [Lawrence Berkeley National Laboratory; Van Dam, Kerstin Kleese [Pacific Northwest National Laboratory; Luce, Richard [University of Oklahoma; Arjun, Shankar [CADES, Oak Ridge National Laboratory; Trefethen, Anne [University of Oxford; Wade, Alex [Microsoft Research, Microsoft Corporation; Williams, Dean [Lawrence Livermore National Laboratory

    2015-09-04

    The Advanced Scientific Computing Advisory Committee (ASCAC) was charged to form a standing subcommittee to review the Department of Energy’s Office of Scientific and Technical Information (OSTI) and to begin by assessing the quality and effectiveness of OSTI’s recent and current products and services and to comment on its mission and future directions in the rapidly changing environment for scientific publication and data. The Committee met with OSTI staff and reviewed available products, services and other materials. This report summarizes their initial findings and recommendations.

  20. The Potential of the Cell Processor for Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel; Shalf, John; Oliker, Leonid; Husbands, Parry; Kamil, Shoaib; Yelick, Katherine

    2005-10-14

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the forthcoming STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. We are the first to present quantitative Cell performance data on scientific kernels and show direct comparisons against leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1) architectures. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop both analytical models and simulators to predict kernel performance. Our work also explores the complexity of mapping several important scientific algorithms onto the Cell's unique architecture. Additionally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  1. The Y2K program for scientific-analysis computer programs at AECL

    International Nuclear Information System (INIS)

    Popovic, J.; Gaver, C.; Chapman, D.

    1999-01-01

    The evaluation of scientific-analysis computer programs for year-2000 compliance is part of AECL's year-2000 (Y2K) initiative, which addresses both the infrastructure systems at AECL and AECL's products and services. This paper describes the Y2K-compliance program for scientific-analysis computer codes. This program involves the integrated evaluation of the computer hardware, middleware, and third-party software in addition to the scientific codes developed in-house. The project involves several steps: the assessment of the scientific computer programs for Y2K compliance, performing any required corrective actions, porting the programs to Y2K-compliant platforms, and verification of the programs after porting. Some programs or program versions, deemed no longer required in the year 2000 and beyond, will be retired and archived. (author)

  2. The Y2K program for scientific-analysis computer programs at AECL

    International Nuclear Information System (INIS)

    Popovic, J.; Gaver, C.; Chapman, D.

    1999-01-01

    The evaluation of scientific analysis computer programs for year-2000 compliance is part of AECL's year-2000 (Y2K) initiative, which addresses both the infrastructure systems at AECL and AECL's products and services. This paper describes the Y2K-compliance program for scientific-analysis computer codes. This program involves the integrated evaluation of the computer hardware, middleware, and third-party software in addition to the scientific codes developed in-house. The project involves several steps: the assessment of the scientific computer programs for Y2K compliance, performing any required corrective actions, porting the programs to Y2K-compliant platforms, and verification of the programs after porting. Some programs or program versions, deemed no longer required in the year 2000 and beyond, will be retired and archived. (author)

  3. The 12-th INS scientific computational programs

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This issue is a collection of papers on INS scientific computational programs. Separate abstracts were presented for 3 of the papers in this report. The remaining 5 were considered outside the subject scope of INIS. (J.P.N.)

  4. Educational NASA Computational and Scientific Studies (enCOMPASS)

    Science.gov (United States)

    Memarsadeghi, Nargess

    2013-01-01

    Educational NASA Computational and Scientific Studies (enCOMPASS) is an educational project of NASA Goddard Space Flight Center aimed at bridging the gap between computational objectives and needs of NASA's scientific research, missions, and projects, and academia's latest advances in applied mathematics and computer science. enCOMPASS achieves this goal via bidirectional collaboration and communication between NASA and academia. Using developed NASA Computational Case Studies in university computer science/engineering and applied mathematics classes is a way of addressing NASA's goals of contributing to the Science, Technology, Education, and Math (STEM) National Objective. The enCOMPASS Web site at http://encompass.gsfc.nasa.gov provides additional information. There are currently nine enCOMPASS case studies developed in areas of earth sciences, planetary sciences, and astrophysics. Some of these case studies have been published in AIP and IEEE's Computing in Science and Engineering magazines. A few university professors have used enCOMPASS case studies in their computational classes and contributed their findings to NASA scientists. In these case studies, after introducing the science area, the specific problem, and related NASA missions, students are first asked to solve a known problem using NASA data and past approaches used and often published in a scientific/research paper. Then, after learning about the NASA application and related computational tools and approaches for solving the proposed problem, students are given a harder problem as a challenge for them to research and develop solutions for. This project provides a model for NASA scientists and engineers on one side, and university students, faculty, and researchers in computer science and applied mathematics on the other side, to learn from each other's areas of work, computational needs and solutions, and the latest advances in research and development. This innovation takes NASA science and

  5. Scientific Computing and Apple's Intel Transition

    CERN Document Server

    CERN. Geneva

    2006-01-01

    Intel's published processor roadmap and how it may affect the future of personal and scientific computing About the speaker: Eric Albert is Senior Software Engineer in Apple's Core Technologies group. During Mac OS X's transition to Intel processors he has worked on almost every part of the operating system, from the OS kernel and compiler tools to appli...

  6. Advanced scientific computational methods and their applications to nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of weft connecting each realm of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was prepared in serial form. This is the first issue, showing their overview and an introduction of continuum simulation methods. The finite element method, as one of their applications, is also reviewed. (T. Tanaka)

  7. Scientific Computing Kernels on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
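
    Of the kernels named above, sparse matrix-vector multiply is the easiest to show compactly; the plain-Python CSR sketch below is purely illustrative and contains nothing Cell-specific (real kernels are written against SIMD widths and local-store limits).

      # Sparse matrix-vector multiply in compressed sparse row (CSR) form.
      def csr_matvec(data, indices, indptr, x):
          n_rows = len(indptr) - 1
          y = [0.0] * n_rows
          for i in range(n_rows):
              for k in range(indptr[i], indptr[i + 1]):
                  y[i] += data[k] * x[indices[k]]
          return y

      # 2x2 example: [[2, 0], [1, 3]] times [1, 1] gives [2, 4]
      print(csr_matvec([2.0, 1.0, 3.0], [0, 0, 1], [0, 1, 3], [1.0, 1.0]))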

  8. Learning SciPy for numerical and scientific computing

    CERN Document Server

    Silva

    2013-01-01

    A step-by-step practical tutorial with plenty of examples on research-based problems from various areas of science, that prove how simple, yet effective, it is to provide solutions based on SciPy. This book is targeted at anyone with basic knowledge of Python, a somewhat advanced command of mathematics/physics, and an interest in engineering or scientific applications---this is broadly what we refer to as scientific computing. This book will be of critical importance to programmers and scientists who have basic Python knowledge and would like to be able to do scientific and numerical computatio
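
    As a flavor of the kind of problem such a tutorial covers (this particular example is not from the book), a single SciPy call can evaluate an integral numerically and report an error estimate.

      # Numerically integrate exp(-x^2) over [0, inf); the exact value is sqrt(pi)/2.
      import numpy as np
      from scipy.integrate import quad

      value, abs_error = quad(lambda x: np.exp(-x**2), 0.0, np.inf)
      print(value, abs_error)        # approximately 0.8862269...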

  9. Scientific Computing in the CH Programming Language

    Directory of Open Access Journals (Sweden)

    Harry H. Cheng

    1993-01-01

    Full Text Available We have developed a general-purpose block-structured interpretive programming language. The syntax and semantics of this language, called CH, are similar to C. CH retains most features of C from the scientific computing point of view. In this paper, the extension of C to CH for numerical computation of real numbers will be described. Metanumbers of −0.0, 0.0, Inf, −Inf, and NaN are introduced in CH. Through these metanumbers, the power of the IEEE 754 arithmetic standard is easily available to the programmer. These metanumbers are extended to commonly used mathematical functions in the spirit of the IEEE 754 standard and ANSI C. Rules for manipulation of these metanumbers in I/O; arithmetic, relational, and logic operations; and built-in polymorphic mathematical functions are defined. The capabilities of bitwise, assignment, address and indirection, increment and decrement, as well as type conversion operations in ANSI C are extended in CH. In this paper, mainly new linguistic features of CH in comparison to C will be described. Example programs programmed in CH with metanumbers and polymorphic mathematical functions will demonstrate capabilities of CH in scientific computing.
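
    CH itself is not shown in this record, but the IEEE 754 metanumber behavior it exposes (Inf, -Inf, NaN, and signed zeros propagating through arithmetic and library functions) exists in any IEEE 754 language; the Python sketch below illustrates the same semantics.

      # IEEE 754 metanumber behavior, shown in Python rather than CH.
      import math

      inf, nan = math.inf, math.nan
      print(1.0 / inf)                   # 0.0   (finite / Inf -> 0)
      print(inf - inf)                   # nan   (undefined operation -> NaN)
      print(nan == nan)                  # False (NaN is unordered, even with itself)
      print(math.copysign(1.0, -0.0))    # -1.0  (negative zero keeps its sign bit)
      print(math.atan2(0.0, -0.0))       # pi    (-0.0 is distinguishable here)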

  10. Research initiatives for plug-and-play scientific computing

    International Nuclear Information System (INIS)

    McInnes, Lois Curfman; Dahlgren, Tamara; Nieplocha, Jarek; Bernholdt, David; Allan, Ben; Armstrong, Rob; Chavarria, Daniel; Elwasif, Wael; Gorton, Ian; Kenny, Joe; Krishan, Manoj; Malony, Allen; Norris, Boyana; Ray, Jaideep; Shende, Sameer

    2007-01-01

    This paper introduces three component technology initiatives within the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS) that address ever-increasing productivity challenges in creating, managing, and applying simulation software to scientific discovery. By leveraging the Common Component Architecture (CCA), a new component standard for high-performance scientific computing, these initiatives tackle difficulties at different but related levels in the development of component-based scientific software: (1) deploying applications on massively parallel and heterogeneous architectures, (2) investigating new approaches to the runtime enforcement of behavioral semantics, and (3) developing tools to facilitate dynamic composition, substitution, and reconfiguration of component implementations and parameters, so that application scientists can explore tradeoffs among factors such as accuracy, reliability, and performance

  11. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pugmire, David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rogers, David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Childs, Hank [Univ. of Oregon, Eugene, OR (United States); Ma, Kwan-Liu [Univ. of California, Davis, CA (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States)

    2017-10-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  12. Performance evaluation of scientific programs on advanced architecture computers

    International Nuclear Information System (INIS)

    Walker, D.W.; Messina, P.; Baille, C.F.

    1988-01-01

    Recently a number of advanced architecture machines have become commercially available. These new machines promise better cost-performance than traditional computers, and some of them have the potential of competing with current supercomputers, such as the Cray X-MP, in terms of maximum performance. This paper describes an on-going project to evaluate a broad range of advanced architecture computers using a number of complete scientific application programs. The computers to be evaluated include distributed-memory machines such as the NCUBE, Intel and Caltech/JPL hypercubes and the Meiko Computing Surface; shared-memory, bus architecture machines such as the Sequent Balance and the Alliant; very long instruction word machines such as the Multiflow Trace 7/200 computer; traditional supercomputers such as the Cray X-MP and Cray-2; and SIMD machines such as the Connection Machine. Currently 11 application codes from a number of scientific disciplines have been selected, although it is not intended to run all codes on all machines. Results are presented for two of the codes (QCD and missile tracking), and future work is proposed.

  13. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    Science.gov (United States)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has evolved widely over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific open source or industrial grid products; rather, it comprises a set of capabilities that can reside within virtually any kind of software to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active application field of grid computing is the full virtualization of scientific instruments in order to increase their availability and decrease operational and maintenance costs. Computational and information grids make it possible to manage real-world objects in a service-oriented way using widely adopted industrial standards.

  14. Computer network access to scientific information systems for minority universities

    Science.gov (United States)

    Thomas, Valerie L.; Wakim, Nagi T.

    1993-08-01

    The evolution of computer networking technology has led to the establishment of a massive networking infrastructure which interconnects various types of computing resources at many government, academic, and corporate institutions. A large segment of this infrastructure has been developed to facilitate information exchange and resource sharing within the scientific community. The National Aeronautics and Space Administration (NASA) supports both the development and the application of computer networks which provide its community with access to many valuable multi-disciplinary scientific information systems and on-line databases. Recognizing the need to extend the benefits of this advanced networking technology to the under-represented community, the National Space Science Data Center (NSSDC) in the Space Data and Computing Division at the Goddard Space Flight Center has developed the Minority University-Space Interdisciplinary Network (MU-SPIN) Program: a major networking and education initiative for Historically Black Colleges and Universities (HBCUs) and Minority Universities (MUs). In this paper, we will briefly explain the various components of the MU-SPIN Program while highlighting how, by providing access to scientific information systems and on-line data, it promotes a higher level of collaboration among faculty, students, and NASA scientists.

  15. JINR CLOUD SERVICE FOR SCIENTIFIC AND ENGINEERING COMPUTATIONS

    Directory of Open Access Journals (Sweden)

    Nikita A. Balashov

    2018-03-01

    Full Text Available Small research groups often lack access to computational resources powerful enough for their research work to be productive. Global computational infrastructures used by large scientific collaborations can be challenging for small research teams because of bureaucratic overhead as well as the complexity of the underlying tools. Some researchers buy a set of powerful servers to cover their own needs for computational resources. A drawback of such an approach is the need to provide a proper hosting environment and maintenance for this hardware, which requires a certain level of expertise. Moreover, such resources are often underutilized, because a researcher needs to spend a certain amount of time preparing computations and analyzing results, and does not always need the full capacity of modern multi-core servers. The JINR cloud team developed a service which provides scientists of small research groups from JINR and its Member State organizations with access to computational resources via a problem-oriented (i.e. application-specific) web interface. It allows a scientist to focus on the research domain by interacting with the service conveniently via a browser, abstracting away the underlying infrastructure and its maintenance. A user simply sets the required values for a job via the web interface and specifies a location for uploading the results. The computational workloads run on virtual machines deployed in the JINR cloud infrastructure.

  16. Ontology-Driven Discovery of Scientific Computational Entities

    Science.gov (United States)

    Brazier, Pearl W.

    2010-01-01

    Many geoscientists use modern computational resources, such as software applications, Web services, scientific workflows and datasets that are readily available on the Internet, to support their research and many common tasks. These resources are often shared via human contact and sometimes stored in data portals; however, they are not necessarily…

  17. Initial explorations of ARM processors for scientific computing

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad

    2014-01-01

    Power efficiency is becoming an ever more important metric for both high performance and high throughput computing. Over the course of the next decade it is expected that flops/watt will be a major driver for the evolution of computer architecture. Servers with large numbers of ARM processors, already ubiquitous in mobile computing, are a promising alternative to traditional x86-64 computing. We present the results of our initial investigations into the use of ARM processors for scientific computing applications. In particular we report the results from our work with a current-generation ARMv7 development board to explore ARM-specific issues regarding the software development environment, operating system, performance benchmarks and issues for porting High Energy Physics software.

  18. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  19. Python for Scientific Computing Education: Modeling of Queueing Systems

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2014-01-01

    Full Text Available In this paper, we present a methodology for introducing scientific computing that is based on model-centered learning. We propose multiphase queueing systems as a basis for learning objects. We use Python and parallel programming to implement the models, and we present the computer code and results of stochastic simulations.
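
    A single-phase M/M/1 queue is the simplest relative of the multiphase systems used in the paper; the sketch below (an illustrative reconstruction, not the authors' published code) estimates the mean waiting time by stochastic simulation and can be checked against the analytic result.

        # Stochastic simulation of an M/M/1 queue: mean time spent waiting in line.
        import random

        def mm1_mean_wait(arrival_rate, service_rate, n_customers=100_000, seed=1):
            random.seed(seed)
            t_arrival = server_free = total_wait = 0.0
            for _ in range(n_customers):
                t_arrival += random.expovariate(arrival_rate)   # next arrival time
                start = max(t_arrival, server_free)             # wait if server is busy
                total_wait += start - t_arrival
                server_free = start + random.expovariate(service_rate)
            return total_wait / n_customers

        # Theory: Wq = lambda / (mu * (mu - lambda)) = 0.5 / (1.0 * 0.5) = 1.0
        print(mm1_mean_wait(arrival_rate=0.5, service_rate=1.0))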

  20. Scientific computing with MATLAB and Octave

    CERN Document Server

    Quarteroni, Alfio; Gervasio, Paola

    2014-01-01

    This textbook is an introduction to Scientific Computing, in which several numerical methods for the computer-based solution of certain classes of mathematical problems are illustrated. The authors show how to compute the zeros, the extrema, and the integrals of continuous functions, solve linear systems, approximate functions using polynomials and construct accurate approximations for the solution of ordinary and partial differential equations. To make the format concrete and appealing, the programming environments Matlab and Octave are adopted as faithful companions. The book contains the solutions to several problems posed in exercises and examples, often originating from important applications. At the end of each chapter, a specific section is devoted to subjects which were not addressed in the book and contains bibliographical references for a more comprehensive treatment of the material. From the review: ".... This carefully written textbook, the third English edition, contains substantial new developme...

  1. Proceedings of the 8. Mediterranean Conference on Medical and Biological Engineering and Computing (Medicon '98)

    Energy Technology Data Exchange (ETDEWEB)

    Christofides, Stelios; Pattichis, Constantinos; Schizas, Christos; Keravnou-Papailiou, Elpida; Kaplanis, Prodromos; Spyros, Spyrou; Christodoulides, George; Theodoulou, Yiannis [eds.]

    1999-12-31

    Medicon '98 is the eighth in the series of regional meetings of the International Federation of Medical and Biological Engineering (IFMBE) in the Mediterranean. The goal of Medicon '98 is to provide updated information on the state of the art on medical and biological engineering and computing. Medicon '98 was held in Lemesos, Cyprus, between 14-17 June, 1998. The full papers of the proceedings were published on CD and consisted of 190 invited and submitted papers. A book of abstracts was also published in paper form and was available to all the participants. Twenty seven papers fall within the scope of INIS and are dealing with Nuclear Medicine, Computerized Tomography, Radiology, Radiotherapy, Magnetic Resonance Imaging and Personnel Dosimetry (eds).

  2. Proceedings of the 8. Mediterranean Conference on Medical and Biological Engineering and Computing (Medicon '98)

    International Nuclear Information System (INIS)

    Christofides, Stelios; Pattichis, Constantinos; Schizas, Christos; Keravnou-Papailiou, Elpida; Kaplanis, Prodromos; Spyros, Spyrou; Christodoulides, George; Theodoulou, Yiannis

    1998-01-01

    Medicon '98 is the eighth in the series of regional meetings of the International Federation of Medical and Biological Engineering (IFMBE) in the Mediterranean. The goal of Medicon '98 is to provide updated information on the state of the art on medical and biological engineering and computing. Medicon '98 was held in Lemesos, Cyprus, between 14-17 June, 1998. The full papers of the proceedings were published on CD and consisted of 190 invited and submitted papers. A book of abstracts was also published in paper form and was available to all the participants. Twenty seven papers fall within the scope of INIS and are dealing with Nuclear Medicine, Computerized Tomography, Radiology, Radiotherapy, Magnetic Resonance Imaging and Personnel Dosimetry (eds)

  3. RAPPORT: running scientific high-performance computing applications on the cloud.

    Science.gov (United States)

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  4. Data management, code deployment, and scientific visualization to enhance scientific discovery in fusion research through advanced computing

    International Nuclear Information System (INIS)

    Schissel, D.P.; Finkelstein, A.; Foster, I.T.; Fredian, T.W.; Greenwald, M.J.; Hansen, C.D.; Johnson, C.R.; Keahey, K.; Klasky, S.A.; Li, K.; McCune, D.C.; Peng, Q.; Stevens, R.; Thompson, M.R.

    2002-01-01

    The long-term vision of the Fusion Collaboratory described in this paper is to transform fusion research and accelerate scientific understanding and innovation so as to revolutionize the design of a fusion energy source. The Collaboratory will create and deploy collaborative software tools that will enable more efficient utilization of existing experimental facilities and more effective integration of experiment, theory, and modeling. The computer science research necessary to create the Collaboratory is centered on three activities: security, remote and distributed computing, and scientific visualization. It is anticipated that the presently envisioned Fusion Collaboratory software tools will require 3 years to complete

  5. Technologies for Large Data Management in Scientific Computing

    CERN Document Server

    Pace, A

    2014-01-01

    In recent years, intense usage of computing has been the main strategy of investigations in several scientific research projects. The progress in computing technology has opened unprecedented opportunities for systematic collection of experimental data and the associated analysis that were considered impossible only a few years ago. This paper focuses on the strategies in use: it reviews the various components that are necessary for an effective solution that ensures the storage, the long-term preservation, and the worldwide distribution of the large quantities of data required in a large scientific research project. The paper also mentions several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that need to be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.

  6. Scholarly literature and the press: scientific impact and social perception of physics computing

    CERN Document Server

    Pia, Maria Grazia; Bell, Zane W; Dressendorfer, Paul V

    2014-01-01

    The broad coverage of the search for the Higgs boson in the mainstream media is a relative novelty for high energy physics (HEP) research, whose achievements have traditionally been limited to scholarly literature. This paper illustrates the results of a scientometric analysis of HEP computing in scientific literature, institutional media and the press, and a comparative overview of similar metrics concerning representative particle physics measurements. The picture emerging from these scientometric data documents the scientific impact and social perception of HEP computing. The results of this analysis suggest that improved communication of the scientific and social role of HEP computing would be beneficial to the high energy physics community.

  7. Scientific computing in electrical engineering SCEE 2010

    Energy Technology Data Exchange (ETDEWEB)

    Michielsen, Bastiaan [Office National d' Etudes et de Recherches Aerospatiales (ONERA), 31 - Toulouse (France); Poirier, Jean-Rene (eds.) [LAPLACE-ENSEEIHT, Toulouse (France)

    2012-07-01

    Selected from papers presented at the 8th Scientific Computation in Electrical Engineering conference in Toulouse in 2010, the contributions to this volume cover every angle of numerically modelling electronic and electrical systems, including computational electromagnetics, circuit theory and simulation and device modelling. On computational electromagnetics, the chapters examine cutting-edge material ranging from low-frequency electrical machine modelling problems to issues in high-frequency scattering. Regarding circuit theory and simulation, the book details the most advanced techniques for modelling networks with many thousands of components. Modelling devices at microscopic levels is covered by a number of fundamental mathematical physics papers, while numerous papers on model order reduction help engineers and systems designers to bring their modelling of industrial-scale systems within the reach of present-day computational power. Complementing these more specific papers, the volume also contains a selection of mathematical methods which can be used in any application domain. (orig.)

  8. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sewell, Christopher [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Childs, Hank [Univ. of Oregon, Eugene, OR (United States); Ma, Kwan-Liu [Univ. of California, Davis, CA (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States); Meredith, Jeremy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  9. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Mid-year report FY17 Q2

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pugmire, David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rogers, David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Childs, Hank [Univ. of Oregon, Eugene, OR (United States); Ma, Kwan-Liu [Univ. of California, Davis, CA (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States)

    2017-05-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  10. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D.; Sewell, Christopher (LANL); Childs, Hank (U of Oregon); Ma, Kwan-Liu (UC Davis); Geveci, Berk (Kitware); Meredith, Jeremy (ORNL)

    2016-05-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  11. Emerging Nanophotonic Applications Explored with Advanced Scientific Parallel Computing

    Science.gov (United States)

    Meng, Xiang

    The domain of nanoscale optical science and technology is a combination of the classical world of electromagnetics and the quantum mechanical regime of atoms and molecules. Recent advancements in fabrication technology allow optical structures to be scaled down to nanoscale size or even to the atomic level, far smaller than the wavelength they are designed for. These nanostructures can have unique, controllable, and tunable optical properties, and their interactions with quantum materials can produce important near-field and far-field optical responses. Undoubtedly, these optical properties can have many important applications, ranging from efficient and tunable light sources, detectors, filters, modulators, and high-speed all-optical switches to next-generation classical and quantum computation and biophotonic medical sensors. This emerging area of nanoscience, known as nanophotonics, is a highly interdisciplinary field requiring expertise in materials science, physics, electrical engineering, and scientific computing, modeling and simulation. It has also become an important research field for investigating the science and engineering of light-matter interactions that take place on wavelength and subwavelength scales, where the nature of the nanostructured matter controls the interactions. In addition, rapid advancements in computing capabilities, such as parallel computing, have become a critical element in investigating advanced nanophotonic devices. This role has taken on even greater urgency with the scale-down of device dimensions, as the design of these devices requires extensive memory and extremely long core hours. Thus distributed computing platforms associated with parallel computing are required for faster design processes. Scientific parallel computing constructs mathematical models and quantitative analysis techniques, and uses computing machines to analyze and solve otherwise intractable scientific challenges. In

  12. Scientific computing and algorithms in industrial simulations projects and products of Fraunhofer SCAI

    CERN Document Server

    Schüller, Anton; Schweitzer, Marc

    2017-01-01

    The contributions gathered here provide an overview of current research projects and selected software products of the Fraunhofer Institute for Algorithms and Scientific Computing SCAI. They show the wide range of challenges that scientific computing currently faces, the solutions it offers, and its important role in developing applications for industry. Given the exciting field of applied collaborative research and development it discusses, the book will appeal to scientists, practitioners, and students alike. The Fraunhofer Institute for Algorithms and Scientific Computing SCAI combines excellent research and application-oriented development to provide added value for our partners. SCAI develops numerical techniques, parallel algorithms and specialized software tools to support and optimize industrial simulations. Moreover, it implements custom software solutions for production and logistics, and offers calculations on high-performance computers. Its services and products are based on state-of-the-art metho...

  13. Domain analysis of computational science - Fifty years of a scientific computing group

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, M.

    2010-02-23

    I employed bibliometric and historical methods to study the domain of the Scientific Computing group at Brookhaven National Laboratory (BNL) over an extended period of fifty years, from 1958 to 2007. I noted and confirmed the growing emergence of interdisciplinarity within the group. I also identified a strong, consistent mathematics and physics orientation within it.

  14. New tools to aid in scientific computing and visualization

    International Nuclear Information System (INIS)

    Wallace, M.G.; Christian-Frear, T.L.

    1992-01-01

    In this paper, two computer programs are described which aid in the pre- and post-processing of computer generated data. CoMeT (Computational Mechanics Toolkit) is a customizable, interactive, graphical, menu-driven program that provides the analyst with a consistent user-friendly interface to analysis codes. Trans Vol (Transparent Volume Visualization) is a specialized tool for the scientific three-dimensional visualization of complex solids by the technique of volume rendering. Both tools are described in basic detail along with an application example concerning the simulation of contaminant migration from an underground nuclear repository

  15. Scholarly literature and the press: scientific impact and social perception of physics computing

    International Nuclear Information System (INIS)

    Pia, M G; Basaglia, T; Bell, Z W; Dressendorfer, P V

    2014-01-01

    The broad coverage of the search for the Higgs boson in the mainstream media is a relative novelty for high energy physics (HEP) research, whose achievements have traditionally been limited to scholarly literature. This paper illustrates the results of a scientometric analysis of HEP computing in scientific literature, institutional media and the press, and a comparative overview of similar metrics concerning representative particle physics measurements. The picture emerging from these scientometric data documents the relationship between the scientific impact and the social perception of HEP physics research versus that of HEP computing. The results of this analysis suggest that improved communication of the scientific and social role of HEP computing via press releases from the major HEP laboratories would be beneficial to the high energy physics community.

  16. Data-flow oriented visual programming libraries for scientific computing

    NARCIS (Netherlands)

    Maubach, J.M.L.; Drenth, W.D.; Sloot, P.M.A.

    2002-01-01

    The growing number of released scientific computational software packages does not seem to aid the implementation of complex numerical algorithms. Released libraries lack a common standard interface with regard to, for instance, finite element, finite difference or finite volume discretizations. And, libraries written in standard

  17. National Energy Research Scientific Computing Center 2007 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Hules, John A.; Bashor, Jon; Wang, Ucilia; Yarris, Lynn; Preuss, Paul

    2008-10-23

    This report presents highlights of the research conducted on NERSC computers in a variety of scientific disciplines during the year 2007. It also reports on changes and upgrades to NERSC's systems and services as well as activities of NERSC staff.

  18. OPENING REMARKS: SciDAC: Scientific Discovery through Advanced Computing

    Science.gov (United States)

    Strayer, Michael

    2005-01-01

    Good morning. Welcome to SciDAC 2005 and San Francisco. SciDAC is all about computational science and scientific discovery. In a large sense, computational science characterizes SciDAC and its intent is change. It transforms both our approach and our understanding of science. It opens new doors and crosses traditional boundaries while seeking discovery. In terms of twentieth century methodologies, computational science may be said to be transformational. There are a number of examples to this point. First are the sciences that encompass climate modeling. The application of computational science has in essence created the field of climate modeling. This community is now international in scope and has provided precision results that are challenging our understanding of our environment. A second example is that of lattice quantum chromodynamics. Lattice QCD, while adding precision and insight to our fundamental understanding of strong interaction dynamics, has transformed our approach to particle and nuclear science. The individual investigator approach has evolved to teams of scientists from different disciplines working side-by-side towards a common goal. SciDAC is also undergoing a transformation. This meeting is a prime example. Last year it was a small programmatic meeting tracking progress in SciDAC. This year, we have a major computational science meeting with a variety of disciplines and enabling technologies represented. SciDAC 2005 should position itself as a new corner stone for Computational Science and its impact on science. As we look to the immediate future, FY2006 will bring a new cycle to SciDAC. Most of the program elements of SciDAC will be re-competed in FY2006. The re-competition will involve new instruments for computational science, new approaches for collaboration, as well as new disciplines. There will be new opportunities for virtual experiments in carbon sequestration, fusion, and nuclear power and nuclear waste, as well as collaborations

  19. The AUSGeoid98 geoid model of Australia: data treatment, computations and comparisons with GPS-levelling data

    DEFF Research Database (Denmark)

    Featherstone, W.E.; Kirby, J.F.; Kearsley, A.H.W.

    2001-01-01

    The AUSGeoid98 gravimetric geoid model of Australia has been computed using data from the EGM96 global geopotential model, the 1996 release of the Australian gravity database, a nationwide digital elevation model, and satellite altimeter-derived marine gravity anomalies. The geoid heights are on ...

  20. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Report: Exascale Computing Initiative Review

    Energy Technology Data Exchange (ETDEWEB)

    Reed, Daniel [University of Iowa]; Berzins, Martin [University of Utah]; Pennington, Robert; Sarkar, Vivek [Rice University]; Taylor, Valerie [Texas A&M University]

    2015-08-01

    On November 19, 2014, the Advanced Scientific Computing Advisory Committee (ASCAC) was charged with reviewing the Department of Energy’s conceptual design for the Exascale Computing Initiative (ECI). In particular, this included assessing whether there are significant gaps in the ECI plan or areas that need to be given priority or extra management attention. Given the breadth and depth of previous reviews of the technical challenges inherent in exascale system design and deployment, the subcommittee focused its assessment on organizational and management issues, considering technical issues only as they informed organizational or management priorities and structures. This report presents the observations and recommendations of the subcommittee.

  1. Scientific computing vol II - eigenvalues and optimization

    CERN Document Server

    Trangenstein, John A

    2017-01-01

    This is the second of three volumes providing a comprehensive presentation of the fundamentals of scientific computing. This volume discusses more advanced topics than volume one, and is largely not a prerequisite for volume three. This book and its companions show how to determine the quality of computational results, and how to measure the relative efficiency of competing methods. Readers learn how to determine the maximum attainable accuracy of algorithms, and how to select the best method for computing problems. This book also discusses programming in several languages, including C++, Fortran and MATLAB. There are 49 examples, 110 exercises, 66 algorithms, 24 interactive JavaScript programs, 77 references to software programs and 1 case study. Topics are introduced with goals, literature references and links to public software. There are descriptions of the current algorithms in LAPACK, GSLIB and MATLAB. This book could be used for a second course in numerical methods, for either upper level undergraduate...

  2. Scientific computing vol III - approximation and integration

    CERN Document Server

    Trangenstein, John A

    2017-01-01

    This is the third of three volumes providing a comprehensive presentation of the fundamentals of scientific computing. This volume discusses topics that depend more on calculus than linear algebra, in order to prepare the reader for solving differential equations. This book and its companions show how to determine the quality of computational results, and how to measure the relative efficiency of competing methods. Readers learn how to determine the maximum attainable accuracy of algorithms, and how to select the best method for computing problems. This book also discusses programming in several languages, including C++, Fortran and MATLAB. There are 90 examples, 200 exercises, 36 algorithms, 40 interactive JavaScript programs, 91 references to software programs and 1 case study. Topics are introduced with goals, literature references and links to public software. There are descriptions of the current algorithms in GSLIB and MATLAB. This book could be used for a second course in numerical methods, for either ...

  3. UNEDF: Advanced Scientific Computing Collaboration Transforms the Low-Energy Nuclear Many-Body Problem

    International Nuclear Information System (INIS)

    Nam, H; Stoitsov, M; Nazarewicz, W; Hagen, G; Kortelainen, M; Pei, J C; Bulgac, A; Maris, P; Vary, J P; Roche, K J; Schunck, N; Thompson, I; Wild, S M

    2012-01-01

    The demands of cutting-edge science are driving the need for larger and faster computing resources. With the rapidly growing scale of computing systems and the prospect of technologically disruptive architectures to meet these needs, scientists face the challenge of effectively using complex computational resources to advance scientific discovery. Multi-disciplinary collaborating networks of researchers with diverse scientific backgrounds are needed to address these complex challenges. The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper describes UNEDF and identifies attributes that classify it as a successful computational collaboration. We illustrate significant milestones accomplished by UNEDF through integrative solutions using the most reliable theoretical approaches, most advanced algorithms, and leadership-class computational resources.

  4. Scientific Grand Challenges: Crosscutting Technologies for Computing at the Exascale - February 2-4, 2010, Washington, D.C.

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2011-02-06

    The goal of the "Scientific Grand Challenges - Crosscutting Technologies for Computing at the Exascale" workshop in February 2010, jointly sponsored by the U.S. Department of Energy’s Office of Advanced Scientific Computing Research and the National Nuclear Security Administration, was to identify the elements of a research and development agenda that will address these challenges and create a comprehensive exascale computing environment. This exascale computing environment will enable the science applications identified in the eight previously held Scientific Grand Challenges Workshop Series.

  5. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand computation's role in the essentially Popperian scientific method. In this paper, some of the problems with computation (for example, the long-term, unquantifiable presence of undiscovered defects, problems with programming languages, and process issues) will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  6. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    Science.gov (United States)

    2017-04-13

    Final report AFRL-AFOSR-UK-TR-2017-0029 on the project "Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems," covering the performance period 2012 - 01/25/2015.

  7. Implementation of a Curriculum-Integrated Computer Game for Introducing Scientific Argumentation

    Science.gov (United States)

    Wallon, Robert C.; Jasti, Chandana; Lauren, Hillary Z. G.; Hug, Barbara

    2017-11-01

    Argumentation has been emphasized in recent US science education reform efforts (NGSS Lead States 2013; NRC 2012), and while existing studies have investigated approaches to introducing and supporting argumentation (e.g., McNeill and Krajcik in Journal of Research in Science Teaching, 45(1), 53-78, 2008; Kang et al. in Science Education, 98(4), 674-704, 2014), few studies have investigated how game-based approaches may be used to introduce argumentation to students. In this paper, we report findings from a design-based study of a teacher's use of a computer game intended to introduce the claim, evidence, reasoning (CER) framework (McNeill and Krajcik 2012) for scientific argumentation. We studied the implementation of the game over two iterations of development in a high school biology teacher's classes. The results of this study include aspects of enactment of the activities and student argument scores. We found the teacher used the game in aspects of explicit instruction of argumentation during both iterations, although the ways in which the game was used differed. Also, students' scores in the second iteration were significantly higher than the first iteration. These findings support the notion that students can learn argumentation through a game, especially when used in conjunction with explicit instruction and support in student materials. These findings also highlight the importance of analyzing classroom implementation in studies of game-based learning.

  8. Computer simulations and the changing face of scientific experimentation

    CERN Document Server

    Duran, Juan M

    2013-01-01

    Computer simulations have become a central tool for scientific practice. Their use has replaced, in many cases, standard experimental procedures. This is to say nothing of cases where the target system is empirical but there are no techniques for direct manipulation of the system, as in astronomical observation. In such cases, computer simulations have proved to be of central importance. The question of their use and implementation, therefore, is not only a technical one but represents a challenge for the humanities as well. In this volume, scientists, historians, and philosophers joi

  9. PARA'04, State-of-the-art in scientific computing

    DEFF Research Database (Denmark)

    Madsen, Kaj; Wasniewski, Jerzy

    This meeting in the series, the PARA'04 Workshop with the title "State of the Art in Scientific Computing", was held in Lyngby, Denmark, June 20-23, 2004. The PARA'04 Workshop was organized by Jack Dongarra from the University of Tennessee and Oak Ridge National Laboratory, and Kaj Madsen and J...

  10. Elastic Scheduling of Scientific Workflows under Deadline Constraints in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Nazia Anwar

    2018-01-01

    Full Text Available Scientific workflow applications are collections of several structured activities and fine-grained computational tasks. Scientific workflow scheduling in cloud computing is a challenging research topic due to its distinctive features. In cloud environments, it has become critical to perform efficient task scheduling resulting in reduced scheduling overhead, minimized cost and maximized resource utilization while still meeting the user-specified overall deadline. This paper proposes a strategy, Dynamic Scheduling of Bag of Tasks based workflows (DSB), for scheduling scientific workflows with the aim of minimizing the financial cost of leasing Virtual Machines (VMs) under a user-defined deadline constraint. The proposed model groups the workflow into Bags of Tasks (BoTs) based on data dependency and priority constraints and thereafter optimizes the allocation and scheduling of BoTs on elastic, heterogeneous and dynamically provisioned cloud resources called VMs in order to attain the proposed method's objectives. The proposed approach considers pay-as-you-go Infrastructure as a Service (IaaS) clouds having inherent features such as elasticity, abundance, heterogeneity and VM provisioning delays. A trace-based simulation using benchmark scientific workflows representing real-world applications demonstrates a significant reduction in workflow computation cost while the workflow deadline is met. The results validate that the proposed model produces better deadline-success rates and cost efficiencies in comparison to adapted state-of-the-art algorithms for similar problems.
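
    One core ingredient of such schedulers is, for each bag of tasks, picking the cheapest VM configuration that still fits the remaining deadline; the toy catalogue and selection rule below are a hypothetical simplification for illustration, not the published DSB algorithm.

        # Toy deadline-constrained VM selection for one bag of tasks (BoT).
        # Hypothetical catalogue: (VM type, relative speed, cost per hour).
        VM_TYPES = [("small", 1.0, 0.10), ("medium", 2.0, 0.20), ("large", 4.0, 0.40)]

        def cheapest_vm_meeting_deadline(bot_hours_on_small, deadline_hours):
            feasible = []
            for vm_type, speed, cost_per_hour in VM_TYPES:
                runtime = bot_hours_on_small / speed
                if runtime <= deadline_hours:                 # deadline constraint
                    feasible.append((runtime * cost_per_hour, vm_type, runtime))
            return min(feasible) if feasible else None        # minimize leasing cost

        print(cheapest_vm_meeting_deadline(bot_hours_on_small=6.0, deadline_hours=2.0))
        # -> (0.6, 'large', 1.5): only the large VM meets the 2 h deadline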

  11. Laboratory directed research and development FY98 annual report; TOPICAL

    International Nuclear Information System (INIS)

    Al-Ayat, R; Holzrichter, J

    1999-01-01

    In 1984, Congress and the Department of Energy (DOE) established the Laboratory Directed Research and Development (LDRD) Program to enable the director of a national laboratory to foster and expedite innovative research and development (R and D) in mission areas. The Lawrence Livermore National Laboratory (LLNL) continually examines these mission areas through strategic planning and shapes the LDRD Program to meet its long-term vision. The goal of the LDRD Program is to spur development of new scientific and technical capabilities that enable LLNL to respond to the challenges within its evolving mission areas. In addition, the LDRD Program provides LLNL with the flexibility to nurture and enrich essential scientific and technical competencies and enables the Laboratory to attract the most qualified scientists and engineers. The FY98 LDRD portfolio described in this annual report has been carefully structured to continue the tradition of vigorously supporting DOE and LLNL strategic vision and evolving mission areas. The projects selected for LDRD funding undergo stringent review and selection processes, which emphasize strategic relevance and require technical peer reviews of proposals by external and internal experts. These FY98 projects emphasize the Laboratory's national security needs: stewardship of the U.S. nuclear weapons stockpile, responsibility for the counter- and nonproliferation of weapons of mass destruction, development of high-performance computing, and support of DOE environmental research and waste management programs

  12. ScalaLab and GroovyLab: Comparing Scala and Groovy for Scientific Computing

    Directory of Open Access Journals (Sweden)

    Stergios Papadimitriou

    2015-01-01

    Full Text Available ScalaLab and GroovyLab are both MATLAB-like environments for the Java Virtual Machine. ScalaLab is based on the Scala programming language and GroovyLab is based on the Groovy programming language. They present similar user interfaces and functionality to the user. They also share the same set of Java scientific libraries and of native code libraries. From the programmer's point of view though, they have significant differences. This paper compares some aspects of the two environments and highlights some of the strengths and weaknesses of Scala versus Groovy for scientific computing. The discussion also examines some aspects of the dilemma of using dynamic typing versus static typing for scientific programming. The performance of the Java platform is continuously improved at a fast pace. Today Java can effectively support demanding high-performance computing and scales well on multicore platforms. Thus, both systems can challenge the performance of the traditional C/C++/Fortran scientific code with an easier to use and more productive programming environment.

  13. The Observation of Bahasa Indonesia Official Computer Terms Implementation in Scientific Publication

    Science.gov (United States)

    Gunawan, D.; Amalia, A.; Lydia, M. S.; Muthaqin, M. I.

    2018-03-01

    The government of the Republic of Indonesia issued a regulation to replace the foreign-language computer terms in earlier use with official computer terms in Bahasa Indonesia. This regulation was stipulated in Presidential Decree No. 2 of 2001 concerning the introduction of official computer terms in Bahasa Indonesia (known as Senarai Padanan Istilah/SPI). Sixteen years later, the people of Indonesia, particularly academics, should have adopted the official computer terms in their official publications. This study examines the use of official computer terms in scientific publications written in Bahasa Indonesia. The data sources are publications by academics, particularly in the computer science field. The method is divided into four stages. The first stage is metadata harvesting using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). The second is converting the harvested documents (in PDF format) to plain text. The third stage is text preprocessing in preparation for string matching. The final stage is searching for the official computer terms, based on 629 SPI terms, using the Boyer-Moore algorithm. We observed 240,781 foreign computer terms in 1,156 scientific publications from six universities. This result shows that foreign computer terms are still widely used by academics.
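
    The final matching stage can be pictured in a few lines of Python; the term list here is a hypothetical subset, and the built-in substring search merely stands in for the Boyer-Moore matcher used in the study.

        # Count occurrences of foreign computer terms in a plain-text publication.
        foreign_terms = ["download", "upload", "mouse", "online"]   # illustrative subset

        def count_foreign_terms(plain_text, terms):
            text = plain_text.lower()
            return {term: text.count(term) for term in terms}

        doc = "Data dapat di-download secara online menggunakan mouse."
        print(count_foreign_terms(doc, foreign_terms))
        # -> {'download': 1, 'upload': 0, 'mouse': 1, 'online': 1}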

  14. From Mars to Minerva: The origins of scientific computing in the AEC labs

    Energy Technology Data Exchange (ETDEWEB)

    Seidel, R.W. [ERA Land Grant Professor of the History of Technology; Charles Babbage Institute, University of Minnesota, Minneapolis, Minnesota (United States)]

    1996-10-01

    Although the AEC laboratories are renowned for the development of nuclear weapons, their largess in promoting scientific computing also had a profound effect on scientific and technological development in the second half of the 20th century. © 1996 American Institute of Physics.

  15. OPENING REMARKS: Scientific Discovery through Advanced Computing

    Science.gov (United States)

    Strayer, Michael

    2006-01-01

    Good morning. Welcome to SciDAC 2006 and Denver. I share greetings from the new Undersecretary for Energy, Ray Orbach. Five years ago SciDAC was launched as an experiment in computational science. The goal was to form partnerships among science applications, computer scientists, and applied mathematicians to take advantage of the potential of emerging terascale computers. This experiment has been a resounding success. SciDAC has emerged as a powerful concept for addressing some of the biggest challenges facing our world. As significant as these successes were, I believe there is also significance in the teams that achieved them. In addition to their scientific aims these teams have advanced the overall field of computational science and set the stage for even larger accomplishments as we look ahead to SciDAC-2. I am sure that many of you are expecting to hear about the results of our current solicitation for SciDAC-2. I’m afraid we are not quite ready to make that announcement. Decisions are still being made and we will announce the results later this summer. Nearly 250 unique proposals were received and evaluated, involving literally thousands of researchers, postdocs, and students. These collectively requested more than five times our expected budget. This response is a testament to the success of SciDAC in the community. In SciDAC-2 our budget has been increased to about 70 million for FY 2007 and our partnerships have expanded to include the Environment and National Security missions of the Department. The National Science Foundation has also joined as a partner. These new partnerships are expected to expand the application space of SciDAC, and broaden the impact and visibility of the program. We have, with our recent solicitation, expanded to turbulence, computational biology, and groundwater reactive modeling and simulation. We are currently talking with the Department’s applied energy programs about risk assessment, optimization of complex systems - such

  16. Large-scale computation at PSI scientific achievements and future requirements

    International Nuclear Information System (INIS)

    Adelmann, A.; Markushin, V.

    2008-11-01

    Computational modelling and simulation are among the disciplines that have seen the most dramatic growth in capabilities in the 20th century. Within the past two decades, scientific computing has become an important contributor to all scientific research programs. Computational modelling and simulation are particularly indispensable for solving research problems that are unsolvable by traditional theoretical and experimental approaches, hazardous to study, or time consuming or expensive to solve by traditional means. Many such research areas are found in PSI's research portfolio. Advances in computing technologies (including hardware and software) during the past decade have set the stage for a major step forward in modelling and simulation. We have now arrived at a situation where we have a number of otherwise unsolvable problems, where simulations are as complex as the systems under study. In 2008 the High-Performance Computing (HPC) community entered the petascale era with the heterogeneous Opteron/Cell machine, called Roadrunner, built by IBM for the Los Alamos National Laboratory. We are on the brink of a time where the availability of many hundreds of thousands of cores will open up new challenging possibilities in physics, algorithms (numerical mathematics) and computer science. However, to deliver on this promise, it is not enough to provide 'peak' performance in terms of peta-flops, the maximum theoretical speed a computer can attain. Most important, this must be translated into a corresponding increase in the capabilities of scientific codes. This is a daunting problem that can only be solved by increasing investment in hardware, in the accompanying system software that enables the reliable use of high-end computers, in scientific competence, i.e. the mathematical (parallel) algorithms that are the basis of the codes, and in education. In the case of Switzerland, the white paper 'Swiss National Strategic Plan for High Performance Computing and Networking

  17. Large-scale computation at PSI scientific achievements and future requirements

    Energy Technology Data Exchange (ETDEWEB)

    Adelmann, A.; Markushin, V

    2008-11-15

    Computational modelling and simulation are among the disciplines that have seen the most dramatic growth in capabilities in the 20th century. Within the past two decades, scientific computing has become an important contributor to all scientific research programs. Computational modelling and simulation are particularly indispensable for solving research problems that are unsolvable by traditional theoretical and experimental approaches, hazardous to study, or time consuming or expensive to solve by traditional means. Many such research areas are found in PSI's research portfolio. Advances in computing technologies (including hardware and software) during the past decade have set the stage for a major step forward in modelling and simulation. We have now arrived at a situation where we have a number of otherwise unsolvable problems, where simulations are as complex as the systems under study. In 2008 the High-Performance Computing (HPC) community entered the petascale era with the heterogeneous Opteron/Cell machine, called Roadrunner, built by IBM for the Los Alamos National Laboratory. We are on the brink of a time where the availability of many hundreds of thousands of cores will open up new challenging possibilities in physics, algorithms (numerical mathematics) and computer science. However, to deliver on this promise, it is not enough to provide 'peak' performance in terms of peta-flops, the maximum theoretical speed a computer can attain. Most important, this must be translated into a corresponding increase in the capabilities of scientific codes. This is a daunting problem that can only be solved by increasing investment in hardware, in the accompanying system software that enables the reliable use of high-end computers, in scientific competence, i.e. the mathematical (parallel) algorithms that are the basis of the codes, and in education. In the case of Switzerland, the white paper 'Swiss National Strategic Plan for High Performance Computing

  18. National facility for advanced computational science: A sustainable path to scientific discovery

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  19. Topics in numerical partial differential equations and scientific computing

    CERN Document Server

    2016-01-01

    Numerical partial differential equations (PDEs) are an important part of numerical simulation, the third component of the modern methodology for science and engineering alongside traditional theory and experiment. This volume contains papers that originated with the collaborative research of the teams that participated in the IMA Workshop for Women in Applied Mathematics: Numerical Partial Differential Equations and Scientific Computing, held in August 2014.

  20. Advanced Scientific Computing Research Exascale Requirements Review. An Office of Science review sponsored by Advanced Scientific Computing Research, September 27-29, 2016, Rockville, Maryland

    Energy Technology Data Exchange (ETDEWEB)

    Almgren, Ann [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); DeMar, Phil [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Vetter, Jeffrey [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Riley, Katherine [Argonne Leadership Computing Facility, Argonne, IL (United States); Antypas, Katie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bard, Deborah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Coffey, Richard [Argonne National Lab. (ANL), Argonne, IL (United States); Dart, Eli [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Science Network; Dosanjh, Sudip [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hack, James [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Monga, Inder [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Science Network; Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Rotman, Lauren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Science Network; Straatsma, Tjerk [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wells, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bethel, Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bosilca, George [Univ. of Tennessee, Knoxville, TN (United States); Cappello, Frank [Argonne National Lab. (ANL), Argonne, IL (United States); Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Habib, Salman [Argonne National Lab. (ANL), Argonne, IL (United States); Hill, Judy [Oak Ridge Leadership Computing Facility, Oak Ridge, TN (United States); Hollingsworth, Jeffrey K. [Univ. of Maryland, College Park, MD (United States); McInnes, Lois Curfman [Argonne National Lab. (ANL), Argonne, IL (United States); Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moore, Shirley [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Moreland, Ken [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Roser, Rob [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Shende, Sameer [Univ. of Oregon, Eugene, OR (United States); Shipman, Galen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-06-20

    The widespread use of computing in the American economy would not be possible without a thoughtful, exploratory research and development (R&D) community pushing the performance edge of operating systems, computer languages, and software libraries. These are the tools and building blocks — the hammers, chisels, bricks, and mortar — of the smartphone, the cloud, and the computing services on which we rely. Engineers and scientists need ever-more specialized computing tools to discover new material properties for manufacturing, make energy generation safer and more efficient, and provide insight into the fundamentals of the universe, for example. The research division of the U.S. Department of Energy’s (DOE’s) Office of Advanced Scientific Computing Research (ASCR Research) ensures that these tools and building blocks are being developed and honed to meet the extreme needs of modern science. See also http://exascaleage.org/ascr/ for additional information.

  1. Trend Analysis of the Brazilian Scientific Production in Computer Science

    Directory of Open Access Journals (Sweden)

    TRUCOLO, C. C.

    2014-12-01

    The growth of scientific information volume and diversity brings new challenges in understanding the reasons, the process and the real essence that propel this growth. This information can be used as the basis for the development of strategies and public policies to improve education and innovation services. Trend analysis is one step in this direction. In this work, a trend analysis of the Brazilian scientific production of graduate programs in the computer science area is carried out to identify the main subjects being studied by these programs, both overall and individually.

  2. Implementation of Scientific Computing Applications on the Cell Broadband Engine

    Directory of Open Access Journals (Sweden)

    Guochun Shi

    2009-01-01

    The Cell Broadband Engine architecture is a revolutionary processor architecture well suited for many scientific codes. This paper reports on an effort to implement several traditional high-performance scientific computing applications on the Cell Broadband Engine processor, including molecular dynamics, quantum chromodynamics and quantum chemistry codes. The paper discusses data and code restructuring strategies necessary to adapt the applications to the intrinsic properties of the Cell processor and demonstrates performance improvements achieved on the Cell architecture. It concludes with the lessons learned and provides practical recommendations on optimization techniques that are believed to be most appropriate.
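
    One generic data-restructuring strategy of the kind alluded to above is the change from an array-of-structures to a structure-of-arrays layout, which exposes contiguous streams that SIMD units (such as the Cell's synergistic processing elements) can consume efficiently. The sketch below is a language-neutral illustration in Python/NumPy and is not code from the paper:

      import numpy as np

      n = 1_000_000

      # Array-of-structures layout: particle records interleaved in memory.
      aos = np.zeros(n, dtype=[("x", "f4"), ("y", "f4"), ("z", "f4")])

      # Structure-of-arrays layout: each coordinate becomes one contiguous array,
      # which maps naturally onto vector (SIMD) loads and DMA-friendly streams.
      soa_x = np.ascontiguousarray(aos["x"])
      soa_y = np.ascontiguousarray(aos["y"])
      soa_z = np.ascontiguousarray(aos["z"])

      # A kernel written against the SoA layout vectorizes cleanly.
      r2 = soa_x**2 + soa_y**2 + soa_z**2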

  3. Molecular Science Computing Facility Scientific Challenges: Linking Across Scales

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.; Windus, Theresa L.

    2005-07-01

    The purpose of this document is to define the evolving science drivers for performing environmental molecular research at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and to provide guidance associated with the next-generation high-performance computing center that must be developed at EMSL's Molecular Science Computing Facility (MSCF) in order to address this critical research. The MSCF is the pre-eminent computing facility, supported by the U.S. Department of Energy's (DOE's) Office of Biological and Environmental Research (BER), tailored to provide the fastest time-to-solution for current computational challenges in chemistry and biology, as well as providing the means for broad research in the molecular and environmental sciences. The MSCF provides integral resources and expertise to emerging EMSL Scientific Grand Challenges and Collaborative Access Teams that are designed to leverage the multiple integrated research capabilities of EMSL, thereby creating a synergy between computation and experiment to address environmental molecular science challenges critical to DOE and the nation.

  4. Scientific Computing Strategic Plan for the Idaho National Laboratory

    International Nuclear Information System (INIS)

    Whiting, Eric Todd

    2015-01-01

    Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory's (INL's) challenge and charge, and is central to INL's ongoing success. Computing is an essential part of INL's future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  5. Scientific and Computational Challenges of the Fusion Simulation Program (FSP)

    International Nuclear Information System (INIS)

    Tang, William M.

    2011-01-01

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP), a major national initiative in the United States with the primary objective being to enable scientific discovery of important new plasma phenomena with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically-confined fusion plasmas that are properly validated against experiments in regimes relevant for producing practical fusion energy. It is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly-coupled phenomena in the core plasma, edge plasma, and wall region on time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond together with the associated multi-core algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e

  6. Scientific and computational challenges of the fusion simulation program (FSP)

    International Nuclear Information System (INIS)

    Tang, William M.

    2011-01-01

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP) - a major national initiative in the United States with the primary objective being to enable scientific discovery of important new plasma phenomena with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically-confined fusion plasmas that are properly validated against experiments in regimes relevant for producing practical fusion energy. It is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly-coupled phenomena in the core plasma, edge plasma, and wall region on time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond together with the associated multi-core algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e

  7. DOE Advanced Scientific Computing Advisory Subcommittee (ASCAC) Report: Top Ten Exascale Research Challenges

    Energy Technology Data Exchange (ETDEWEB)

    Lucas, Robert [University of Southern California, Information Sciences Institute]; Ang, James [Sandia National Laboratories]; Bergman, Keren [Columbia University]; Borkar, Shekhar [Intel]; Carlson, William [Institute for Defense Analyses]; Carrington, Laura [University of California, San Diego]; Chiu, George [IBM]; Colwell, Robert [DARPA]; Dally, William [NVIDIA]; Dongarra, Jack [University of Tennessee]; Geist, Al [Oak Ridge National Laboratory]; Haring, Rud [IBM]; Hittinger, Jeffrey [Lawrence Livermore National Laboratory]; Hoisie, Adolfy [Pacific Northwest National Laboratory]; Klein, Dean [Micron]; Kogge, Peter [University of Notre Dame]; Lethin, Richard [Reservoir Labs]; Sarkar, Vivek [Rice University]; Schreiber, Robert [Hewlett Packard]; Shalf, John [Lawrence Berkeley National Laboratory]; Sterling, Thomas [Indiana University]; Stevens, Rick [Argonne National Laboratory]; Bashor, Jon [Lawrence Berkeley National Laboratory]; Brightwell, Ron [Sandia National Laboratories]; Coteus, Paul [IBM]; Debenedictus, Erik [Sandia National Laboratories]; Hiller, Jon [Science and Technology Associates]; Kim, K. H. [IBM]; Langston, Harper [Reservoir Labs]; Murphy, Richard [Micron]; Webster, Clayton [Oak Ridge National Laboratory]; Wild, Stefan [Argonne National Laboratory]; Grider, Gary [Los Alamos National Laboratory]; Ross, Rob [Argonne National Laboratory]; Leyffer, Sven [Argonne National Laboratory]; Laros III, James [Sandia National Laboratories]

    2014-02-10

    Exascale computing systems are essential for the scientific fields that will transform the 21st century global economy, including energy, biotechnology, nanotechnology, and materials science. Progress in these fields is predicated on the ability to perform advanced scientific and engineering simulations, and analyze the deluge of data. On July 29, 2013, ASCAC was charged by Patricia Dehmer, the Acting Director of the Office of Science, to assemble a subcommittee to provide advice on exascale computing. This subcommittee was directed to return a list of no more than ten technical approaches (hardware and software) that will enable the development of a system that achieves the Department's goals for exascale computing. Numerous reports over the past few years have documented the technical challenges and the non-viability of simply scaling existing computer designs to reach exascale. The technical challenges revolve around energy consumption, memory performance, resilience, extreme concurrency, and big data. Drawing from these reports and more recent experience, this ASCAC subcommittee has identified the top ten computing technology advancements that are critical to making a capable, economically viable, exascale system.

  8. Quality assurance of analytical, scientific, and design computer programs for nuclear power plants

    International Nuclear Information System (INIS)

    1994-06-01

    This Standard applies to the design and development, modification, documentation, execution, and configuration management of computer programs used to perform analytical, scientific, and design computations during the design and analysis of safety-related nuclear power plant equipment, systems, structures, and components as identified by the owner. 2 figs

  9. Quality assurance of analytical, scientific, and design computer programs for nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-06-01

    This Standard applies to the design and development, modification, documentation, execution, and configuration management of computer programs used to perform analytical, scientific, and design computations during the design and analysis of safety-related nuclear power plant equipment, systems, structures, and components as identified by the owner. 2 figs.

  10. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.

    2013-01-01

    As our understanding of the world around us increases it becomes more challenging to make use of what we already know, and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers and even small clusters to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for a range of high performance applications which can utilize parallel compute systems effectively, which have efficient data handling strategies and which have the capacity to utilise current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements. © 2013 Springer-Verlag.

  11. Designing Scientific Software for Heterogeneous Computing

    DEFF Research Database (Denmark)

    Glimberg, Stefan Lemvig

    ... algorithms and data structures must be designed to utilize the underlying parallel architecture. The architectural changes in hardware design within the last decade, from single- to multi- and many-core architectures, require software developers to identify and properly implement methods that both exploit ... makes parallel software design applicable, but also a challenge for scientific software developers at all levels. We have developed a generic C++ library for fast prototyping of large-scale PDE solvers based on flexible-order finite difference approximations on structured regular grids. The library ... is designed with a high abstraction interface to improve developer productivity. The library is based on modern template-based design concepts as described in Glimberg, Engsig-Karup, Nielsen & Dammann (2013). The library utilizes heterogeneous CPU/GPU environments in order to maximize computational throughput...
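
    The library described above is a C++ code; purely as an illustration of what "flexible-order finite difference approximations on structured regular grids" involves, the following Python sketch (an assumption of ours, not part of that library) derives stencil weights of arbitrary order by solving a small Vandermonde system:

      import numpy as np
      from math import factorial

      def fd_weights(offsets, deriv_order):
          """Finite-difference weights for d^m/dx^m on a unit-spaced stencil.

          offsets     : integer grid offsets of the stencil points, e.g. [-1, 0, 1]
          deriv_order : order m of the derivative to approximate
          Returns w such that sum_j w[j] * f(x + offsets[j]*h) ~ h**m * f^(m)(x).
          """
          offsets = np.asarray(offsets, dtype=float)
          p = len(offsets)
          # Moment conditions: sum_j w_j * offsets_j**k = k! * delta(k, m), k = 0..p-1
          A = np.vander(offsets, p, increasing=True).T
          b = np.zeros(p)
          b[deriv_order] = factorial(deriv_order)
          return np.linalg.solve(A, b)

      # Classical second-order central difference for f''(x): [1, -2, 1]
      print(fd_weights([-1, 0, 1], 2))
      # Fourth-order central difference for f'(x): [1/12, -2/3, 0, 2/3, -1/12]
      print(fd_weights([-2, -1, 0, 1, 2], 1))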

  12. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    International Nuclear Information System (INIS)

    Khaleel, Mohammad A.

    2009-01-01

    This report is an account of the deliberations and conclusions of the workshop on 'Forefront Questions in Nuclear Science and the Role of High Performance Computing' held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to (1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; (2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; (3) provide nuclear physicists the opportunity to influence the development of high performance computing; and (4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  13. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  14. Symbolic/Numeric Approaches to Scientific Computing

    DEFF Research Database (Denmark)

    Sinclair, Robert

    1998-01-01

    A course held at the University of Basel (CH) in the winter semester 1997/98. The coupling of Maple with Fortran 77 code is discussed in detail, with many mathematical applications.

  15. Computer-Supported Aids to Making Sense of Scientific Articles: Cognitive, Motivational, and Attitudinal Effects

    Science.gov (United States)

    Gegner, Julie A.; Mackay, Donald H. J.; Mayer, Richard E.

    2009-01-01

    High school students can access original scientific research articles on the Internet, but may have trouble understanding them. To address this problem of online literacy, the authors developed a computer-based prototype for guiding students' comprehension of scientific articles. High school students were asked to read an original scientific…

  16. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, S; Berzano, D; Brunetti, R; Lusso, S; Vallero, S

    2014-01-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
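
    Contextualization of the kind mentioned above typically amounts to handing each virtual machine a small "user data" document at boot. The sketch below builds a minimal cloud-init style document and base64-encodes it, as EC2-like APIs commonly expect; the package name and service command are assumptions for illustration and not the INFN-Torino configuration:

      import base64

      # Hedged sketch of a cloud-init "user data" document used to contextualize a
      # virtual machine at boot; the package and service names are assumptions.
      user_data = "\n".join([
          "#cloud-config",
          "packages:",
          "  - htcondor",
          "runcmd:",
          "  - [ systemctl, start, condor ]",
      ])

      # EC2-style APIs commonly expect user data to be base64 encoded.
      encoded = base64.b64encode(user_data.encode("utf-8")).decode("ascii")
      print(encoded[:60] + "...")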

  17. Certainty in Stockpile Computing: Recommending a Verification and Validation Program for Scientific Software

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.R.

    1998-11-01

    As computing assumes a more central role in managing the nuclear stockpile, the consequences of an erroneous computer simulation could be severe. Computational failures are common in other endeavors and have caused project failures, significant economic loss, and loss of life. This report examines the causes of software failure and proposes steps to mitigate them. A formal verification and validation program for scientific software is recommended and described.

  18. Space and Earth Sciences, Computer Systems, and Scientific Data Analysis Support, Volume 1

    Science.gov (United States)

    Estes, Ronald H. (Editor)

    1993-01-01

    This Final Progress Report covers the specific technical activities of Hughes STX Corporation for the last contract triannual period of 1 June through 30 Sep. 1993, in support of assigned task activities at Goddard Space Flight Center (GSFC). It also provides a brief summary of work throughout the contract period of performance on each active task. Technical activity is presented in Volume 1, while financial and level-of-effort data are presented in Volume 2. Technical support was provided to all Divisions and Laboratories of Goddard's Space Sciences and Earth Sciences Directorates. Types of support include: scientific programming, systems programming, computer management, mission planning, scientific investigation, data analysis, data processing, data base creation and maintenance, instrumentation development, and management services. Missions and instruments supported include: ROSAT, Astro-D, BBXRT, XTE, AXAF, GRO, COBE, WIND, UIT, SMM, STIS, HEIDI, DE, URAP, CRRES, Voyagers, ISEE, San Marco, LAGEOS, TOPEX/Poseidon, Pioneer-Venus, Galileo, Cassini, Nimbus-7/TOMS, Meteor-3/TOMS, FIFE, BOREAS, TRMM, AVHRR, and Landsat. Accomplishments include: development of computing programs for mission science and data analysis, supercomputer applications support, computer network support, computational upgrades for data archival and analysis centers, end-to-end management for mission data flow, scientific modeling and results in the fields of space and Earth physics, planning and design of GSFC VO DAAC and VO IMS, fabrication, assembly, and testing of mission instrumentation, and design of mission operations center.

  19. Computer-assisted estimating for the Los Alamos Scientific Laboratory

    International Nuclear Information System (INIS)

    Spooner, J.E.

    1976-02-01

    An analysis is made of the cost estimating system currently in use at the Los Alamos Scientific Laboratory (LASL) and the benefits of computer assistance are evaluated. A computer-assisted estimating system (CAE) is proposed for LASL. CAE can decrease turnaround and provide more flexible response to management requests for cost information and analyses. It can enhance value optimization at the design stage, improve cost control and change-order justification, and widen the use of cost information in the design process. CAE costs are not well defined at this time although they appear to break even with present operations. It is recommended that a CAE system description be submitted for contractor consideration and bid while LASL system development continues concurrently

  20. Strategic Plan for a Scientific Cloud Computing infrastructure for Europe

    CERN Document Server

    Lengert, Maryline

    2011-01-01

    Here we present the vision, concept and direction for forming a European Industrial Strategy for a Scientific Cloud Computing Infrastructure to be implemented by 2020. This will be the framework for decisions and for securing support and approval in establishing, initially, an R&D European Cloud Computing Infrastructure that serves the needs of the European Research Area (ERA) and Space Agencies. Beyond this initial user base, this Cloud Infrastructure will have the potential to evolve to provide similar services to a broad range of customers including government and SMEs. We explain how this plan aims to support the broader strategic goals of our organisations and identify the benefits to be realised by adopting an industrial Cloud Computing model. We also outline the prerequisites and commitment needed to achieve these objectives.

  1. Mastering scientific computing with R

    CERN Document Server

    Gerrard, Paul

    2015-01-01

    If you want to learn how to quantitatively answer scientific questions for practical purposes using the powerful R language and the open source R tool ecosystem, this book is ideal for you. It is suited for scientists who understand scientific concepts, know a little R, and want to start applying R to answer empirical scientific questions. Some R exposure is helpful, but not compulsory.

  2. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    International Nuclear Information System (INIS)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-01-01

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and to effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by 'Big Data' will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS), with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on the cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.
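
    The pilot-based overlay model described above can be illustrated, independently of HTCondor or GlideinWMS themselves, by a toy sketch in which "pilots" started on arbitrary resources pull user jobs from a central queue until it drains; all names and job payloads below are illustrative assumptions:

      import queue
      import threading

      # Hedged sketch of the pilot model: pilots start wherever capacity is
      # available (grid node or cloud VM) and pull real jobs from a central
      # queue, forming a dynamically sized overlay batch system.
      jobs = queue.Queue()
      for i in range(10):
          jobs.put(f"job-{i}")

      def pilot(name):
          while True:
              try:
                  job = jobs.get_nowait()
              except queue.Empty:
                  return                      # no more work: the pilot terminates
              print(f"{name} running {job}")
              jobs.task_done()

      pilots = [threading.Thread(target=pilot, args=(f"pilot-{k}",)) for k in range(3)]
      for p in pilots:
          p.start()
      for p in pilots:
          p.join()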

  3. 10th International Conference on Scientific Computing in Electrical Engineering

    CERN Document Server

    Clemens, Markus; Günther, Michael; Maten, E

    2016-01-01

    This book is a collection of selected papers presented at the 10th International Conference on Scientific Computing in Electrical Engineering (SCEE), held in Wuppertal, Germany in 2014. The book is divided into five parts, reflecting the main directions of SCEE 2014: 1. Device Modeling, Electric Circuits and Simulation, 2. Computational Electromagnetics, 3. Coupled Problems, 4. Model Order Reduction, and 5. Uncertainty Quantification. Each part starts with a general introduction followed by the actual papers. The aim of the SCEE 2014 conference was to bring together scientists from academia and industry, mathematicians, electrical engineers, computer scientists, and physicists, with the goal of fostering intensive discussions on industrially relevant mathematical problems, with an emphasis on the modeling and numerical simulation of electronic circuits and devices, electromagnetic fields, and coupled problems. The methodological focus was on model order reduction and uncertainty quantification.

  4. Using Just-in-Time Information to Support Scientific Discovery Learning in a Computer-Based Simulation

    Science.gov (United States)

    Hulshof, Casper D.; de Jong, Ton

    2006-01-01

    Students encounter many obstacles during scientific discovery learning with computer-based simulations. It is hypothesized that an effective type of support, one that does not interfere with the scientific discovery learning process, should be delivered on a "just-in-time" basis. This study explores the effect of facilitating access to…

  5. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduce HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  6. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
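
    Performance-per-watt, the metric emphasized above, is simply delivered throughput divided by power draw; the sketch below uses assumed numbers rather than measurements from the paper:

      # Hedged illustration of the performance-per-watt metric; the throughput and
      # power figures below are assumed, not measured values from the paper.
      platforms = {
          "x86 server":   {"events_per_sec": 1000.0, "watts": 400.0},
          "ARMv8 SoC":    {"events_per_sec":  150.0, "watts":  40.0},
          "co-processor": {"events_per_sec":  700.0, "watts": 250.0},
      }

      for name, p in platforms.items():
          ppw = p["events_per_sec"] / p["watts"]
          print(f"{name:12s}  {ppw:6.2f} events/s per watt")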

  7. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  8. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad; Knight, Robert

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG). (paper)

  9. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm.

    Science.gov (United States)

    Abdulhamid, Shafi'i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, suitable for scientific application task execution in the cloud computing environment, than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.
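
    The makespan against which the GBLCA results are reported is the completion time of the last task in a schedule. The sketch below computes it for a simple greedy earliest-finish assignment, roughly in the spirit of the MinMin/MaxMin baselines mentioned; the task lengths and VM speeds are assumed values, not data from the paper:

      # Hedged sketch: makespan of a task-to-VM assignment using a greedy
      # "assign to the earliest-finishing VM" baseline.
      tasks = [40, 12, 75, 23, 9, 61, 18]      # task lengths (arbitrary units)
      vm_speeds = [1.0, 2.0, 4.0]              # relative VM speeds

      finish = [0.0] * len(vm_speeds)          # time at which each VM becomes free
      for length in sorted(tasks, reverse=True):
          run_times = [length / s for s in vm_speeds]
          vm = min(range(len(vm_speeds)), key=lambda i: finish[i] + run_times[i])
          finish[vm] += run_times[vm]

      print("makespan:", max(finish))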

  10. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm

    Science.gov (United States)

    Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, suitable for scientific application task execution in the cloud computing environment, than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239

  11. New Chicago-Indiana computer network will handle dataflow from world's largest scientific experiment

    CERN Multimedia

    2006-01-01

    "Massive quantities of data will soon begin flowing from the largest scientific instrument ever built into an international netword of computer centers, including one operated jointly by the University of Chicago and Indiana University." (1,5 page)

  12. Computational Simulations and the Scientific Method

    Science.gov (United States)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
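
    A "component test" of the kind the authors would like model developers to publish can be as small as a unit test that pins a component to an independently checkable result; the trapezoidal-rule component and the tolerance below are illustrative assumptions, not an example taken from the paper:

      import math
      import unittest

      def trapezoid(f, a, b, n):
          """Composite trapezoidal rule: a small 'component' of a larger model."""
          h = (b - a) / n
          return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

      class TrapezoidComponentTest(unittest.TestCase):
          def test_against_known_integral(self):
              # Independently repeatable check against an analytic result.
              self.assertAlmostEqual(trapezoid(math.sin, 0.0, math.pi, 1000), 2.0, places=4)

      if __name__ == "__main__":
          unittest.main()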

  13. Visual computing scientific visualization and imaging systems

    CERN Document Server

    2014-01-01

    This volume aims to stimulate discussions on research involving the use of data and digital images as an approach to understanding, analyzing and visualizing phenomena and experiments. The emphasis is put not only on graphically representing data as a way of enhancing its visual analysis, but also on the imaging systems which contribute greatly to the comprehension of real cases. Scientific Visualization and Imaging Systems encompass multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology, Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented at the International Conference on Advanced Computational Engineering and Experimenting (ACE-X) conferences 2010 (Paris), 2011 (Algarve), 2012 (Istanbul) and 2013 (Madrid). The examples were particularly chosen from materials research, medical applications, general concepts applied in simulations and image analysis and ot...

  14. National seminar on public information on peaceful uses of nuclear energy - NUCInfo'98. Proceedings. Vol 2. Section B - Scientific and Technical Papers

    International Nuclear Information System (INIS)

    Dobos, Ion; Comsa, Olivia

    1998-01-01

    These proceedings, published in two volumes, contain materials presented at the National Seminar on Public Information on Peaceful Uses of Nuclear Energy - NUCInfo'98 - held from 30 September to 2 October 1998 in Bucharest, Romania. The first volume was dedicated to Public Information. The second volume, containing 56 scientific and technical papers, is structured in three parts according to the three sessions entitled: B1. Sustainable Development - Energy - Environment; B2. Nuclear Safety, Radioactive Wastes, Radioprotection and Nuclear Safeguards; B3. Scientific Research, Technological Development and Applications of Nuclear Energy in Economic and Social Life. The great majority of the contributions deal with specific public information technical problems and debates on nuclear issues, mainly related to radiation monitoring, waste management, and air, water and food contamination from accidental radioactive releases or from fuel cycle related activities. Also, several papers discuss the present status of the national nuclear program, program management and legal matters in the nuclear field. A number of papers debate questions related to radiation protection procedures and safety at the Cernavoda NPP, which is based on CANDU type reactors.

  15. An integrated IaaS and PaaS architecture for scientific computing

    OpenAIRE

    Donvito, Giacinto; Blanquer, Ignacio

    2015-01-01

    Scientific applications often require multiple computing resources deployed in a coordinated way. The deployment of multiple resources requires installing and configuring special software applications which should be updated when changes in the virtual infrastructure take place. When working on hybrid and federated cloud environments, restrictions on the hypervisor or cloud management platform must be minimised to facilitate geographic-wide brokering and cross-site deployments. Moreover, prese...

  16. Scientific and computational challenges of the fusion simulation project (FSP)

    International Nuclear Information System (INIS)

    Tang, W M

    2008-01-01

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Project (FSP). The primary objective is to develop advanced software designed to use leadership-class computers for carrying out multiscale physics simulations to provide information vital to delivering a realistic integrated fusion simulation model with unprecedented physics fidelity. This multiphysics capability will be unprecedented in that in the current FES applications domain, the largest-scale codes are used to carry out first-principles simulations of mostly individual phenomena in realistic 3D geometry while the integrated models are much smaller-scale, lower-dimensionality codes with significant empirical elements used for modeling and designing experiments. The FSP is expected to be the most up-to-date embodiment of the theoretical and experimental understanding of magnetically confined thermonuclear plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing a reliable ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices on all relevant time and space scales. From a computational perspective, the fusion energy science application goal to produce high-fidelity, whole-device modeling capabilities will demand computing resources in the petascale range and beyond, together with the associated multicore algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative device involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics

  17. Second Annual AEC Scientific Computer Information Exhange Meeting. Proceedings of the technical program theme: computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Peskin, A.M.; Shimamoto, Y.

    1974-01-01

    The topic of computer graphics serves well to illustrate that AEC-affiliated scientific computing installations are well represented at the forefront of computing science activities. The participant response to the technical program was overwhelming, both in the number of contributions and in the quality of the work described. Session I, entitled Advanced Systems, contains presentations describing systems that contain features not generally found in graphics facilities. These features can be roughly classified as extensions of standard two-dimensional monochromatic imaging to higher dimensions including color and time as well as multidimensional metrics. Session II presents seven diverse applications ranging from high energy physics to medicine. Session III describes a number of important developments in establishing facilities, techniques and enhancements in the computer graphics area. Although an attempt was made to schedule as many of these worthwhile presentations as possible, it appeared impossible to do so given the scheduling constraints of the meeting. A number of prospective presenters 'came to the rescue' by graciously withdrawing from the sessions. Some of their abstracts have been included in the Proceedings.

  18. Position Paper: Applying Machine Learning to Software Analysis to Achieve Trusted, Repeatable Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Prowell, Stacy J [ORNL; Symons, Christopher T [ORNL

    2015-01-01

    Producing trusted results from high-performance codes is essential for policy and has significant economic impact. We propose combining rigorous analytical methods with machine learning techniques to achieve the goal of repeatable, trustworthy scientific computing.

  19. The graphics future in scientific applications-trends and developments in computer graphics

    CERN Document Server

    Enderle, G

    1982-01-01

    Computer graphics methods and tools are being used to a great extent in scientific research. The future development in this area will be influenced both by new hardware developments and by software advances. In the hardware sector, the development of raster technology will lead to the increased use of colour workstations with more local processing power. Colour hardcopy devices for creating plots, slides, or movies will be available at a lower price than today. The first real 3D-workstations will appear on the marketplace. One of the main activities in the software sector is the standardization of computer graphics systems, graphical files, and device interfaces. This will lead to more portable graphical application programs and to a common base for computer graphics education.

  20. Visualization in scientific computing

    National Research Council Canada - National Science Library

    Nielson, Gregory M; Shriver, Bruce D; Rosenblum, Lawrence J

    1990-01-01

    The purpose of this text is to provide a reference source to scientists, engineers, and students who are new to scientific visualization or who are interested in expanding their knowledge in this subject...

  1. Applied and numerical partial differential equations scientific computing in simulation, optimization and control in a multidisciplinary context

    CERN Document Server

    Glowinski, R; Kuznetsov, Y A; Periaux, Jacques; Neittaanmaki, Pekka; Pironneau, Olivier

    2010-01-01

    Standing at the intersection of mathematics and scientific computing, this collection of state-of-the-art papers in nonlinear PDEs examines their applications to subjects as diverse as dynamical systems, computational mechanics, and the mathematics of finance.

  2. Designing scientific applications on GPUs

    CERN Document Server

    Couturier, Raphael

    2013-01-01

    Many of today's complex scientific applications now require a vast amount of computational power. General purpose graphics processing units (GPGPUs) enable researchers in a variety of fields to benefit from the computational power of all the cores available inside graphics cards. Understand the Benefits of Using GPUs for Many Scientific Applications: Designing Scientific Applications on GPUs shows you how to use GPUs for applications in diverse scientific fields, from physics and mathematics to computer science. The book explains the methods necessary for designing or porting your scientific appl

  3. Accelerating the scientific exploration process with scientific workflows

    International Nuclear Information System (INIS)

    Altintas, Ilkay; Barney, Oscar; Cheng, Zhengang; Critchlow, Terence; Ludaescher, Bertram; Parker, Steve; Shoshani, Arie; Vouk, Mladen

    2006-01-01

    Although an increasing amount of middleware has emerged in the last few years to achieve remote data access, distributed job execution, and data management, orchestrating these technologies with minimal overhead still remains a difficult task for scientists. Scientific workflow systems improve this situation by creating interfaces to a variety of technologies and automating the execution and monitoring of the workflows. Workflow systems provide domain-independent customizable interfaces and tools that combine different tools and technologies along with efficient methods for using them. As simulations and experiments move into the petascale regime, the orchestration of long running data and compute intensive tasks is becoming a major requirement for the successful steering and completion of scientific investigations. A scientific workflow is the process of combining data and processes into a configurable, structured set of steps that implement semi-automated computational solutions of a scientific problem. Kepler is a cross-project collaboration, co-founded by the SciDAC Scientific Data Management (SDM) Center, whose purpose is to develop a domain-independent scientific workflow system. It provides a workflow environment in which scientists design and execute scientific workflows by specifying the desired sequence of computational actions and the appropriate data flow, including required data transformations, between these steps. Currently deployed workflows range from local analytical pipelines to distributed, high-performance and high-throughput applications, which can be both data- and compute-intensive. The scientific workflow approach offers a number of advantages over traditional scripting-based approaches, including ease of configuration, improved reusability and maintenance of workflows and components (called actors), automated provenance management, 'smart' re-running of different versions of workflow instances, on-the-fly updateable parameters, monitoring
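
    At its simplest, a scientific workflow in the sense described above is a configurable sequence of steps with explicit data flow between them. The minimal sketch below is an illustrative assumption of ours, not Kepler actors or a deployed pipeline:

      # Hedged sketch of a tiny linear workflow: each step consumes the previous
      # step's output, mirroring the "sequence of computational actions and data
      # flow" described above. Step names and data are purely illustrative.
      def acquire():
          return [1.0, 4.0, 9.0, 16.0]          # stand-in for fetched raw data

      def transform(raw):
          return [x ** 0.5 for x in raw]        # a required data transformation

      def analyze(clean):
          return sum(clean) / len(clean)        # the final analysis step

      result = analyze(transform(acquire()))
      print("workflow result:", result)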

  4. Sign use and cognition in automated scientific discovery: are computers only special kinds of signs?

    Science.gov (United States)

    Giza, Piotr

    2018-04-01

    James Fetzer criticizes the computational paradigm prevailing in cognitive science by questioning what he takes to be its most elementary ingredient: that cognition is computation across representations. He argues that if cognition is taken to be a purposive, meaningful, algorithmic problem-solving activity, then computers are incapable of cognition. Instead, they appear to be signs of a special kind that can facilitate computation. He proposes the conception of minds as semiotic systems as an alternative paradigm for understanding mental phenomena, one that seems to overcome the difficulties of computationalism. Now, I argue that with computer systems dealing with scientific discovery, the matter is not so simple. The alleged superiority of humans, who use signs to stand for something else, over computers, which are merely "physical symbol systems" or "automatic formal systems", is easy to establish in everyday life, but becomes far from obvious when scientific discovery is at stake. In science, as opposed to everyday life, the meaning of symbols is, apart from very low-level experimental investigations, defined implicitly by the way the symbols are used in explanatory theories or experimental laws relevant to the field, and in consequence human and machine discoverers are much more on a par. Moreover, the great practical success of the genetic programming method and recent attempts to apply it to the automatic generation of cognitive theories seem to show that computer systems are capable of very efficient problem-solving activity in science which is neither purposive, nor meaningful, nor algorithmic. This, I think, undermines Fetzer's argument that computer systems are incapable of cognition because computation across representations is bound to be a purposive, meaningful, algorithmic problem-solving activity.

  5. The application of cloud computing to scientific workflows: a study of cost and performance.

    Science.gov (United States)

    Berriman, G Bruce; Deelman, Ewa; Juve, Gideon; Rynge, Mats; Vöckler, Jens-S

    2013-01-28

    The current model of transferring data from data centres to desktops for analysis will soon be rendered impractical by the accelerating growth in the volume of science datasets. Processing will instead often take place on high-performance servers co-located with data. Evaluations of how new technologies such as cloud computing would support such a new distributed computing model are urgently needed. Cloud computing is a new way of purchasing computing and storage resources on demand through virtualization technologies. We report here the results of investigations of the applicability of commercial cloud computing to scientific computing, with an emphasis on astronomy, including investigations of what types of applications can be run cheaply and efficiently on the cloud, and an example of an application well suited to the cloud: processing a large dataset to create a new science product.

  6. Applications of industrial computed tomography at Los Alamos Scientific Laboratory

    International Nuclear Information System (INIS)

    Kruger, R.P.; Morris, R.A.; Wecksung, G.W.

    1980-01-01

    A research and development program was begun three years ago at the Los Alamos Scientific Laboratory (LASL) to study nonmedical applications of computed tomography. This program had several goals. The first goal was to develop the necessary reconstruction algorithms to accurately reconstruct cross sections of nonmedical industrial objects. The second goal was to be able to perform extensive tomographic simulations to determine the efficacy of tomographic reconstruction with a variety of hardware configurations. The final goal was to construct an inexpensive industrial prototype scanner with a high degree of design flexibility. The implementation of these program goals is described

  7. Scientific Services on the Cloud

    Science.gov (United States)

    Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong

    Scientific computing was one of the first ever applications for parallel and distributed computation. To this day, scientific applications remain some of the most compute-intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reasons as businesses and other professionals. The hardware is provided, maintained, and administered by a third party. Software abstraction and virtualization provide reliability and fault tolerance. Graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and are by far the easiest high-performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.

  8. Smolyak's algorithm: A powerful black box for the acceleration of scientific computations

    KAUST Repository

    Tempone, Raul; Wolfers, Soeren

    2017-01-01

    We provide a general discussion of Smolyak's algorithm for the acceleration of scientific computations. The algorithm first appeared in Smolyak's work on multidimensional integration and interpolation. Since then, it has been generalized in multiple directions and has been associated with the keywords: sparse grids, hyperbolic cross approximation, combination technique, and multilevel methods. Variants of Smolyak's algorithm have been employed in the computation of high-dimensional integrals in finance, chemistry, and physics, in the numerical solution of partial and stochastic differential equations, and in uncertainty quantification. Motivated by this broad and ever-increasing range of applications, we describe a general framework that summarizes fundamental results and assumptions in a concise application-independent manner.
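
    As a purely illustrative sketch (not code from the paper), the combination-technique form of Smolyak's algorithm can be written down in a few lines for two dimensions: the sparse rule of level q combines full tensor-product rules whose one-dimensional levels sum to q and to q-1. The function names and the trapezoidal base rule below are assumptions made for the example.

    ```python
    import numpy as np

    def trap_rule(level):
        # 1D trapezoidal rule with 2**level + 1 points on [0, 1]
        n = 2 ** level + 1
        x = np.linspace(0.0, 1.0, n)
        w = np.full(n, 1.0 / (n - 1))
        w[0] *= 0.5
        w[-1] *= 0.5
        return x, w

    def tensor_quad_2d(f, l1, l2):
        # full tensor-product quadrature at 1D levels (l1, l2)
        x1, w1 = trap_rule(l1)
        x2, w2 = trap_rule(l2)
        X1, X2 = np.meshgrid(x1, x2, indexing="ij")
        return float(np.sum(np.outer(w1, w2) * f(X1, X2)))

    def smolyak_quad_2d(f, q):
        # combination technique: tensor rules with |l| = q minus those with |l| = q - 1
        total = 0.0
        for l1 in range(1, q):
            total += tensor_quad_2d(f, l1, q - l1)
        for l1 in range(1, q - 1):
            total -= tensor_quad_2d(f, l1, q - 1 - l1)
        return total

    f = lambda x, y: np.exp(x + y)
    exact = (np.e - 1.0) ** 2
    for q in range(2, 8):
        print(q, abs(smolyak_quad_2d(f, q) - exact))   # error shrinks as the level grows
    ```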

  9. Smolyak's algorithm: A powerful black box for the acceleration of scientific computations

    KAUST Repository

    Tempone, Raul

    2017-03-26

    We provide a general discussion of Smolyak's algorithm for the acceleration of scientific computations. The algorithm first appeared in Smolyak's work on multidimensional integration and interpolation. Since then, it has been generalized in multiple directions and has been associated with the keywords: sparse grids, hyperbolic cross approximation, combination technique, and multilevel methods. Variants of Smolyak's algorithm have been employed in the computation of high-dimensional integrals in finance, chemistry, and physics, in the numerical solution of partial and stochastic differential equations, and in uncertainty quantification. Motivated by this broad and ever-increasing range of applications, we describe a general framework that summarizes fundamental results and assumptions in a concise application-independent manner.

  10. Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities

    International Nuclear Information System (INIS)

    Garzoglio, Gabriele

    2012-01-01

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.

  11. The radar signature of revolution objects in scientific computing

    International Nuclear Information System (INIS)

    Bonnemason, P.; Le Martret, R.; Scheurer, B.; Stupfel, B.

    1990-12-01

    This work is motivated by the study of bodies of revolution that are stealthy (or discreet) with respect to radar. Efficient algorithms, specific numerical methods and two original industrial software packages (SHF 89 and SHF C) have been developed. These are reliable tools for intensive scientific computing. In particular, they have enabled the precise numerical modeling of complex objects of very general shapes in the high-frequency regime, and a thorough understanding of the physics of the problems involved. The purpose of this note is a general description of the work and its context, illustrated by examples of numerical applications (presented in Appendix 4). The technical aspects are detailed in reports and publications (a list is attached to this note). [fr]

  12. Advanced scientific computational methods and their applications to nuclear technologies. (3) Introduction of continuum simulation methods and their applications (3)

    International Nuclear Information System (INIS)

    Satake, Shin-ichi; Kunugi, Tomoaki

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This third installment introduces continuum simulation methods and their applications. Spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)

  13. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics.

    Science.gov (United States)

    1987-10-01

    Instrumentation for scientific computing in neural networks, information science, artificial intelligence, and applied mathematics: an instrumentation grant to purchase equipment in support of research in neural networks, information science, artificial intelligence, and applied mathematics. Contract AFOSR 86-0282. Principal Investigator: Stephen...

  14. Computer simulation, rhetoric, and the scientific imagination how virtual evidence shapes science in the making and in the news

    CERN Document Server

    Roundtree, Aimee Kendall

    2013-01-01

    Computer simulations help advance climatology, astrophysics, and other scientific disciplines. They are also at the crux of several high-profile cases of science in the news. How do simulation scientists, with little or no direct observations, make decisions about what to represent? What is the nature of simulated evidence, and how do we evaluate its strength? Aimee Kendall Roundtree suggests answers in Computer Simulation, Rhetoric, and the Scientific Imagination. She interprets simulations in the sciences by uncovering the argumentative strategies that underpin the production and disseminati

  15. Studying Scientific Discovery by Computer Simulation.

    Science.gov (United States)

    1983-03-30

    Mendel's laws of inheritance, the law of Gay-Lussac for gaseous reactions, the law of Dulong and Petit, the derivation of atomic weights by Avogadro... Keywords: scientific discovery, intrinsic properties, physical laws, extensive terms, data-driven heuristics, intensive terms, theory-driven heuristics, conservation laws. Abstract: Scientific discovery...

  16. Scientific Grand Challenges: Challenges in Climate Change Science and the Role of Computing at the Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.; Johnson, Gary M.; Washington, Warren M.

    2009-07-02

    The U.S. Department of Energy (DOE) Office of Biological and Environmental Research (BER) in partnership with the Office of Advanced Scientific Computing Research (ASCR) held a workshop on the challenges in climate change science and the role of computing at the extreme scale, November 6-7, 2008, in Bethesda, Maryland. At the workshop, participants identified the scientific challenges facing the field of climate science and outlined the research directions of highest priority that should be pursued to meet these challenges. Representatives from the national and international climate change research community as well as representatives from the high-performance computing community attended the workshop. This group represented a broad mix of expertise. Of the 99 participants, 6 were from international institutions. Before the workshop, each of the four panels prepared a white paper, which provided the starting place for the workshop discussions. These four panels of workshop attendees devoted their efforts to the following themes: Model Development and Integrated Assessment; Algorithms and Computational Environment; Decadal Predictability and Prediction; Data, Visualization, and Computing Productivity. The recommendations of the panels are summarized in the body of this report.

  17. Software support for students engaging in scientific activity and scientific controversy

    Science.gov (United States)

    Cavalli-Sforza, Violetta; Weiner, Arlene W.; Lesgold, Alan M.

    Computer environments could support students in engaging in cognitive activities that are essential to scientific practice and to the understanding of the nature of scientific knowledge, but that are difficult to manage in science classrooms. The authors describe a design for a computer-based environment to assist students in conducting dialectical activities of constructing, comparing, and evaluating arguments for competing scientific theories. Their choice of activities and their design respond to educators' and theorists' criticisms of current science curricula. They give detailed specifications of portions of the environment.

  18. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles [Argonne National Lab. (ANL), Argonne, IL (United States); Bell, Greg [ESnet, Berkeley, CA (United States); Canon, Shane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [ESnet, Berkeley, CA (United States); Dattoria, Vince [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Goodwin, Dave [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Lee, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hicks, Susan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Holohan, Ed [Argonne National Lab. (ANL), Argonne, IL (United States); Klasky, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lauzon, Carolyn [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Rogers, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Skinner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tierney, Brian [ESnet, Berkeley, CA (United States)

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  19. Computer application in scientific investigations

    International Nuclear Information System (INIS)

    Govorun, N.N.

    1981-01-01

    A short review of the computer development and application and software in JINR for the last 15 years is presented. Main trends of studies on computer application in experimental and theoretical investigations are enumerated: software of computers and their systems, software of data processing systems, designing automatic and automized systems for measuring track detectors images, development of technique of carrying out experiments on computer line, packets of applied computer codes and specialized systems. The development of the on line technique is successfully used in investigations of nuclear processes at relativistic energies. The new trend is the development of television methods of data output and its computer recording [ru

  20. PARA'04 Workshop on State-of-the-art in Scientific Computing, June 20-23, 2004: Complementary Proceedings

    DEFF Research Database (Denmark)

    Dongarra, Jack; Madsen, Kaj; Wasniewski, Jerzy

    2004-01-01

    The PARA workshops in the past have been devoted to parallel computing methods in science and technology. There have been seven PARA meetings to date: PARA'94, PARA'95 and PARA'96 in Lyngby, Denmark, PARA'98 in Umeå, Sweden, PARA'2000 in Bergen, Norway, PARA'02 in Espoo, Finland, and PARA'04 again in Lyngby, Denmark, June 20-23, 2004. The PARA'04 Workshop was organized by Jack Dongarra from the University of Tennessee and Oak Ridge National Laboratory, and Kaj Madsen and Jerzy Wasniewski from the Technical University of Denmark. The emphasis here was shifted to High-Performance Computing (HPC). The ongoing development of ever more advanced computers provides the potential for solving increasingly difficult computational problems. However, given the complexity of modern computer architectures, the task of realizing this potential needs careful attention. For example, the failure...

  1. N286.7-99, A Canadian standard specifying software quality management system requirements for analytical, scientific, and design computer programs and its implementation at AECL

    International Nuclear Information System (INIS)

    Abel, R.

    2000-01-01

    Analytical, scientific, and design computer programs (referred to in this paper as 'scientific computer programs') are developed for use in a large number of ways by the user-engineer to support and prove engineering calculations and assumptions. These computer programs are subject to frequent modifications inherent in their application and are often used for critical calculations and analysis relative to safety and functionality of equipment and systems. N286.7-99(4) was developed to establish appropriate quality management system requirements to deal with the development, modification, and application of scientific computer programs. N286.7-99 provides particular guidance regarding the treatment of legacy codes

  2. An Analysis on the Effect of Computer Self-Efficacy over Scientific Research Self-Efficacy and Information Literacy Self-Efficacy

    Science.gov (United States)

    Tuncer, Murat

    2013-01-01

    The present research investigates reciprocal relations among computer self-efficacy, scientific research self-efficacy, and information literacy self-efficacy. Research findings have demonstrated that, according to standardized regression coefficients, computer self-efficacy has a positive effect on information literacy self-efficacy. Likewise it has been detected…

  3. DB90: A Fortran Callable Relational Database Routine for Scientific and Engineering Computer Programs

    Science.gov (United States)

    Wrenn, Gregory A.

    2005-01-01

    This report describes a database routine called DB90 which is intended for use with scientific and engineering computer programs. The software is written in the Fortran 90/95 programming language standard with file input and output routines written in the C programming language. These routines should be completely portable to any computing platform and operating system that has Fortran 90/95 and C compilers. DB90 allows a program to supply relation names and up to 5 integer key values to uniquely identify each record of each relation. This permits the user to select records or retrieve data in any desired order.
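
    DB90 itself is a Fortran 90/95 library and its actual API is not reproduced here; the hypothetical Python sketch below only illustrates the addressing scheme described above, in which each record is identified by a relation name together with up to five integer key values. The class, method, and relation names are invented for the example.

    ```python
    class KeyedRelationStore:
        """Toy model of the addressing scheme: relation name plus up to 5 integer keys."""

        MAX_KEYS = 5

        def __init__(self):
            self._records = {}

        def put(self, relation, keys, record):
            keys = tuple(int(k) for k in keys)
            if len(keys) > self.MAX_KEYS:
                raise ValueError("at most 5 integer key values per record")
            self._records[(relation, keys)] = record

        def get(self, relation, keys):
            return self._records[(relation, tuple(int(k) for k in keys))]

    # hypothetical usage: a 'loads' relation keyed by (load case, node id)
    db = KeyedRelationStore()
    db.put("loads", (1, 3), {"fx": 120.0, "fy": -35.5})
    print(db.get("loads", (1, 3)))
    ```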

  4. On the Performance of the Python Programming Language for Serial and Parallel Scientific Computations

    Directory of Open Access Journals (Sweden)

    Xing Cai

    2005-01-01

    Full Text Available This article addresses the performance of scientific applications that use the Python programming language. First, we investigate several techniques for improving the computational efficiency of serial Python codes. Then, we discuss the basic programming techniques in Python for parallelizing serial scientific applications. It is shown that an efficient implementation of the array-related operations is essential for achieving good parallel performance, as for the serial case. Once the array-related operations are efficiently implemented, probably using a mixed-language implementation, good serial and parallel performance become achievable. This is confirmed by a set of numerical experiments. Python is also shown to be well suited for writing high-level parallel programs.
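
    The importance of efficient array-related operations noted above can be illustrated with a small, hypothetical benchmark (not taken from the article): the same three-point stencil written as an interpreted Python loop and as NumPy slice arithmetic, where the slice version pushes the inner loop into compiled code.

    ```python
    import time
    import numpy as np

    def loop_update(u):
        # pure-Python loop: every element update goes through the interpreter
        v = u.copy()
        for i in range(1, len(u) - 1):
            v[i] = 0.5 * (u[i - 1] + u[i + 1])
        return v

    def sliced_update(u):
        # the same three-point stencil as whole-array slice arithmetic
        v = u.copy()
        v[1:-1] = 0.5 * (u[:-2] + u[2:])
        return v

    u = np.random.rand(1_000_000)

    t0 = time.perf_counter()
    a = loop_update(u)
    t1 = time.perf_counter()
    b = sliced_update(u)
    t2 = time.perf_counter()

    print(np.allclose(a, b))
    print(f"loop: {t1 - t0:.3f} s   sliced: {t2 - t1:.4f} s")
    ```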

  5. STS-98 U.S. Lab Destiny rests in Atlantis' payload bay

    Science.gov (United States)

    2001-01-01

    KENNEDY SPACE CENTER, Fla. -- The U.S. Lab Destiny rests in the payload bay of Space Shuttle Atlantis. A key element in the construction of the International Space Station, Destiny is 28 feet long and weighs 16 tons. This research and command-and-control center is the most sophisticated and versatile space laboratory ever built. It will ultimately house a total of 23 experiment racks for crew support and scientific research. Destiny will fly on STS-98, the seventh construction flight to the ISS. Launch of STS-98 is scheduled for Jan. 19 at 2:11 a.m. EST.

  6. SCEE 2008 book of abstracts. The 7. international conference on scientific computing in electrical engineering (SCEE 2008)

    Energy Technology Data Exchange (ETDEWEB)

    Roos, J.; Costa, L.R.J. (ed.)

    2008-09-15

    SCEE is an international conference series dedicated to Scientific Computing in Electrical Engineering. The 7th International Conference on Scientific Computing in Electrical Engineering (SCEE 2008) in Espoo, Finland, is organized by the Helsinki University of Technology (TKK); Faculty of Electronics, Communications and Automation (ECA); Department of Radio Science and Engineering (RAD); Circuit Theory Group. (SCEE 2008 web site: http://www.ct.tkk.fi/scee2008/). The aim of the SCEE 2008 conference is to bring together scientists from academia and industry with the goal of intensive discussions on modeling and numerical simulation of electronic circuits and of electromagnetic fields. The conference is mainly directed towards mathematicians and electrical engineers. The SCEE 2008 conference has the following four main topics: 1. Computational Electromagnetics (CE), 2. Circuit Simulation (CS), 3. Coupled Problems (CP), 4. Mathematical and Computational Methods (CM). The selection of abstracts in this book was carried out by the Program Committee; each abstract was reviewed by two or three reviewers. The authors of all accepted abstracts were invited to submit an extended full paper, which will be reviewed as well. The accepted full papers will later on be published in a separate post-conference book

  7. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2015-01-01

    Full Text Available The aim of this study is to present an approach to introducing pipeline and parallel computing using a model of a multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is an essential topic to be included. At the same time, the topic is among the most motivating tasks due to its comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. It allows implementing an educational platform for a constructivist learning process, enabling learners’ experimentation with the provided programming models, building learners’ competences in modern scientific research and computational thinking, and capturing the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The C programming language was chosen for developing the programming models, with the Message Passing Interface (MPI) and OpenMP as parallelization tools.
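
    The course material itself is in C with MPI and OpenMP and is not reproduced here; the hypothetical Python sketch below only illustrates the underlying pipeline idea: each stage acts like one service phase of a multiphase queue, consuming items from an upstream queue and passing results downstream, so that successive items are processed concurrently.

    ```python
    import multiprocessing as mp

    def stage(in_q, out_q, func):
        # one service phase: take an item from the upstream queue, process it,
        # and pass the result downstream; None is the shutdown sentinel
        while True:
            item = in_q.get()
            if item is None:
                out_q.put(None)
                break
            out_q.put(func(item))

    def square(x):
        return x * x

    def negate(x):
        return -x

    if __name__ == "__main__":
        q0, q1, q2 = mp.Queue(), mp.Queue(), mp.Queue()
        phases = [mp.Process(target=stage, args=(q0, q1, square)),
                  mp.Process(target=stage, args=(q1, q2, negate))]
        for p in phases:
            p.start()
        for x in range(5):
            q0.put(x)          # items enter the first phase of the pipeline
        q0.put(None)
        results = []
        while (r := q2.get()) is not None:
            results.append(r)
        for p in phases:
            p.join()
        print(results)         # [0, -1, -4, -9, -16]
    ```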

  8. A data management system for engineering and scientific computing

    Science.gov (United States)

    Elliot, L.; Kunii, H. S.; Browne, J. C.

    1978-01-01

    Data elements and relationship definition capabilities for this data management system are explicitly tailored to the needs of engineering and scientific computing. System design was based upon studies of data management problems currently being handled through explicit programming. The system-defined data element types include real scalar numbers, vectors, arrays and special classes of arrays such as sparse arrays and triangular arrays. The data model is hierarchical (tree structured). Multiple views of data are provided at two levels. Subschemas provide multiple structural views of the total data base and multiple mappings for individual record types are supported through the use of a REDEFINES capability. The data definition language and the data manipulation language are designed as extensions to FORTRAN. Examples of the coding of real problems taken from existing practice in the data definition language and the data manipulation language are given.

  9. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  10. Computer Series, 98. Electronics for Scientists: A Computer-Intensive Approach.

    Science.gov (United States)

    Scheeline, Alexander; Mork, Brian J.

    1988-01-01

    Reports the design for a principles-before-details presentation of electronics for an instrumental analysis class. Uses computers for data collection and simulations. Requires one semester with two 2.5-hour periods and two lectures per week. Includes lab and lecture syllabi. (MVL)

  11. The Goal Specificity Effect on Strategy Use and Instructional Efficiency during Computer-Based Scientific Discovery Learning

    Science.gov (United States)

    Kunsting, Josef; Wirth, Joachim; Paas, Fred

    2011-01-01

    Using a computer-based scientific discovery learning environment on buoyancy in fluids we investigated the "effects of goal specificity" (nonspecific goals vs. specific goals) for two goal types (problem solving goals vs. learning goals) on "strategy use" and "instructional efficiency". Our empirical findings close an important research gap,…

  12. Availability measurement of grid services from the perspective of a scientific computing centre

    International Nuclear Information System (INIS)

    Marten, H; Koenig, T

    2011-01-01

    The Karlsruhe Institute of Technology (KIT) is the merger of Forschungszentrum Karlsruhe and the Technical University Karlsruhe. The Steinbuch Centre for Computing (SCC) was one of the first new organizational units of KIT, combining the former Institute for Scientific Computing of Forschungszentrum Karlsruhe and the Computing Centre of the University. IT service management according to the worldwide de-facto-standard 'IT Infrastructure Library (ITIL)' was chosen by SCC as a strategic element to support the merging of the two existing computing centres located at a distance of about 10 km. The availability and reliability of IT services directly influence the customer satisfaction as well as the reputation of the service provider, and unscheduled loss of availability due to hardware or software failures may even result in severe consequences like data loss. Fault tolerant and error correcting design features are reducing the risk of IT component failures and help to improve the delivered availability. The ITIL process controlling the respective design is called Availability Management. This paper discusses Availability Management regarding grid services delivered to WLCG and provides a few elementary guidelines for availability measurements and calculations of services consisting of arbitrary numbers of components.
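
    The paper's own measurement guidelines are not reproduced here; the sketch below only illustrates the textbook composition rules commonly used when calculating the availability of a service built from several components (components in series must all be up, redundant components in parallel fail only if all of them fail). The service layout and the availability figures in the example are hypothetical.

    ```python
    def serial_availability(components):
        # a chain of dependencies: every component must be up for the service to be up
        a = 1.0
        for ai in components:
            a *= ai
        return a

    def parallel_availability(components):
        # redundant replicas: the service is down only if all replicas are down
        downtime = 1.0
        for ai in components:
            downtime *= (1.0 - ai)
        return 1.0 - downtime

    # hypothetical layout: two redundant front ends (99% each), both depending on
    # one storage element (99.5%) and one network link (99.9%)
    frontends = parallel_availability([0.99, 0.99])
    service = serial_availability([frontends, 0.995, 0.999])
    print(f"estimated service availability: {service:.4%}")
    ```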

  13. Function of Nup98 subtypes and their fusion proteins, Nup98-TopIIβ and Nup98-SETBP1 in nuclear-cytoplasmic transport.

    Science.gov (United States)

    Saito, Shoko; Yokokawa, Takafumi; Iizuka, Gemmei; Cigdem, Sadik; Okuwaki, Mitsuru; Nagata, Kyosuke

    2017-05-20

    Nup98 is a component of the nuclear pore complex. The nup98-fusion genes derived by chromosome translocations are involved in hematopoietic malignancies. Here, we investigated the functions of Nup98 isoforms and two unexamined Nup98-fusion proteins, Nup98-TopIIβ and Nup98-SETBP1. We first demonstrated that two Nup98 isoforms are expressed in various mouse tissues and similarly localized in the nucleus and the nuclear envelope. We also showed that Nup98-TopIIβ and Nup98-SETBP1 are localized in the nucleus and partially co-localized with full-length Nup98 and a nuclear export receptor XPO1. We demonstrated that Nup98-TopIIβ and Nup98-SETBP1 negatively regulate the XPO1-mediated protein export. Our results will contribute to the understanding of the molecular mechanism by which the Nup98-fusion proteins induce tumorigenesis. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Damevski, Kostadin [Virginia State Univ., Petersburg, VA (United States)

    2009-03-30

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  15. STS-98 Destiny in Atlantis's payload bay

    Science.gov (United States)

    2001-01-01

    KENNEDY SPACE CENTER, Fla. -- The U.S. Laboratory Destiny rests once again in Atlantis's payload bay, at Launch Pad 39A. Closing of the payload bay doors is imminent. Destiny, a key element in the construction of the International Space Station, is 28 feet long and weighs 16 tons. This research and command-and-control center is the most sophisticated and versatile space laboratory ever built. It will ultimately house a total of 23 experiment racks for crew support and scientific research. Destiny will be launched Feb. 7 on STS-98, the seventh construction flight to the ISS.

  16. Amplify scientific discovery with artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Gil, Yolanda; Greaves, Mark T.; Hendler, James; Hirsch, Hyam

    2014-10-10

    Computing innovations have fundamentally changed many aspects of scientific inquiry. For example, advances in robotics, high-end computing, networking, and databases now underlie much of what we do in science such as gene sequencing, general number crunching, sharing information between scientists, and analyzing large amounts of data. As computing has evolved at a rapid pace, so too has its impact in science, with the most recent computing innovations repeatedly being brought to bear to facilitate new forms of inquiry. Recently, advances in Artificial Intelligence (AI) have deeply penetrated many consumer sectors, including for example Apple’s Siri™ speech recognition system, real-time automated language translation services, and a new generation of self-driving cars and self-navigating drones. However, AI has yet to achieve comparable levels of penetration in scientific inquiry, despite its tremendous potential in aiding computers to help scientists tackle tasks that require scientific reasoning. We contend that advances in AI will transform the practice of science as we are increasingly able to effectively and jointly harness human and machine intelligence in the pursuit of major scientific challenges.

  17. ETRAN '98: Society for Electronics,Telecommunications, Computers, Automation and Nuclear Engineering. Section for Nuclear Techniques and Technology. Proceedings of the XLII Conference. Vol IV

    International Nuclear Information System (INIS)

    Matausek, M.

    1998-01-01

    This fourth volume includes 29 papers presented at the sessions on nuclear science and technology of the ETRAN '98 conference. The topics covered were related to solving the problems of spent fuel storage at the RA reactor, fuel management calculations, computation of neutron and photon spectra at the RA and RB reactors, applications of radiation sources in medicine and industry, and radiation protection issues

  18. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    Science.gov (United States)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  19. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Jayatilaka, B. [Fermilab; Levshina, T. [Fermilab; Sehgal, C. [Fermilab; Gardner, R. [Chicago U.; Rynge, M. [USC - ISI, Marina del Rey; Würthwein, F. [UC, San Diego

    2017-11-22

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  20. Good enough practices in scientific computing.

    Science.gov (United States)

    Wilson, Greg; Bryan, Jennifer; Cranston, Karen; Kitzes, Justin; Nederbragt, Lex; Teal, Tracy K

    2017-06-01

    Computers are now essential in all branches of science, but most researchers are never taught the equivalent of basic lab skills for research computing. As a result, data can get lost, analyses can take much longer than necessary, and researchers are limited in how effectively they can work with software and data. Computing workflows need to follow the same practices as lab projects and notebooks, with organized data, documented steps, and the project structured for reproducibility, but researchers new to computing often don't know where to start. This paper presents a set of good computing practices that every researcher can adopt, regardless of their current level of computational skill. These practices, which encompass data management, programming, collaborating with colleagues, organizing projects, tracking work, and writing manuscripts, are drawn from a wide variety of published sources from our daily lives and from our work with volunteer organizations that have delivered workshops to over 11,000 people since 2010.

  1. PS3 CELL Development for Scientific Computation and Research

    Science.gov (United States)

    Christiansen, M.; Sevre, E.; Wang, S. M.; Yuen, D. A.; Liu, S.; Lyness, M. D.; Broten, M.

    2007-12-01

    The Cell processor is one of the most powerful processors on the market, and researchers in the earth sciences may find its parallel architecture to be very useful. A Cell processor, with 7 cores, can easily be obtained for experimentation by purchasing a PlayStation 3 (PS3) and installing Linux and the IBM SDK. Each core of the PS3 is capable of 25 GFLOPS, giving a potential limit of 150 GFLOPS when using all 6 SPUs (synergistic processing units) with vectorized algorithms. We have used the Cell's computational power to create a program which takes simulated tsunami datasets, parses them, and returns a colorized height field image using ray casting techniques. As expected, the time required to create an image is inversely proportional to the number of SPUs used. We believe that this trend will continue when multiple PS3s are chained using OpenMP functionality and are in the process of researching this. By using the Cell to visualize tsunami data, we have found that its greatest feature is its power. This fact entwines well with the needs of the scientific community, where the limiting factor is time. Any algorithm, such as the heat equation, that can be subdivided into multiple parts can take advantage of the PS3 Cell's ability to split the computations across the 6 SPUs, reducing the required run time to one sixth. Further vectorization of the code can allow for 4 simultaneous floating point operations by using the SIMD (single instruction multiple data) capabilities of the SPU, increasing efficiency 24 times.
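
    The authors' Cell/SPU code is not part of the record; as a language-neutral illustration of the decomposition idea mentioned above (splitting a stencil update such as the heat equation into independent chunks, one per SPU), here is a hypothetical Python multiprocessing sketch with six workers standing in for the six SPUs and one ghost cell per chunk boundary.

    ```python
    import numpy as np
    from multiprocessing import Pool

    ALPHA, DX, DT = 1.0e-4, 1.0e-2, 1.0e-1
    N_CHUNKS = 6  # mirrors the six SPUs mentioned above

    def update_chunk(args):
        # explicit finite-difference update on one chunk, padded with one ghost
        # cell on each side so the stencil at the chunk edges is correct
        left_ghost, chunk, right_ghost = args
        padded = np.concatenate(([left_ghost], chunk, [right_ghost]))
        return chunk + ALPHA * DT / DX**2 * (padded[2:] - 2 * chunk + padded[:-2])

    def heat_step(u, pool):
        chunks = np.array_split(u[1:-1], N_CHUNKS)
        tasks, pos = [], 1
        for c in chunks:
            tasks.append((u[pos - 1], c, u[pos + len(c)]))
            pos += len(c)
        out = u.copy()
        out[1:-1] = np.concatenate(pool.map(update_chunk, tasks))
        return out

    if __name__ == "__main__":
        u = np.zeros(601)
        u[300] = 1.0                      # initial heat spike
        with Pool(N_CHUNKS) as pool:
            for _ in range(100):
                u = heat_step(u, pool)
        print(u.sum())                    # total heat is (approximately) conserved
    ```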

  2. Scientific-creative thinking and academic achievement

    Directory of Open Access Journals (Sweden)

    Rosario Bermejo

    2014-07-01

    Full Text Available The aim of this work is to study the relationship between the scientific-creative thinking construct and academic performance in a sample of adolescents. In addition, the reliability of the scientific-creative thinking instrument is tested. The sample was composed of 98 students (aged between 12 and 16 years old) attending a secondary school in the Murcia Region (Spain). The instruments used were: (a) the Scientific-Creative Thinking Test designed by Hu and Adey (2002), which was adapted to the Spanish culture by the High Abilities research team at Murcia University; the test is composed of 7 tasks based on the Scientific Creative Structure Model and assesses the dimensions of fluency, flexibility and originality; (b) the General and Factorial Intelligence Test (IGF/5r; Yuste, 2002), which assesses general intelligence as well as logical, verbal, numerical and spatial reasoning; and (c) students' academic achievement by domain (scientific-technological, social-linguistic and artistic). The results showed positive and statistically significant correlations between the scientific-creative tasks and academic achievement in the different domains.

  3. Using Cloud-Computing Applications to Support Collaborative Scientific Inquiry: Examining Pre-Service Teachers' Perceived Barriers to Integration

    Science.gov (United States)

    Donna, Joel D.; Miller, Brant G.

    2013-01-01

    Technology plays a crucial role in facilitating collaboration within the scientific community. Cloud-computing applications, such as Google Drive, can be used to model such collaboration and support inquiry within the secondary science classroom. Little is known about pre-service teachers' beliefs related to the envisioned use of collaborative,…

  4. Computer-supported analysis of scientific measurements

    NARCIS (Netherlands)

    de Jong, Hidde

    1998-01-01

    In the past decade, large-scale databases and knowledge bases have become available to researchers working in a range of scientific disciplines. In many cases these databases and knowledge bases contain measurements of properties of physical objects which have been obtained in experiments or at

  5. Certification of version 1.2 of the PORFLO-3 code for the WHC scientific and engineering computational center

    International Nuclear Information System (INIS)

    Kline, N.W.

    1994-01-01

    Version 1.2 of the PORFLO-3 Code has migrated from the Hanford Cray computer to workstations in the WHC Scientific and Engineering Computational Center. The workstation-based configuration and acceptance testing are inherited from the CRAY-based configuration. The purpose of this report is to document differences in the new configuration as compared to the parent Cray configuration, and summarize some of the acceptance test results which have shown that the migrated code is functioning correctly in the new environment

  6. Impact of configuration management system of computer center on support of scientific projects throughout their lifecycle

    International Nuclear Information System (INIS)

    Bogdanov, A.V.; Yuzhanin, N.V.; Zolotarev, V.I.; Ezhakova, T.R.

    2017-01-01

    In this article the problem of supporting scientific projects throughout their lifecycle in a computer center is considered in every aspect of support. The Configuration Management system plays a connecting role in processes related to the provision and support of the services of a computer center. In view of the strong integration of IT infrastructure components through virtualization, control of the infrastructure becomes even more critical to the support of research projects, which means higher requirements for the Configuration Management system. For every aspect of research project support, the influence of the Configuration Management system is reviewed and the development of the corresponding elements of the system is described in the present paper.

  7. I - Template Metaprogramming for Massively Parallel Scientific Computing - Expression Templates

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Large scale scientific computing raises questions on different levels, ranging from the formulation of the problems to the choice of the best algorithms and their implementation for a specific platform. There are similarities in these different topics that can be exploited by modern-style C++ template metaprogramming techniques to produce readable, maintainable and generic code. Traditional low-level code tends to be fast but platform-dependent, and it obfuscates the meaning of the algorithm. On the other hand, the object-oriented approach is nice to read, but may come with an inherent performance penalty. These lectures aim to present the basics of the Expression Template (ET) idiom, which allows us to keep the object-oriented approach without sacrificing performance. We will in particular show how to enhance ET to include SIMD vectorization. We will then introduce techniques for abstracting iteration, and introduce thread-level parallelism for use in heavy data-centric loads. We will show how to apply these methods i...
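
    The lectures concern C++, and no C++ code is reproduced here; purely to illustrate the deferred-evaluation idea behind expression templates, the hypothetical Python sketch below lets a + b + c build an expression tree that is evaluated element by element in a single pass, without intermediate vectors. In Python this mirrors only the structure of the idiom, not its performance benefit.

    ```python
    import numpy as np

    class Expr:
        """Deferred element-wise expression (a Python stand-in for the C++ ET idiom)."""
        def __add__(self, other):
            return Add(self, other)

    class Vec(Expr):
        def __init__(self, data):
            self.data = np.asarray(data, dtype=float)
        def __getitem__(self, i):
            return self.data[i]
        def __len__(self):
            return len(self.data)

    class Add(Expr):
        def __init__(self, a, b):
            self.a, self.b = a, b
        def __getitem__(self, i):
            return self.a[i] + self.b[i]   # evaluated lazily, element by element
        def __len__(self):
            return len(self.a)

    def evaluate(expr):
        # one pass over the fused expression tree; no intermediate vectors are built
        return np.array([expr[i] for i in range(len(expr))])

    a, b, c = Vec([1, 2, 3]), Vec([4, 5, 6]), Vec([7, 8, 9])
    tree = a + b + c          # builds Add(Add(a, b), c); nothing is computed yet
    print(evaluate(tree))     # [12. 15. 18.]
    ```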

  8. The Centre of High-Performance Scientific Computing, Geoverbund, ABC/J - Geosciences enabled by HPSC

    Science.gov (United States)

    Kollet, Stefan; Görgen, Klaus; Vereecken, Harry; Gasper, Fabian; Hendricks-Franssen, Harrie-Jan; Keune, Jessica; Kulkarni, Ketan; Kurtz, Wolfgang; Sharples, Wendy; Shrestha, Prabhakar; Simmer, Clemens; Sulis, Mauro; Vanderborght, Jan

    2016-04-01

    The Centre of High-Performance Scientific Computing (HPSC TerrSys) was founded in 2011 to establish a centre of competence in high-performance scientific computing in terrestrial systems and the geosciences, enabling fundamental and applied geoscientific research in the Geoverbund ABC/J (the geoscientific research alliance of the Universities of Aachen, Cologne, Bonn and the Research Centre Jülich, Germany). The specific goals of HPSC TerrSys are to achieve relevance at the national and international level in (i) the development and application of HPSC technologies in the geoscientific community; (ii) student education; (iii) HPSC services and support also to the wider geoscientific community; and in (iv) the industry and public sectors via e.g., useful applications and data products. A key feature of HPSC TerrSys is the Simulation Laboratory Terrestrial Systems, which is located at the Jülich Supercomputing Centre (JSC) and provides extensive capabilities with respect to porting, profiling, tuning and performance monitoring of geoscientific software in JSC's supercomputing environment. We will present a summary of success stories of HPSC applications including integrated terrestrial model development, parallel profiling and its application from watersheds to the continent; massively parallel data assimilation using physics-based models and ensemble methods; quasi-operational terrestrial water and energy monitoring; and convection-permitting climate simulations over Europe. The success stories stress the need for a formalized education of students in the application of HPSC technologies in the future.

  9. Sudden Cardiac Risk Stratification with Electrocardiographic Indices - A Review on Computational Processing, Technology Transfer, and Scientific Evidence

    Directory of Open Access Journals (Sweden)

    Francisco Javier eGimeno-Blanes

    2016-03-01

    Full Text Available Great effort has been devoted in recent years to the development of sudden cardiac risk predictors as a function of electric cardiac signals, mainly obtained from electrocardiogram (ECG) analysis. But these prediction techniques are still seldom used in clinical practice, partly due to their limited diagnostic accuracy and to the lack of consensus about the appropriate computational signal processing implementation. This paper takes a three-fold approach, based on ECG indexes, to structure this review on sudden cardiac risk stratification: first, through the computational techniques that have been widely proposed for obtaining these indexes in the technical literature; second, through the scientific evidence, which, although supported by observational clinical studies, is not always representative enough; and third, through the limited technology transfer of academia-accepted algorithms, which requires further consideration for future systems. We focus on three families of ECG-derived indexes that are tackled from the aforementioned viewpoints, namely heart rate turbulence, heart rate variability, and T-wave alternans. In terms of computational algorithms, we still need clearer scientific evidence, standardization, and benchmarking, resting on advanced algorithms applied over large and representative datasets. New scenarios such as electronic health records, big data, long-term monitoring, and cloud databases will eventually open new frameworks in which to foresee suitable new paradigms in the near future.
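
    The review's algorithms are not reproduced here; as a small self-contained illustration of one of the index families it covers, the standard time-domain heart rate variability measures can be computed from an RR-interval series as follows. The function name and the sample RR values are made up for the example.

    ```python
    import numpy as np

    def hrv_time_domain(rr_ms):
        """Basic time-domain heart rate variability indices from RR intervals (in ms)."""
        rr = np.asarray(rr_ms, dtype=float)
        diffs = np.diff(rr)
        sdnn = np.std(rr, ddof=1)                     # overall variability
        rmssd = np.sqrt(np.mean(diffs ** 2))          # short-term, beat-to-beat variability
        pnn50 = np.mean(np.abs(diffs) > 50) * 100.0   # % of successive differences > 50 ms
        return {"SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50}

    rr = [812, 800, 795, 820, 835, 810, 790, 805]     # hypothetical RR series
    print(hrv_time_domain(rr))
    ```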

  10. Frontiers of massively parallel scientific computation

    International Nuclear Information System (INIS)

    Fischer, J.R.

    1987-07-01

    Practical applications using massively parallel computer hardware first appeared during the 1980s. Their development was motivated by the need for computing power orders of magnitude beyond that available today for tasks such as numerical simulation of complex physical and biological processes, generation of interactive visual displays, satellite image analysis, and knowledge based systems. Representative of the first generation of this new class of computers is the Massively Parallel Processor (MPP). A team of scientists was provided the opportunity to test and implement their algorithms on the MPP. The first results are presented. The research spans a broad variety of applications including Earth sciences, physics, signal and image processing, computer science, and graphics. The performance of the MPP was very good. Results obtained using the Connection Machine and the Distributed Array Processor (DAP) are presented

  11. Accelerating scientific discovery : 2007 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Beckman, P.; Dave, P.; Drugan, C.

    2008-11-14

    As a gateway for scientific discovery, the Argonne Leadership Computing Facility (ALCF) works hand in hand with the world's best computational scientists to advance research in a diverse span of scientific domains, ranging from chemistry, applied mathematics, and materials science to engineering physics and life sciences. Sponsored by the U.S. Department of Energy's (DOE) Office of Science, researchers are using the IBM Blue Gene/L supercomputer at the ALCF to study and explore key scientific problems that underlie important challenges facing our society. For instance, a research team at the University of California-San Diego/SDSC is studying the molecular basis of Parkinson's disease. The researchers plan to use the knowledge they gain to discover new drugs to treat the disease and to identify risk factors for other diseases that are equally prevalent. Likewise, scientists from Pratt & Whitney are using the Blue Gene to understand the complex processes within aircraft engines. Expanding our understanding of jet engine combustors is the secret to improved fuel efficiency and reduced emissions. Lessons learned from the scientific simulations of jet engine combustors have already led Pratt & Whitney to newer designs with unprecedented reductions in emissions, noise, and cost of ownership. ALCF staff members provide in-depth expertise and assistance to those using the Blue Gene/L and optimizing user applications. Both the Catalyst and Applications Performance Engineering and Data Analytics (APEDA) teams support the users' projects. In addition to working with scientists running experiments on the Blue Gene/L, we have become a nexus for the broader global community. In partnership with the Mathematics and Computer Science Division at Argonne National Laboratory, we have created an environment where the world's most challenging computational science problems can be addressed. Our expertise in high-end scientific computing enables us to provide

  12. Impact of configuration management system of computer center on support of scientific projects throughout their lifecycle

    Science.gov (United States)

    Bogdanov, A. V.; Iuzhanin, N. V.; Zolotarev, V. I.; Ezhakova, T. R.

    2017-12-01

    In this article the problem of supporting scientific projects throughout their lifecycle in a computer center is considered in every aspect of support. The Configuration Management system plays a connecting role in processes related to the provision and support of the services of a computer center. In view of the strong integration of IT infrastructure components through virtualization, control of the infrastructure becomes even more critical to the support of research projects, which means higher requirements for the Configuration Management system. For every aspect of research project support, the influence of the Configuration Management system is reviewed and the development of the corresponding elements of the system is described in the present paper.

  13. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Kostadin, Damevski [Virginia State Univ., Petersburg, VA (United States)

    2015-01-25

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  14. Bridge over troubled water - valuing Russia's scientific landscape

    Czech Academy of Sciences Publication Activity Database

    Büntgen, Ulf

    2016-01-01

    Roč. 70, č. 1 (2016), s. 95-98 ISSN 0936-577X R&D Projects: GA MŠk(CZ) LO1415 Institutional support: RVO:67179843 Keywords : Environmental change * Brain drain * Interdisciplinary research * Paleo-archives * Russia * Scientific collaboration Subject RIV: EH - Ecology, Behaviour Impact factor: 1.578, year: 2016

  15. Scientific Grand Challenges: Discovery In Basic Energy Sciences: The Role of Computing at the Extreme Scale - August 13-15, 2009, Washington, D.C.

    Energy Technology Data Exchange (ETDEWEB)

    Galli, Giulia [Univ. of California, Davis, CA (United States). Workshop Chair; Dunning, Thom [Univ. of Illinois, Urbana, IL (United States). Workshop Chair

    2009-08-13

    The U.S. Department of Energy’s (DOE) Office of Basic Energy Sciences (BES) and Office of Advanced Scientific Computing Research (ASCR) workshop in August 2009 on extreme-scale computing provided a forum for more than 130 researchers to explore the needs and opportunities that will arise due to expected dramatic advances in computing power over the next decade. This scientific community firmly believes that the development of advanced theoretical tools within chemistry, physics, and materials science—combined with the development of efficient computational techniques and algorithms—has the potential to revolutionize the discovery process for materials and molecules with desirable properties. Doing so is necessary to meet the energy and environmental challenges of the 21st century as described in various DOE BES Basic Research Needs reports. Furthermore, computational modeling and simulation are a crucial complement to experimental studies, particularly when quantum mechanical processes controlling energy production, transformations, and storage are not directly observable and/or controllable. Many processes related to the Earth’s climate and subsurface need better modeling capabilities at the molecular level, which will be enabled by extreme-scale computing.

  16. Berkeley Lab Computing Sciences: Accelerating Scientific Discovery

    International Nuclear Information System (INIS)

    Hules, John A.

    2008-01-01

    Scientists today rely on advances in computer science, mathematics, and computational science, as well as large-scale computing and networking facilities, to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab's Computing Sciences organization researches, develops, and deploys new tools and technologies to meet these needs and to advance research in such areas as global climate change, combustion, fusion energy, nanotechnology, biology, and astrophysics

  17. Toward executable scientific publications

    NARCIS (Netherlands)

    Strijkers, R.J.; Cushing, R.; Vasyunin, D.; Laat, C. de; Belloum, A.S.Z.; Meijer, R.J.

    2011-01-01

    Reproducibility of experiments is considered as one of the main principles of the scientific method. Recent developments in data and computation intensive science, i.e. e-Science, and state of the art in Cloud computing provide the necessary components to preserve data sets and re-run code and

  18. Panorama 98. 16 thematic files

    International Nuclear Information System (INIS)

    Anon.

    1998-01-01

    Under the 'Panorama 98' title, the Institut Francais du Petrole (IFP) has organized an international colloquium on the most recent events and the foreseeable perspectives in the domain of hydrocarbons. 16 technical and economical documentation files were given to the participants. This paper presents a synthesis of the content of these files: the oil offer and demand, the worldwide petroleum activity, the recent oil and gas discoveries, the stakes of the oil and gas perspectives in the North Sea, the US renewal in the Gulf of Mexico, the exploitation of heavy and extra-heavy crudes in Venezuela, the Asian oil rise, the limits of oil recovery efficiency, the oil markets, the international overview of the natural gas industry, the perspectives of the LNG market, the recent evolution and the perspectives of the European refining activities, the oil companies in 1997, the future of marketing, the trends of European automotive industry, and the scientific certitudes and uncertainties about urban pollution. (J.S.)

  19. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  20. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)

    2017-05-01

    Efficient parallel implementations of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis are challenging. This requires exploiting the data-parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of these applications employ irregular algorithms which exhibit data-dependent control-flow and irregular memory accesses. Furthermore, these applications are often iterative with dependency between steps, making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application where the distribution of work and memory access pattern at each time step is irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between different processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control-flow during a single step of the application independent of the other steps, with the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps. In this dissertation, we present novel machine learning based optimization techniques to address
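
    To make the idea concrete, consider a toy sketch of learning a step's irregular structure from earlier steps. This is only an illustration of the general pattern, not the dissertation's actual technique; the features, labels, and model choice are invented for the example, and scikit-learn and NumPy are assumed to be available.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)

      def observe_step(step):
          # Hypothetical per-work-item features at a given time step
          # (e.g. cell occupancy, last branch taken, bytes touched).
          feats = rng.normal(size=(1000, 3)) + 0.01 * step
          # Hypothetical "structure" label: which access pattern the item follows.
          labels = (feats[:, 0] + feats[:, 1] > 0).astype(int)
          return feats, labels

      # Learn from the structure observed in steps 0..8 ...
      observations = [observe_step(t) for t in range(9)]
      X = np.vstack([f for f, _ in observations])
      y = np.concatenate([l for _, l in observations])
      model = DecisionTreeClassifier(max_depth=6).fit(X, y)

      # ... then predict the structure of step 9 before it executes, so work
      # items can be regrouped to reduce branch and memory divergence.
      features_9, _ = observe_step(9)
      predicted_structure = model.predict(features_9)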

  1. STS-98 U.S. Lab Destiny rests in Atlantis' payload bay

    Science.gov (United States)

    2001-01-01

    KENNEDY SPACE CENTER, Fla. -- The U.S. Lab Destiny rests in the payload bay of Space Shuttle Atlantis before closure of the doors. A key element in the construction of the International Space Station, Destiny is 28 feet long and weighs 16 tons. Destiny will be attached to the Unity node on the ISS using the Shuttle's robot arm, seen here on the left side, with the help of an elbow camera attached to the arm (near the upper end of the lab in the photo). This research and command-and-control center is the most sophisticated and versatile space laboratory ever built. It will ultimately house a total of 23 experiment racks for crew support and scientific research. Destiny will fly on STS-98, the seventh construction flight to the ISS. Launch of STS-98 is scheduled for Jan. 19 at 2:11 a.m. EST.

  2. Bygningers varmebehov 98, Bv98

    DEFF Research Database (Denmark)

    Buildings' energy demand and in SBI-anvisning 189, Småhuse (Small Houses). The program can be used on a PC under Microsoft Windows 95, Windows 98, and Windows NT 4.0 with Service Pack 3 or later. The PC program is a revised and modernized version of the PC program Bygningers Varmebehov, BV95, which SBI published in 1995....

  3. From Data to Knowledge to Discoveries: Artificial Intelligence and Scientific Workflows

    Directory of Open Access Journals (Sweden)

    Yolanda Gil

    2009-01-01

    Scientific computing has entered a new era of scale and sharing with the arrival of cyberinfrastructure facilities for computational experimentation. A key emerging concept is scientific workflows, which provide a declarative representation of complex scientific applications that can be automatically managed and executed in distributed shared resources. In the coming decades, computational experimentation will push the boundaries of current cyberinfrastructure in terms of inter-disciplinary scope and integrative models of scientific phenomena under study. This paper argues that knowledge-rich workflow environments will provide necessary capabilities for that vision by assisting scientists to validate and vet complex analysis processes and by automating important aspects of scientific exploration and discovery.

  4. [Text mining, a method for computer-assisted analysis of scientific texts, demonstrated by an analysis of author networks].

    Science.gov (United States)

    Hahn, P; Dullweber, F; Unglaub, F; Spies, C K

    2014-06-01

    Searching for relevant publications is becoming more difficult with the increasing number of scientific articles. Text mining as a specific form of computer-based data analysis may be helpful in this context. Highlighting relations between authors and finding relevant publications concerning a specific subject using text analysis programs are illustrated graphically by two worked examples.
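
    As a minimal illustration of the author-network idea (not the authors' actual tool chain), co-authorship links can be accumulated into a weighted graph. The publication list below is made up, and the networkx package is assumed to be available.

      import itertools
      import networkx as nx

      # Toy publication list; in practice this would come from a literature search.
      publications = [
          {"title": "Paper A", "authors": ["Hahn P", "Dullweber F"]},
          {"title": "Paper B", "authors": ["Hahn P", "Unglaub F", "Spies CK"]},
          {"title": "Paper C", "authors": ["Unglaub F", "Spies CK"]},
      ]

      G = nx.Graph()
      for pub in publications:
          # Every pair of co-authors on a paper gets (or strengthens) an edge.
          for a, b in itertools.combinations(sorted(pub["authors"]), 2):
              weight = G.get_edge_data(a, b, {"weight": 0})["weight"]
              G.add_edge(a, b, weight=weight + 1)

      # Strongest collaborations first.
      for a, b, d in sorted(G.edges(data=True), key=lambda e: -e[2]["weight"]):
          print(f"{a} -- {b}: {d['weight']} joint paper(s)")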

  5. Monte Carlo strategies in scientific computing

    CERN Document Server

    Liu, Jun S

    2008-01-01

    This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for masters' or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...
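
    For readers new to the topic, the simplest instance of the method the book treats is plain Monte Carlo integration. The following self-contained sketch is illustrative only and is not taken from the book.

      import math
      import random

      # Estimate I = integral from 0 to 1 of exp(-x^2) dx by averaging f over uniform samples.
      def mc_integral(f, n=100_000, seed=42):
          rng = random.Random(seed)
          return sum(f(rng.random()) for _ in range(n)) / n

      estimate = mc_integral(lambda x: math.exp(-x * x))
      exact = 0.5 * math.sqrt(math.pi) * math.erf(1.0)   # about 0.746824
      print(f"MC estimate: {estimate:.4f}, exact: {exact:.4f}")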

  6. ENC 98

    International Nuclear Information System (INIS)

    1998-01-01

    This press dossier document reports on the main points presented by the Framatome group at the ENC'98 colloquium which took place in Nice (France) on September 25 1998. The summary comprises 4 main parts: the 1997-98 highlights (nuclear realizations in France, China and Turkey, nuclear services, nuclear fuels), the international activities in the nuclear domain (USA, China, Eastern Europe), the research and development activities (the French-German joint EPR project, the high temperature reactor (HTR) project), and the service and nuclear fuel activities (maintenance, improvements, control and expertise). (J.S.)

  7. The effectiveness of web-programming module based on scientific approach to train logical thinking ability for students in vocational high school

    Science.gov (United States)

    Nashiroh, Putri Khoirin; Kamdi, Waras; Elmunsyah, Hakkun

    2017-09-01

    Web programming is a basic subject in Computer and Informatics Engineering, a program of study in vocational high schools. It requires logical thinking ability in its learning activities. The purposes of this research were (1) to develop a web-programming module that implements a scientific approach and can improve the logical thinking ability of vocational high school students; and (2) to test the effectiveness of the web-programming module based on the scientific approach in training students' logical thinking ability. The result of this research was a web-programming module that applies the scientific approach to learning activities to improve the logical thinking ability of students in vocational high school. The effectiveness test of the web-programming module supports the conclusion that it was very effective in training logical thinking ability and improving learning results. This conclusion was supported by: (1) the average posttest result of the students, 79.91, exceeds the minimum criterion value; (2) the average percentage of students' logical thinking score is 82.98; and (3) the average percentage of students' responses to the web-programming module was 81.86%.

  8. Towards Monitoring-as-a-service for Scientific Computing Cloud applications using the ElasticSearch ecosystem

    CERN Document Server

    Bagnasco, S; Guarise, A; Lusso, S; Masera, M; Vallero, S

    2015-01-01

    The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated. This feature requires detailed monitoring and accounting of the resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to ensure flexibility to choose a different monit...
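
    A rough sketch of the kind of metric shipping such a setup relies on, assuming an Elasticsearch node reachable at localhost:9200. The index name and document fields are invented for illustration, and a production deployment would normally go through Logstash or a client library rather than raw HTTP posts.

      import datetime
      import json
      import urllib.request

      # Hypothetical per-VM usage sample, as it might be collected from the
      # cloud controller; field names are illustrative, not the site's schema.
      doc = {
          "@timestamp": datetime.datetime.utcnow().isoformat() + "Z",
          "tenant": "alice-tier2",
          "vm_id": "vm-0042",
          "cpu_used_cores": 3.7,
          "mem_used_gb": 7.2,
      }

      req = urllib.request.Request(
          "http://localhost:9200/vm-metrics/_doc",   # index/_doc REST endpoint
          data=json.dumps(doc).encode("utf-8"),
          headers={"Content-Type": "application/json"},
          method="POST",
      )
      with urllib.request.urlopen(req) as resp:
          print(resp.status)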

  9. The next scientific revolution.

    Science.gov (United States)

    Hey, Tony

    2010-11-01

    For decades, computer scientists have tried to teach computers to think like human experts. Until recently, most of those efforts have failed to come close to generating the creative insights and solutions that seem to come naturally to the best researchers, doctors, and engineers. But now, Tony Hey, a VP of Microsoft Research, says we're witnessing the dawn of a new generation of powerful computer tools that can "mash up" vast quantities of data from many sources, analyze them, and help produce revolutionary scientific discoveries. Hey and his colleagues call this new method of scientific exploration "machine learning." At Microsoft, a team has already used it to innovate a method of predicting with impressive accuracy whether a patient with congestive heart failure who is released from the hospital will be readmitted within 30 days. It was developed by directing a computer program to pore through hundreds of thousands of data points on 300,000 patients and "learn" the profiles of patients most likely to be rehospitalized. The economic impact of this prediction tool could be huge: If a hospital understands the likelihood that a patient will "bounce back," it can design programs to keep him stable and save thousands of dollars in health care costs. Similar efforts to uncover important correlations that could lead to scientific breakthroughs are under way in oceanography, conservation, and AIDS research. And in business, deep data exploration has the potential to unearth critical insights about customers, supply chains, advertising effectiveness, and more.
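
    For a concrete, if greatly simplified, picture of the kind of "learning from patient records" described here, a toy classifier can be trained on synthetic data. Nothing below comes from the actual Microsoft study; both the features and the outcome model are invented.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      n = 5000
      # Synthetic features: age, prior admissions, ejection fraction, length of stay.
      X = np.column_stack([
          rng.normal(70, 10, n),
          rng.poisson(1.5, n),
          rng.normal(40, 10, n),
          rng.normal(6, 2, n),
      ])
      # Synthetic label: readmitted within 30 days (risk rises with prior admissions).
      p = 1 / (1 + np.exp(-(0.6 * X[:, 1] - 2.0)))
      y = rng.random(n) < p

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      print("held-out accuracy:", model.score(X_te, y_te))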

  10. Scalability of Parallel Scientific Applications on the Cloud

    Directory of Open Access Journals (Sweden)

    Satish Narayana Srirama

    2011-01-01

    Cloud computing, with its promise of virtually infinite resources, seems well suited to solving resource-greedy scientific computing problems. To study the effects of moving parallel scientific applications onto the cloud, we deployed several benchmark applications like matrix-vector operations and the NAS parallel benchmarks, as well as DOUG (Domain decomposition On Unstructured Grids), on the cloud. DOUG is an open source software package for the parallel iterative solution of very large sparse systems of linear equations. The detailed analysis of DOUG on the cloud showed that parallel applications benefit considerably and scale reasonably on the cloud. We could also observe the limitations of the cloud and compare its performance with that of a cluster. However, for efficiently running scientific applications on cloud infrastructure, the applications must be reduced to frameworks that can successfully exploit the cloud resources, like the MapReduce framework. Several iterative and embarrassingly parallel algorithms are reduced to the MapReduce model and their performance is measured and analyzed. The analysis showed that Hadoop MapReduce has significant problems with iterative methods, while it suits embarrassingly parallel algorithms well. Scientific computing often uses iterative methods to solve large problems. Thus, for scientific computing on the cloud, this paper raises the necessity for better frameworks or optimizations for MapReduce.
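
    To make the embarrassingly parallel versus iterative distinction concrete, here is a small map/reduce-style sketch in plain Python, with the multiprocessing module standing in for Hadoop. An iterative solver would have to repeat such a round once per iteration, which is where the overhead reported above comes from.

      from functools import reduce
      from multiprocessing import Pool

      def map_chunk(chunk):
          # "Map": per-chunk partial result, here a partial dot product.
          xs, ys = chunk
          return sum(x * y for x, y in zip(xs, ys))

      def reduce_partials(a, b):
          # "Reduce": combine partial results.
          return a + b

      if __name__ == "__main__":
          x = list(range(1_000_000))
          y = [2.0] * len(x)
          chunks = [(x[i:i + 100_000], y[i:i + 100_000])
                    for i in range(0, len(x), 100_000)]
          with Pool(4) as pool:
              partials = pool.map(map_chunk, chunks)
          print(reduce(reduce_partials, partials))
          # An iterative method (e.g. conjugate gradient) would repeat this
          # map/reduce round once per iteration, paying the job-startup cost
          # each time - the weakness observed with Hadoop MapReduce above.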

  11. Reduced-order modeling (ROM) for simulation and optimization powerful algorithms as key enablers for scientific computing

    CERN Document Server

    Milde, Anja; Volkwein, Stefan

    2018-01-01

    This edited monograph collects research contributions and addresses the advancement of efficient numerical procedures in the area of model order reduction (MOR) for simulation, optimization and control. The topical scope includes, but is not limited to, new out-of-the-box algorithmic solutions for scientific computing, e.g. reduced basis methods for industrial problems and MOR approaches for electrochemical processes. The target audience comprises research experts and practitioners in the field of simulation, optimization and control, but the book may also be beneficial for graduate students.

  12. Business and scientific workflows a web service-oriented approach

    CERN Document Server

    Tan, Wei

    2013-01-01

    Focuses on how to use web service computing and service-based workflow technologies to develop timely, effective workflows for both business and scientific fields. Utilizing web computing and Service-Oriented Architecture (SOA), Business and Scientific Workflows: A Web Service-Oriented Approach focuses on how to design, analyze, and deploy web service-based workflows for both business and scientific applications in many areas of healthcare and biomedicine. It also discusses and presents the recent research and development results. This informative reference features app

  13. Collaborative e-Science Experiments and Scientific Workflows

    NARCIS (Netherlands)

    Belloum, A.; Inda, M.A.; Vasunin, D.; Korkhov, V.; Zhao, Z.; Rauwerda, H.; Breit, T.M.; Bubak, M.; Hertzberger, L.O.

    2011-01-01

    Recent advances in Internet and grid technologies have greatly enhanced scientific experiments' life cycle. In addition to compute- and data-intensive tasks, large-scale collaborations involving geographically distributed scientists and e-infrastructure are now possible. Scientific workflows, which

  14. Computation: A New Open Access Journal of Computational Chemistry, Computational Biology and Computational Engineering

    OpenAIRE

    Karlheinz Schwarz; Rainer Breitling; Christian Allen

    2013-01-01

    Computation (ISSN 2079-3197; http://www.mdpi.com/journal/computation) is an international scientific open access journal focusing on fundamental work in the field of computational science and engineering. Computational science has become essential in many research areas by contributing to solving complex problems in fundamental science all the way to engineering. The very broad range of application domains suggests structuring this journal into three sections, which are briefly characterized ...

  15. Performance analysis of cloud computing services for many-tasks scientific computing

    NARCIS (Netherlands)

    Iosup, A.; Ostermann, S.; Yigitbasi, M.N.; Prodan, R.; Fahringer, T.; Epema, D.H.J.

    2011-01-01

    Cloud computing is an emerging commercial infrastructure paradigm that promises to eliminate the need for maintaining expensive computing facilities by companies and institutes alike. Through the use of virtualization and resource time sharing, clouds serve with a single set of physical resources a

  16. 40 CFR 98.418 - Definitions.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Definitions. 98.418 Section 98.418 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Industrial Greenhouse Gases § 98.418 Definitions. All terms used in...

  17. COMPUTATIONAL THINKING

    Directory of Open Access Journals (Sweden)

    Evgeniy K. Khenner

    2016-01-01

    Abstract. The aim of the research is to draw the attention of the educational community to the phenomenon of computational thinking, which has been actively discussed in the foreign scientific and educational literature over the last decade, and to substantiate its importance, practical utility, and its claim to a place in Russian education. Methods. The research is based on an analysis of foreign studies of the phenomenon of computational thinking and the ways it is formed in the process of education, and on comparing the notion of «computational thinking» with related concepts used in the Russian scientific and pedagogical literature. Results. The concept of «computational thinking» is analyzed from the point of view of intuitive understanding and of scientific and applied aspects. It is shown how computational thinking has evolved along with the development of computer hardware and software. The practice-oriented interpretation of computational thinking which is dominant among educators is described, along with some ways of forming it. It is shown that computational thinking is a metasubject result of general education as well as one of its tools. In the author's view, the purposeful development of computational thinking should be one of the tasks of Russian education. Scientific novelty. The author gives a theoretical justification of the role of computational thinking schemes as metasubject results of learning. The dynamics of the development of this concept is described; this process is connected with the evolution of computer and information technologies and with the increasing number of tasks whose effective solution requires computational thinking. The author substantiates the claim that including «computational thinking» in the set of pedagogical concepts used in the national education system fills an existing gap. Practical significance. New metasubject result of education associated with

  18. Cloud-based opportunities in scientific computing: insights from processing Suomi National Polar-Orbiting Partnership (S-NPP) Direct Broadcast data

    Science.gov (United States)

    Evans, J. D.; Hao, W.; Chettri, S.

    2013-12-01

    The cloud is proving to be a uniquely promising platform for scientific computing. Our experience with processing satellite data using Amazon Web Services highlights several opportunities for enhanced performance, flexibility, and cost effectiveness in the cloud relative to traditional computing -- for example:
    - Direct readout from a polar-orbiting satellite such as the Suomi National Polar-Orbiting Partnership (S-NPP) requires bursts of processing a few times a day, separated by quiet periods when the satellite is out of receiving range. In the cloud, by starting and stopping virtual machines in minutes, we can marshal significant computing resources quickly when needed, but not pay for them when not needed. To take advantage of this capability, we are automating a data-driven approach to the management of cloud computing resources, in which new data availability triggers the creation of new virtual machines (of variable size and processing power) which last only until the processing workflow is complete.
    - 'Spot instances' are virtual machines that run as long as one's asking price is higher than the provider's variable spot price. Spot instances can greatly reduce the cost of computing -- for software systems that are engineered to withstand unpredictable interruptions in service (as occurs when a spot price exceeds the asking price). We are implementing an approach to workflow management that allows data processing workflows to resume with minimal delays after temporary spot price spikes. This will allow systems to take full advantage of variably-priced 'utility computing.'
    - Thanks to virtual machine images, we can easily launch multiple, identical machines differentiated only by 'user data' containing individualized instructions (e.g., to fetch particular datasets or to perform certain workflows or algorithms). This is particularly useful when (as is the case with S-NPP data) we need to launch many very similar machines to process an unpredictable number of
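
    A heavily simplified sketch of the data-driven resource management described above: the polling helper, AMI ID, instance type, and user-data script are placeholders, and the boto3 calls are only indicative of the general pattern (credentials and error handling are omitted).

      import time
      import boto3   # assumed available and configured with credentials

      ec2 = boto3.resource("ec2")

      def new_granules_available():
          # Placeholder: poll the ground-station queue / object store for new
          # S-NPP passes; returns a list of dataset identifiers.
          return []

      def launch_worker(granule_id):
          # Start a short-lived VM whose user data tells it which granule to
          # process; ImageId and InstanceType are illustrative values only.
          ec2.create_instances(
              ImageId="ami-0123456789abcdef0",
              InstanceType="c5.2xlarge",
              MinCount=1, MaxCount=1,
              UserData=f"#!/bin/bash\nprocess_granule {granule_id} && shutdown -h now\n",
          )

      while True:
          for granule in new_granules_available():
              launch_worker(granule)
          time.sleep(60)   # quiet periods between passes cost nothing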

  19. 45 CFR 98.90 - Monitoring.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Monitoring. 98.90 Section 98.90 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Monitoring, Non-compliance and Complaints § 98.90 Monitoring. (a) The Secretary will monitor programs funded under...

  20. 29 CFR 98.1000 - Respondent.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Respondent. 98.1000 Section 98.1000 Labor Office of the Secretary of Labor GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 98.1000 Respondent. Respondent means a person against whom an agency has initiated a debarment or suspension action. ...

  1. 34 CFR 98.6 - Reports.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Reports. 98.6 Section 98.6 Education Office of the Secretary, Department of Education STUDENT RIGHTS IN RESEARCH, EXPERIMENTAL PROGRAMS, AND TESTING § 98.6 Reports. The Secretary may require the recipient to submit reports containing information necessary to...

  2. II - Template Metaprogramming for Massively Parallel Scientific Computing - Vectorization with Expression Templates

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Large scale scientific computing raises questions on different levels ranging from the formulation of the problems to the choice of the best algorithms and their implementation for a specific platform. There are similarities in these different topics that can be exploited by modern-style C++ template metaprogramming techniques to produce readable, maintainable and generic code. Traditional low-level code tends to be fast but platform-dependent, and it obfuscates the meaning of the algorithm. On the other hand, the object-oriented approach is nice to read, but may come with an inherent performance penalty. These lectures aim to present the basics of the Expression Template (ET) idiom which allows us to keep the object-oriented approach without sacrificing performance. We will in particular show how to enhance ET to include SIMD vectorization. We will then introduce techniques for abstracting iteration, and introduce thread-level parallelism for use in heavy data-centric loads. We will show how to apply these methods i...
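
    The lectures themselves target C++ template metaprogramming. Purely as a language-neutral illustration of the deferred-evaluation idea behind expression templates, the tiny Python analogue below builds an expression object for a + b * c and evaluates it element-wise in a single fused pass instead of creating intermediate vectors.

      class Expr:
          # Operators build an expression tree instead of computing immediately
          # (the essence of the expression-template idiom).
          def __add__(self, other):
              return BinOp(self, other, lambda x, y: x + y)

          def __mul__(self, other):
              return BinOp(self, other, lambda x, y: x * y)

      class Vec(Expr):
          def __init__(self, data):
              self.data = list(data)

          def __len__(self):
              return len(self.data)

          def at(self, i):
              return self.data[i]

      class BinOp(Expr):
          def __init__(self, lhs, rhs, op):
              self.lhs, self.rhs, self.op = lhs, rhs, op

          def __len__(self):
              return len(self.lhs)

          def at(self, i):
              return self.op(self.lhs.at(i), self.rhs.at(i))

      def evaluate(expr):
          # One fused loop over the whole expression tree, no temporaries.
          return Vec(expr.at(i) for i in range(len(expr)))

      a, b, c = Vec([1, 2, 3]), Vec([4, 5, 6]), Vec([7, 8, 9])
      print(evaluate(a + b * c).data)   # [29, 42, 57]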

  3. Multithreaded transactions in scientific computing. The Growth06_v2 program

    Science.gov (United States)

    Daniluk, Andrzej

    2009-07-01

    efficient than the previous ones [3]. Summary of revisions:The design pattern (See Fig. 2 of Ref. [3]) has been modified according to the scheme shown on Fig. 1. A graphical user interface (GUI) for the program has been reconstructed. Fig. 2 presents a hybrid diagram of a GUI that shows how onscreen objects connect to use cases. The program has been compiled with English/USA regional and language options. Note: The figures mentioned above are contained in the program distribution file. Unusual features: The program is distributed in the form of source project GROWTH06_v2.dpr with associated files, and should be compiled using Borland Delphi compilers versions 6 or latter (including Borland Developer Studio 2006 and Code Gear compilers for Delphi). Additional comments: Two figures are included in the program distribution file. These are captioned Static classes model for Transaction design pattern. A model of a window that shows how onscreen objects connect to use cases. Running time: The typical running time is machine and user-parameters dependent. References: [1] A. Daniluk, Comput. Phys. Comm. 170 (2005) 265. [2] W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes in Pascal: The Art of Scientific Computing, first ed., Cambridge University Press, 1989. [3] M. Brzuszek, A. Daniluk, Comput. Phys. Comm. 175 (2006) 678.

  4. 9 CFR 98.30 - Definitions.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Definitions. 98.30 Section 98.30... EMBRYOS AND ANIMAL SEMEN Certain Animal Semen § 98.30 Definitions. Whenever in this subpart of the... (England, Scotland, Wales, the Isle of Man, and Northern Ireland). Cattle. Animals of the bovine species...

  5. 29 CFR 98.1005 - State.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true State. 98.1005 Section 98.1005 Labor Office of the Secretary of Labor GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 98.1005 State. (a) State means— (1) Any of the states of the United States; (2) The District of Columbia; (3) The...

  6. Scientific visualization uncertainty, multifield, biomedical, and scalable visualization

    CERN Document Server

    Chen, Min; Johnson, Christopher; Kaufman, Arie; Hagen, Hans

    2014-01-01

    Based on the seminar that took place in Dagstuhl, Germany in June 2011, this contributed volume studies the four important topics within the scientific visualization field: uncertainty visualization, multifield visualization, biomedical visualization and scalable visualization. • Uncertainty visualization deals with uncertain data from simulations or sampled data, uncertainty due to the mathematical processes operating on the data, and uncertainty in the visual representation, • Multifield visualization addresses the need to depict multiple data at individual locations and the combination of multiple datasets, • Biomedical is a vast field with select subtopics addressed from scanning methodologies to structural applications to biological applications, • Scalability in scientific visualization is critical as data grows and computational devices range from hand-held mobile devices to exascale computational platforms. Scientific Visualization will be useful to practitioners of scientific visualization, ...

  7. 49 CFR 98.10 - Appeal.

    Science.gov (United States)

    2010-10-01

    ... Administration of Enforcement Proceedings § 98.10 Appeal. (a) Within 30 working days after receipt of a decision issued under § 98.8 or § 98.9 of this part, either the Departmental counsel or the former employee may appeal the decision to the Secretary. (b) In making a decision on an appeal, the Secretary shall consider...

  8. 45 CFR 98.14 - Plan process.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Plan process. 98.14 Section 98.14 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND General Application Procedures § 98.14 Plan process. In the development of each Plan, as required pursuant to § 98.17...

  9. High throughput computing: a solution for scientific analysis

    Science.gov (United States)

    O'Donnell, M.

    2011-01-01

    Public land management agencies continually face resource management problems that are exacerbated by climate warming, land-use change, and other human activities. As the U.S. Geological Survey (USGS) Fort Collins Science Center (FORT) works with managers in U.S. Department of the Interior (DOI) agencies and other federal, state, and private entities, researchers are finding that the science needed to address these complex ecological questions across time and space produces substantial amounts of data. The additional data and the volume of computations needed to analyze it require expanded computing resources well beyond single- or even multiple-computer workstations. To meet this need for greater computational capacity, FORT investigated how to resolve the many computational shortfalls previously encountered when analyzing data for such projects. Our objectives included finding a solution that would:

  10. File-System Workload on a Scientific Multiprocessor

    Science.gov (United States)

    Kotz, David; Nieuwejaar, Nils

    1995-01-01

    Many scientific applications have intense computational and I/O requirements. Although multiprocessors have permitted astounding increases in computational performance, the formidable I/O needs of these applications cannot be met by current multiprocessors and their I/O subsystems. To prevent I/O subsystems from forever bottlenecking multiprocessors and limiting the range of feasible applications, new I/O subsystems must be designed. The successful design of computer systems (both hardware and software) depends on a thorough understanding of their intended use. A system designer optimizes the policies and mechanisms for the cases expected to be most common in the user's workload. In the case of multiprocessor file systems, however, designers have been forced to build file systems based only on speculation about how they would be used, extrapolating from file-system characterizations of general-purpose workloads on uniprocessor and distributed systems or scientific workloads on vector supercomputers (see sidebar on related work). To help these system designers, in June 1993 we began the Charisma Project, so named because the project sought to characterize I/O in scientific multiprocessor applications from a variety of production parallel computing platforms and sites. The Charisma project is unique in recording individual read and write requests in live, multiprogramming, parallel workloads (rather than from selected or nonparallel applications). In this article, we present the first results from the project: a characterization of the file-system workload on an iPSC/860 multiprocessor running production, parallel scientific applications at NASA's Ames Research Center.

  11. NetiNeti: discovery of scientific names from text using machine learning methods

    Directory of Open Access Journals (Sweden)

    Akella Lakshmi

    2012-08-01

    Background: A scientific name for an organism can be associated with almost all biological data. Name identification is an important step in many text mining tasks aiming to extract useful information from biological, biomedical and biodiversity text sources. A scientific name acts as an important metadata element to link biological information. Results: We present NetiNeti (Name Extraction from Textual Information-Name Extraction for Taxonomic Indexing), a machine learning based approach for recognition of scientific names including the discovery of new species names from text that will also handle misspellings, OCR errors and other variations in names. The system generates candidate names using rules for scientific names and applies probabilistic machine learning methods to classify names based on structural features of candidate names and features derived from their contexts. NetiNeti can also disambiguate scientific names from other names using the contextual information. We evaluated NetiNeti on legacy biodiversity texts and biomedical literature (MEDLINE). NetiNeti performs better (precision = 98.9% and recall = 70.5%) compared to a popular dictionary based approach (precision = 97.5% and recall = 54.3%) on a 600-page biodiversity book that was manually marked by an annotator. On a small set of PubMed Central’s full text articles annotated with scientific names, the precision and recall values are 98.5% and 96.2% respectively. NetiNeti found more than 190,000 unique binomial and trinomial names in more than 1,880,000 PubMed records when used on the full MEDLINE database. NetiNeti also successfully identifies almost all of the new species names mentioned within web pages. Conclusions: We present NetiNeti, a machine learning based approach for identification and discovery of scientific names. The system implementing the approach can be accessed at http://namefinding.ubio.org.
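
    A rough sketch of the candidate-generation stage described above (the probabilistic classification stage is omitted). The regular expression is a deliberate oversimplification of real scientific-name rules, not NetiNeti's actual grammar.

      import re

      # Crude stand-in for NetiNeti's candidate rules: a capitalized word
      # followed by a lowercase epithet of at least three letters.
      CANDIDATE = re.compile(r"\b[A-Z][a-z]+ [a-z]{3,}\b")

      text = "The shrew Sorex araneus and the bacterium Escherichia coli were recorded."

      for match in CANDIDATE.finditer(text):
          # In NetiNeti each candidate is then scored by a probabilistic
          # classifier on structural and contextual features; that stage
          # would reject false positives such as "The shrew" printed here.
          print(match.group(0))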

  12. Enhancing reproducibility in scientific computing: Metrics and registry for Singularity containers

    Science.gov (United States)

    Prybol, Cameron J.; Kurtzer, Gregory M.

    2017-01-01

    Here we present Singularity Hub, a framework to build and deploy Singularity containers for mobility of compute, and the singularity-python software with novel metrics for assessing reproducibility of such containers. Singularity containers make it possible for scientists and developers to package reproducible software, and Singularity Hub adds automation to this workflow by building, capturing metadata for, visualizing, and serving containers programmatically. Our novel metrics, based on custom filters of content hashes of container contents, allow for comparison of an entire container, including operating system, custom software, and metadata. First we will review Singularity Hub’s primary use cases and how the infrastructure has been designed to support modern, common workflows. Next, we conduct three analyses to demonstrate build consistency, reproducibility metric and performance and interpretability, and potential for discovery. This is the first effort to demonstrate a rigorous assessment of measurable similarity between containers and operating systems. We provide these capabilities within Singularity Hub, as well as the source software singularity-python that provides the underlying functionality. Singularity Hub is available at https://singularity-hub.org, and we are excited to provide it as an openly available platform for building, and deploying scientific containers. PMID:29186161
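
    As a toy illustration of similarity scoring from content hashes (not the actual singularity-python implementation), two unpacked container trees can be compared by hashing every file and taking a Jaccard score over the resulting hash sets. The directory paths below are placeholders.

      import hashlib
      import pathlib

      def content_hashes(root):
          # Hash every regular file under an unpacked container root.
          hashes = set()
          for path in pathlib.Path(root).rglob("*"):
              if path.is_file():
                  hashes.add(hashlib.sha256(path.read_bytes()).hexdigest())
          return hashes

      def similarity(root_a, root_b):
          a, b = content_hashes(root_a), content_hashes(root_b)
          return len(a & b) / len(a | b) if a or b else 1.0   # Jaccard score

      # Placeholder paths to two unpacked container filesystems.
      print(similarity("/tmp/container_a", "/tmp/container_b"))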

  13. Enhancing reproducibility in scientific computing: Metrics and registry for Singularity containers.

    Directory of Open Access Journals (Sweden)

    Vanessa V Sochat

    Here we present Singularity Hub, a framework to build and deploy Singularity containers for mobility of compute, and the singularity-python software with novel metrics for assessing reproducibility of such containers. Singularity containers make it possible for scientists and developers to package reproducible software, and Singularity Hub adds automation to this workflow by building, capturing metadata for, visualizing, and serving containers programmatically. Our novel metrics, based on custom filters of content hashes of container contents, allow for comparison of an entire container, including operating system, custom software, and metadata. First we will review Singularity Hub's primary use cases and how the infrastructure has been designed to support modern, common workflows. Next, we conduct three analyses to demonstrate build consistency, reproducibility metric and performance and interpretability, and potential for discovery. This is the first effort to demonstrate a rigorous assessment of measurable similarity between containers and operating systems. We provide these capabilities within Singularity Hub, as well as the source software singularity-python that provides the underlying functionality. Singularity Hub is available at https://singularity-hub.org, and we are excited to provide it as an openly available platform for building, and deploying scientific containers.

  14. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  15. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol

  16. Cyber warfare building the scientific foundation

    CERN Document Server

    Jajodia, Sushil; Subrahmanian, VS; Swarup, Vipin; Wang, Cliff

    2015-01-01

    This book features a wide spectrum of the latest computer science research relating to cyber warfare, including military and policy dimensions. It is the first book to explore the scientific foundation of cyber warfare and features research from the areas of artificial intelligence, game theory, programming languages, graph theory and more. The high-level approach and emphasis on scientific rigor provides insights on ways to improve cyber warfare defense worldwide. Cyber Warfare: Building the Scientific Foundation targets researchers and practitioners working in cyber security, especially gove

  17. The need for scientific software engineering in the pharmaceutical industry.

    Science.gov (United States)

    Luty, Brock; Rose, Peter W

    2017-03-01

    Scientific software engineering is a distinct discipline from both computational chemistry project support and research informatics. A scientific software engineer not only has a deep understanding of the science of drug discovery but also the desire, skills and time to apply good software engineering practices. A good team of scientific software engineers can create a software foundation that is maintainable, validated and robust. If done correctly, this foundation enables the organization to investigate new and novel computational ideas with a very high level of efficiency.

  18. The need for scientific software engineering in the pharmaceutical industry

    Science.gov (United States)

    Luty, Brock; Rose, Peter W.

    2017-03-01

    Scientific software engineering is a distinct discipline from both computational chemistry project support and research informatics. A scientific software engineer not only has a deep understanding of the science of drug discovery but also the desire, skills and time to apply good software engineering practices. A good team of scientific software engineers can create a software foundation that is maintainable, validated and robust. If done correctly, this foundation enables the organization to investigate new and novel computational ideas with a very high level of efficiency.

  19. Center for Technology for Advanced Scientific Component Software (TASCS) Consolidated Progress Report July 2006 - March 2009

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; McInnes, L C; Govindaraju, M; Bramley, R; Epperly, T; Kohl, J A; Nieplocha, J; Armstrong, R; Shasharina, S; Sussman, A L; Sottile, M; Damevski, K

    2009-04-14

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  20. Scientific progress report 1980

    International Nuclear Information System (INIS)

    1981-01-01

    The R + D-projects in this field and the infrastructural tasks mentioned are handled in seven working- and two project groups: Computer systems, Numerical and applied mathematics, Software development, Process calculation systems- hardware, Nuclear electronics, measuring- and automatic control technique, Research of component parts and irradiation tests, Central data processing, Processing of process data in the science of medicine, Co-operation in the BERNET-project in the 'Wissenschaftliches Rechenzentrum Berlin (WRB)' (scientific computer center in Berlin). (orig./WB)

  1. Numerical and symbolic scientific computing

    CERN Document Server

    Langer, Ulrich

    2011-01-01

    The book presents the state of the art and results and also includes articles pointing to future developments. Most of the articles center around the theme of linear partial differential equations. Major aspects are fast solvers in elastoplasticity, symbolic analysis for boundary problems, symbolic treatment of operators, computer algebra, and finite element methods, a symbolic approach to finite difference schemes, cylindrical algebraic decomposition and local Fourier analysis, and white noise analysis for stochastic partial differential equations. Further numerical-symbolic topics range from

  2. A performance analysis of EC2 cloud computing services for scientific computing

    NARCIS (Netherlands)

    Ostermann, S.; Iosup, A.; Yigitbasi, M.N.; Prodan, R.; Fahringer, T.; Epema, D.H.J.; Avresky, D.; Diaz, M.; Bode, A.; Bruno, C.; Dekel, E.

    2010-01-01

    Cloud Computing is emerging today as a commercial infrastructure that eliminates the need for maintaining expensive computing hardware. Through the use of virtualization, clouds promise to address with the same shared set of physical resources a large user base with different needs. Thus, clouds

  3. Manual on JSSL (JAERI scientific subroutine library)

    International Nuclear Information System (INIS)

    Fujimura, Toichiro; Nishida, Takahiko; Asai, Kiyoshi

    1977-05-01

    A manual on the revised JAERI scientific subroutine library is presented. The library is a collection of subroutines developed or modified at JAERI which complements the library installed on the FACOM 230-75 computer. It is subject to further extension in the future, since the present one is still insufficient for scientific calculations. (auth.)

  4. Annual report 1997-98

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-09-01

    Research and development and other activities of the various constituent units of the Department of Atomic Energy (DAE) and also of the institutions financially aided by DAE during the year 1997-98 are reported. The various constituent units of DAE consist of nuclear research centres, nuclear power, fuel reprocessing and heavy water plants, nuclear fuel fabrication facilities, electronic and instrumentation production organisations, atomic mineral processing units and other nuclear installations. The activities of DAE cover the whole gamut of nuclear fuel cycle, research and development in nuclear science and reactor technology, applications of radiation and radioisotopes, radiation protection, environmental aspects, research and development in front line areas such as robotics, lasers, parallel processing etc.. These activities are described under the headings: introduction, the year at a glance, nuclear power and associated programmes, research and development programmes-I and II, public sector undertakings, service organisations. Other activities covered include international relations, IX plan formulation, management services, public awareness, computational facilities, library and information services etc.. The report also gives details about the organisational and administrative aspects of the DAE figs., ills.

  5. Annual report 1997-98

    International Nuclear Information System (INIS)

    1998-01-01

    Research and development and other activities of the various constituent units of the Department of Atomic Energy (DAE) and also of the institutions financially aided by DAE during the year 1997-98 are reported. The various constituent units of DAE consist of nuclear research centres, nuclear power, fuel reprocessing and heavy water plants, nuclear fuel fabrication facilities, electronic and instrumentation production organisations, atomic mineral processing units and other nuclear installations. The activities of DAE cover the whole gamut of nuclear fuel cycle, research and development in nuclear science and reactor technology, applications of radiation and radioisotopes, radiation protection, environmental aspects, research and development in front line areas such as robotics, lasers, parallel processing etc.. These activities are described under the headings: introduction, the year at a glance, nuclear power and associated programmes, research and development programmes-I and II, public sector undertakings, service organisations. Other activities covered include international relations, IX plan formulation, management services, public awareness, computational facilities, library and information services etc.. The report also gives details about the organisational and administrative aspects of the DAE

  6. 45 CFR 98.33 - Consumer education.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Consumer education. 98.33 Section 98.33 Public... Program Operations (Child Care Services)-Parental Rights and Responsibilities § 98.33 Consumer education... public consumer education information that will promote informed child care choices including, at a...

  7. 45 CFR 98.32 - Parental complaints.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Parental complaints. 98.32 Section 98.32 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Program Operations (Child Care Services)-Parental Rights and Responsibilities § 98.32 Parental complaints...

  8. 22 CFR 9.8 - Classification challenges.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Classification challenges. 9.8 Section 9.8 Foreign Relations DEPARTMENT OF STATE GENERAL SECURITY INFORMATION REGULATIONS § 9.8 Classification... classification status is improper are expected and encouraged to challenge the classification status of the...

  9. The scaling issue: scientific opportunities

    Science.gov (United States)

    Orbach, Raymond L.

    2009-07-01

    A brief history of the Leadership Computing Facility (LCF) initiative is presented, along with the importance of SciDAC to the initiative. The initiative led to the initiation of the Innovative and Novel Computational Impact on Theory and Experiment program (INCITE), open to all researchers in the US and abroad, and based solely on scientific merit through peer review, awarding sizeable allocations (typically millions of processor-hours per project). The development of the nation's LCFs has enabled available INCITE processor-hours to double roughly every eight months since its inception in 2004. The 'top ten' LCF accomplishments in 2009 illustrate the breadth of the scientific program, while the 75 million processor hours allocated to American business since 2006 highlight INCITE contributions to US competitiveness. The extrapolation of INCITE processor hours into the future brings new possibilities for many 'classic' scaling problems. Complex systems and atomic displacements to cracks are but two examples. However, even with increasing computational speeds, the development of theory, numerical representations, algorithms, and efficient implementation are required for substantial success, exhibiting the crucial role that SciDAC will play.

  10. The scaling issue: scientific opportunities

    International Nuclear Information System (INIS)

    Orbach, Raymond L

    2009-01-01

    A brief history of the Leadership Computing Facility (LCF) initiative is presented, along with the importance of SciDAC to the initiative. The initiative led to the initiation of the Innovative and Novel Computational Impact on Theory and Experiment program (INCITE), open to all researchers in the US and abroad, and based solely on scientific merit through peer review, awarding sizeable allocations (typically millions of processor-hours per project). The development of the nation's LCFs has enabled available INCITE processor-hours to double roughly every eight months since its inception in 2004. The 'top ten' LCF accomplishments in 2009 illustrate the breadth of the scientific program, while the 75 million processor hours allocated to American business since 2006 highlight INCITE contributions to US competitiveness. The extrapolation of INCITE processor hours into the future brings new possibilities for many 'classic' scaling problems. Complex systems and atomic displacements to cracks are but two examples. However, even with increasing computational speeds, the development of theory, numerical representations, algorithms, and efficient implementation are required for substantial success, exhibiting the crucial role that SciDAC will play.

  11. 45 CFR 98.31 - Parental access.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Parental access. 98.31 Section 98.31 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Program Operations (Child Care Services)-Parental Rights and Responsibilities § 98.31 Parental access. The...

  12. Analysis of Scientific Attitude, Computer Anxiety, Educational Internet Use, Problematic Internet Use, and Academic Achievement of Middle School Students According to Demographic Variables

    Science.gov (United States)

    Bekmezci, Mehmet; Celik, Ismail; Sahin, Ismail; Kiray, Ahmet; Akturk, Ahmet Oguz

    2015-01-01

    In this research, students' scientific attitude, computer anxiety, educational use of the Internet, academic achievement, and problematic use of the Internet are analyzed based on different variables (gender, parents' educational level and daily access to the Internet). The research group involves 361 students from two middle schools which are…

  13. Scientific Library Offers New Training Options | Poster

    Science.gov (United States)

    The Scientific Library is expanding its current training opportunities by offering webinars, allowing employees to take advantage of trainings from the comfort of their own offices. Due to the nature of their work, some employees find it inconvenient to attend in-person training classes; others simply prefer to use their own computers. The Scientific Library has been

  14. 29 CFR 98.920 - Civil judgment.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Civil judgment. 98.920 Section 98.920 Labor Office of the Secretary of Labor GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 98.920 Civil judgment. Civil judgment means the disposition of a civil action by any court of competent jurisdiction...

  15. Rapid development of scalable scientific software using a process oriented approach

    DEFF Research Database (Denmark)

    Friborg, Rune Møllegaard; Vinter, Brian

    2011-01-01

    Scientific applications are often not written with multiprocessing, cluster computing or grid computing in mind. This paper suggests using Python and PyCSP to structure scientific software through Communicating Sequential Processes. Three scientific applications are used to demonstrate the features...... of PyCSP and how networks of processes may easily be mapped into a visual representation for better understanding of the process workflow. We show that for many sequential solutions, the difficulty in implementing a parallel application is removed. The use of standard multi-threading mechanisms...
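
    The CSP structuring that the paper attributes to PyCSP can be pictured with a plain standard-library sketch: independent processes that exchange data only over a channel. The sketch below uses Python's multiprocessing module rather than PyCSP's own API, so the channel mechanics are a stand-in for the idea, not the library.

```python
# Illustrative sketch only: the CSP idea of independent processes that
# communicate over channels, written with the standard multiprocessing
# module rather than PyCSP's own API (which is not reproduced here).
from multiprocessing import Process, Queue

def producer(chan: Queue, n: int) -> None:
    """Send n work items down the channel, then a sentinel."""
    for i in range(n):
        chan.put(i * i)
    chan.put(None)  # sentinel marks end of stream

def consumer(chan: Queue) -> None:
    """Receive items until the sentinel arrives and sum them."""
    total = 0
    while True:
        item = chan.get()
        if item is None:
            break
        total += item
    print("sum of squares:", total)

if __name__ == "__main__":
    chan = Queue()
    procs = [Process(target=producer, args=(chan, 10)),
             Process(target=consumer, args=(chan,))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```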

  16. 45 CFR 98.30 - Parental choice.

    Science.gov (United States)

    2010-10-01

    ... Program Operations (Child Care Services)-Parental Rights and Responsibilities § 98.30 Parental choice. (a... category of care; or (2) Having the effect of limiting parental access to or choice from among such... 45 Public Welfare 1 2010-10-01 2010-10-01 false Parental choice. 98.30 Section 98.30 Public...

  17. 27 CFR 6.98 - Advertising service.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Advertising service. 6.98 Section 6.98 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS "TIED-HOUSE" Exceptions § 6.98 Advertising service. The listing of the names...

  18. A Constructive Induction Approach to Computer Immunology

    Science.gov (United States)

    1999-03-01

    [LVM98] Lamont, Gary B., David A. Van Veldhuizen, and Robert E. Marmelstein, A Distributed Architecture for a Self-Adaptive Computer Virus...Artificial Intelligence, Herndon, VA, 1995. [MVL98] Marmelstein, Robert E., David A. Van Veldhuizen, and Gary B. Lamont. Modeling & Analysis

  19. The ANS mathematics and computation software standards

    Energy Technology Data Exchange (ETDEWEB)

    Smetana, A. O. [Savannah River National Laboratory, Washington Savannah River Company, Aiken, SC 29808 (United States)

    2006-07-01

    The Mathematics and Computations Div. of the American Nuclear Society sponsors the ANS-10 Standards Subcommittee. This subcommittee, which is part of the ANS Standards Committee, currently maintains three ANSI/ANS software standards. These standards are: Portability of Scientific and Engineering Software, ANS-10.2; Guidelines for the Verification and Validation of Scientific and Engineering Computer Programs for the Nuclear Industry, ANS-10.4; and Accommodating User Needs in Scientific and Engineering Computer Software Development, ANS-10.5. A fourth Standard, Documentation of Computer Software, ANS-10.3, is available as a historical Standard. (authors)

  20. The ANS mathematics and computation software standards

    International Nuclear Information System (INIS)

    Smetana, A. O.

    2006-01-01

    The Mathematics and Computations Div. of the American Nuclear Society sponsors the ANS-10 Standards Subcommittee. This subcommittee, which is part of the ANS Standards Committee, currently maintains three ANSI/ANS software standards. These standards are: Portability of Scientific and Engineering Software, ANS-10.2; Guidelines for the Verification and Validation of Scientific and Engineering Computer Programs for the Nuclear Industry, ANS-10.4; and Accommodating User Needs in Scientific and Engineering Computer Software Development, ANS-10.5. A fourth Standard, Documentation of Computer Software, ANS-10.3, is available as a historical Standard. (authors)

  1. 29 CFR 1915.98 - First aid.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false First aid. 1915.98 Section 1915.98 Labor Regulations...) OCCUPATIONAL SAFETY AND HEALTH STANDARDS FOR SHIPYARD EMPLOYMENT General Working Conditions § 1915.98 First aid...) Unless a first aid room and a qualified attendant are close at hand and prepared to render first aid to...

  2. A survey of computational physics introductory computational science

    CERN Document Server

    Landau, Rubin H; Bordeianu, Cristian C

    2008-01-01

    Computational physics is a rapidly growing subfield of computational science, in large part because computers can solve previously intractable problems or simulate natural processes that do not have analytic solutions. The next step beyond Landau's First Course in Scientific Computing and a follow-up to Landau and Páez's Computational Physics, this text presents a broad survey of key topics in computational physics for advanced undergraduates and beginning graduate students, including new discussions of visualization tools, wavelet analysis, molecular dynamics, and computational fluid dynamics

  3. CAD/CAM and scientific data management at Dassault

    Science.gov (United States)

    Bohn, P.

    1984-01-01

    The history of CAD/CAM and scientific data management at Dassault are presented. Emphasis is put on the targets of the now commercially available software CATIA. The links with scientific computations such as aerodynamics and structural analysis are presented. Comments are made on the principles followed within the company. The consequences of the approximative nature of scientific data are examined. Consequence of the new history function is mainly its protection against copy or alteration. Future plans at Dassault for scientific data appear to be in opposite directions compared to some general tendencies.

  4. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  5. 46 CFR 98.30-7 - Smoking.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Smoking. 98.30-7 Section 98.30-7 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CARGO AND MISCELLANEOUS VESSELS SPECIAL CONSTRUCTION, ARRANGEMENT, AND OTHER PROVISIONS FOR CERTAIN DANGEROUS CARGOES IN BULK Portable Tanks § 98.30-7 Smoking. No person may smoke within 50 feet of a...

  6. Computability and unsolvability

    CERN Document Server

    Davis, Martin

    1985-01-01

    ""A clearly written, well-presented survey of an intriguing subject."" - Scientific American. Classic text considers general theory of computability, computable functions, operations on computable functions, Turing machines self-applied, unsolvable decision problems, applications of general theory, mathematical logic, Kleene hierarchy, computable functionals, classification of unsolvable decision problems and more.

  7. Artificial intelligence support for scientific model-building

    Science.gov (United States)

    Keller, Richard M.

    1992-01-01

    Scientific model-building can be a time-intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientific development team to understand. We believe that artificial intelligence techniques can facilitate both the model-building and model-sharing process. In this paper, we overview our effort to build a scientific modeling software tool that aids the scientist in developing and using models. This tool includes an interactive intelligent graphical interface, a high-level domain specific modeling language, a library of physics equations and experimental datasets, and a suite of data display facilities.

  8. 49 CFR 98.2 - Definitions.

    Science.gov (United States)

    2010-10-01

    ... Administration. (5) The National Highway Traffic Safety Administration. (6) The Urban Mass Transportation... 49 Transportation 1 2010-10-01 2010-10-01 false Definitions. 98.2 Section 98.2 Transportation Office of the Secretary of Transportation ENFORCEMENT OF RESTRICTIONS ON POST-EMPLOYMENT ACTIVITIES...

  9. An Imagination Effect in Learning from Scientific Text

    Science.gov (United States)

    Leopold, Claudia; Mayer, Richard E.

    2015-01-01

    Asking students to imagine the spatial arrangement of the elements in a scientific text constitutes a learning strategy intended to foster deep processing of the instructional material. Two experiments investigated the effects of mental imagery prompts on learning from scientific text. Students read a computer-based text on the human respiratory…

  10. Cloud Computing in Science and Engineering and the “SciShop.ru” Computer Simulation Center

    Directory of Open Access Journals (Sweden)

    E. V. Vorozhtsov

    2011-12-01

    Full Text Available Various aspects of cloud computing applications for scientific research, applied design, and remote education are described in this paper. An analysis of the different aspects is performed based on the experience from the “SciShop.ru” Computer Simulation Center. This analysis shows that cloud computing technology has wide prospects in scientific research applications, applied developments and also remote education of specialists, postgraduates, and students.

  11. 49 CFR 98.1 - Purpose.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false Purpose. 98.1 Section 98.1 Transportation Office of the Secretary of Transportation ENFORCEMENT OF RESTRICTIONS ON POST-EMPLOYMENT ACTIVITIES... administrative enforcement procedures that the Department of Transportation will follow when there is an...

  12. Optimal Design of Fixed-Point and Floating-Point Arithmetic Units for Scientific Applications

    OpenAIRE

    Pongyupinpanich, Surapong

    2012-01-01

    The challenge in designing a floating-point arithmetic co-processor/processor for scientific and engineering applications is to improve the performance, efficiency, and computational accuracy of the arithmetic unit. The arithmetic unit should efficiently support several mathematical functions corresponding to scientific and engineering computation demands. Moreover, the computations should be performed as fast as possible with a high degree of accuracy. Thus, this thesis proposes algorithm, d...

  13. Quantum Computing

    OpenAIRE

    Scarani, Valerio

    1998-01-01

    The aim of this thesis was to explain what quantum computing is. The information for the thesis was gathered from books, scientific publications, and news articles. The analysis of the information revealed that quantum computing can be broken down to three areas: theories behind quantum computing explaining the structure of a quantum computer, known quantum algorithms, and the actual physical realizations of a quantum computer. The thesis reveals that moving from classical memor...

  14. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Anisenkov, A; Belov, S; Kaplin, V; Korol, A; Skovpen, K; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2012-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of computing farms involved in its research field, currently we've got several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM and MG), and a Grid Computing Facility of BINP. A dedicated optical network with the initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  15. The science of computing shaping a discipline

    CERN Document Server

    Tedre, Matti

    2014-01-01

    The identity of computing has been fiercely debated throughout its short history. Why is it still so hard to define computing as an academic discipline? Is computing a scientific, mathematical, or engineering discipline? By describing the mathematical, engineering, and scientific traditions of computing, The Science of Computing: Shaping a Discipline presents a rich picture of computing from the viewpoints of the field's champions. The book helps readers understand the debates about computing as a discipline. It explains the context of computing's central debates and portrays a broad perspecti

  16. Fourth Thematic CERN School of Computing

    CERN Multimedia

    Alberto Pace, CSC Director

    2016-01-01

    The Fourth Thematic School of Computing (tCSC2016) takes place this year in Split, Croatia, from 22 to 28 May 2016.   The theme is "Efficient and Parallel Processing of Scientific Data", looking at: The challenge of scientific data processing: commonalities, analogies and the main differences between different sciences. Size of scientific software projects. Parallelism and asynchronism: computation and I/O. The School is open to postgraduate students and research workers with a few years' experience in elementary particle physics, computing, engineering or related fields.  All applicants are welcome, including former and future participants in the main CSC summer school. Registration will close on 15 February and participation is limited to 24 students. To register, please go here. About: The Thematic Schools are part of the annual series of CERN Schools of Computing, to promote advanced learning and knowledge exchange on the subject of scientific compu...

  17. [Earth Science Technology Office's Computational Technologies Project

    Science.gov (United States)

    Fischer, James (Technical Monitor); Merkey, Phillip

    2005-01-01

    This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies Project, to engage the Beowulf Cluster Computing Community as well as the High Performance Computing Research Community so that we can predict the applicability of said technologies to the scientific community represented by the CT project and formulate long term strategies to provide the computational resources necessary to attain the anticipated scientific objectives of the CT project. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations, the algorithmic requirements and capabilities of high-performance computers to satisfy this anticipated need.

  18. Applications of artificial intelligence to scientific research

    Science.gov (United States)

    Prince, Mary Ellen

    1986-01-01

    Artificial intelligence (AI) is a growing field which is just beginning to make an impact on disciplines other than computer science. While a number of military and commercial applications were undertaken in recent years, few attempts were made to apply AI techniques to basic scientific research. There is no inherent reason for the discrepancy. The characteristics of the problem, rather than its domain, determine whether or not it is suitable for an AI approach. Expert systems, intelligent tutoring systems, and learning programs are examples of theoretical topics which can be applied to certain areas of scientific research. Further research and experimentation should eventually make it possible for computers to act as intelligent assistants to scientists.

  19. A primer on scientific programming with Python

    CERN Document Server

    Langtangen, Hans Petter

    2014-01-01

    The book serves as a first introduction to computer programming of scientific applications, using the high-level Python language. The exposition is example and problem-oriented, where the applications are taken from mathematics, numerical calculus, statistics, physics, biology and finance. The book teaches "Matlab-style" and procedural programming as well as object-oriented programming. High school mathematics is a required background and it is advantageous to study classical and numerical one-variable calculus in parallel with reading this book. Besides learning how to program computers, the reader will also learn how to solve mathematical problems, arising in various branches of science and engineering, with the aid of numerical methods and programming. By blending programming, mathematics and scientific applications, the book lays a solid foundation for practicing computational science. From the reviews: Langtangen … does an excellent job of introducing programming as a set of skills in problem solving. ...
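
    As a flavour of the kind of problem the book blends with programming, here is a minimal numerical-calculus example: approximating a definite integral with the composite trapezoidal rule. The function and interval are arbitrary choices for illustration, not an excerpt from the book.

```python
# A minimal example in the spirit of the book: solving a calculus problem
# (approximating a definite integral) with a simple numerical method.
from math import sin, pi

def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

print(trapezoid(sin, 0.0, pi, 1000))  # exact value is 2
```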

  20. A primer on scientific programming with Python

    CERN Document Server

    Langtangen, Hans Petter

    2016-01-01

    The book serves as a first introduction to computer programming of scientific applications, using the high-level Python language. The exposition is example and problem-oriented, where the applications are taken from mathematics, numerical calculus, statistics, physics, biology and finance. The book teaches "Matlab-style" and procedural programming as well as object-oriented programming. High school mathematics is a required background and it is advantageous to study classical and numerical one-variable calculus in parallel with reading this book. Besides learning how to program computers, the reader will also learn how to solve mathematical problems, arising in various branches of science and engineering, with the aid of numerical methods and programming. By blending programming, mathematics and scientific applications, the book lays a solid foundation for practicing computational science. From the reviews: Langtangen … does an excellent job of introducing programming as a set of skills in problem solving. ...

  1. Computational error and complexity in science and engineering computational error and complexity

    CERN Document Server

    Lakshmikantham, Vangipuram; Chui, Charles K; Chui, Charles K

    2005-01-01

    The book "Computational Error and Complexity in Science and Engineering” pervades all the science and engineering disciplines where computation occurs. Scientific and engineering computation happens to be the interface between the mathematical model/problem and the real world application. One needs to obtain good quality numerical values for any real-world implementation. Just mathematical quantities symbols are of no use to engineers/technologists. Computational complexity of the numerical method to solve the mathematical model, also computed along with the solution, on the other hand, will tell us how much computation/computational effort has been spent to achieve that quality of result. Anyone who wants the specified physical problem to be solved has every right to know the quality of the solution as well as the resources spent for the solution. The computed error as well as the complexity provide the scientific convincing answer to these questions. Specifically some of the disciplines in which the book w...

  2. Computational Science in Armenia (Invited Talk)

    Science.gov (United States)

    Marandjian, H.; Shoukourian, Yu.

    This survey is devoted to the development of informatics and computer science in Armenia. The results in theoretical computer science (algebraic models, solutions to systems of general-form recursive equations, the methods of coding theory, pattern recognition and image processing) constitute the theoretical basis for developing problem-solving-oriented environments. Examples include a synthesizer of optimized distributed recursive programs, software tools for cluster-oriented implementations of two-dimensional cellular automata, and a grid-aware web interface with advanced service trading for linear algebra calculations. Among scientific problems that require high-performance computing resources, completed projects include physics (parallel computing of complex quantum systems), astrophysics (Armenian virtual laboratory), biology (molecular dynamics study of the human red blood cell membrane), and meteorology (implementing and evaluating the Weather Research and Forecast Model for the territory of Armenia). The overview also notes that the Institute for Informatics and Automation Problems of the National Academy of Sciences of Armenia has established a scientific and educational infrastructure uniting the computing clusters of the country's scientific and educational institutions and providing the scientific community with access to local and international computational resources, which is strong support for computational science in Armenia.

  3. Measuring scientific reasoning through behavioral analysis in a computer-based problem solving exercise

    Science.gov (United States)

    Mead, C.; Horodyskyj, L.; Buxner, S.; Semken, S. C.; Anbar, A. D.

    2016-12-01

    Developing scientific reasoning skills is a common learning objective for general-education science courses. However, effective assessments for such skills typically involve open-ended questions or tasks, which must be hand-scored and may not be usable online. Using computer-based learning environments, reasoning can be assessed automatically by analyzing student actions within the learning environment. We describe such an assessment under development and present pilot results. In our content-neutral instrument, students solve a problem by collecting and interpreting data in a logical, systematic manner. We then infer reasoning skill automatically based on student actions. Specifically, students investigate why Earth has seasons, a scientifically simple but commonly misunderstood topic. Students are given three possible explanations and asked to select a set of locations on a world map from which to collect temperature data. They then explain how the data support or refute each explanation. The best approaches will use locations in both the Northern and Southern hemispheres to argue that the contrasting seasonality of the hemispheres supports only the correct explanation. We administered a pilot version to students at the beginning of an online, introductory science course (n = 223) as an optional extra credit exercise. We were able to categorize students' data collection decisions as more and less logically sound. Students who choose the most logical measurement locations earned higher course grades, but not significantly higher. This result is encouraging, but not definitive. In the future, we will clarify our results in two ways. First, we plan to incorporate more open-ended interactions into the assessment to improve the resolving power of this tool. Second, to avoid relying on course grades, we will independently measure reasoning skill with one of the existing hand-scored assessments (e.g., Critical Thinking Assessment Test) to cross-validate our new
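
    A rough sense of how such automatic inference might work can be given with a toy scoring rule based on the abstract's description: a set of measurement locations is judged more logically sound when it samples both hemispheres. The function, threshold, and labels below are illustrative assumptions, not the authors' actual instrument.

```python
# Hypothetical scoring sketch based on the abstract's description: a set of
# chosen measurement locations is judged more logically sound when it samples
# both hemispheres, so the contrasting seasonality can discriminate between
# explanations. The rule and labels are illustrative assumptions only.
def soundness(latitudes):
    """Classify a set of chosen latitudes (degrees, + north / - south)."""
    has_north = any(lat > 0 for lat in latitudes)
    has_south = any(lat < 0 for lat in latitudes)
    if has_north and has_south:
        return "more logically sound"   # contrasting seasonality observable
    return "less logically sound"       # only one hemisphere sampled

print(soundness([45.0, -33.9, 60.1]))   # both hemispheres
print(soundness([40.7, 51.5, 35.7]))    # northern hemisphere only
```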

  4. III - Template Metaprogramming for massively parallel scientific computing - Templates for Iteration; Thread-level Parallelism

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Large scale scientific computing raises questions on different levels ranging from the formulation of the problems to the choice of the best algorithms and their implementation for a specific platform. There are similarities in these different topics that can be exploited by modern-style C++ template metaprogramming techniques to produce readable, maintainable and generic code. Traditional low-level code tends to be fast but platform-dependent, and it obfuscates the meaning of the algorithm. On the other hand, the object-oriented approach is nice to read, but may come with an inherent performance penalty. These lectures aim to present the basics of the Expression Template (ET) idiom which allows us to keep the object-oriented approach without sacrificing performance. We will in particular show how to enhance ET to include SIMD vectorization. We will then introduce techniques for abstracting iteration, and introduce thread-level parallelism for use in heavy data-centric loads. We will show how to apply these methods i...
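
    The Expression Template idiom itself is specific to C++; the toy Python sketch below illustrates only the underlying idea of deferred evaluation: an addition builds a lightweight expression object, and the elementwise loop runs once, when the result is materialised. The class and method names are invented for illustration.

```python
# Toy analogue (in Python) of the deferred-evaluation idea behind C++
# expression templates: "a + b" builds a lightweight expression object, and
# the elementwise loop runs only once, when the result is materialised.
class Expr:
    def __add__(self, other):
        return Add(self, other)
    def evaluate(self):
        return [self[i] for i in range(len(self))]

class Vec(Expr):
    def __init__(self, data):
        self.data = list(data)
    def __len__(self):
        return len(self.data)
    def __getitem__(self, i):
        return self.data[i]

class Add(Expr):
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs
    def __len__(self):
        return len(self.lhs)
    def __getitem__(self, i):
        return self.lhs[i] + self.rhs[i]   # fused: no temporary vectors

a, b, c = Vec([1, 2, 3]), Vec([10, 20, 30]), Vec([100, 200, 300])
print((a + b + c).evaluate())   # [111, 222, 333], one pass, no temporaries
```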

  5. 40 CFR 98.413 - Calculating GHG emissions.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Calculating GHG emissions. 98.413 Section 98.413 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Industrial Greenhouse Gases § 98.413 Calculating...

  6. 40 CFR 98.416 - Data reporting requirements.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Data reporting requirements. 98.416 Section 98.416 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Industrial Greenhouse Gases § 98.416 Data...

  7. XML Based Scientific Data Management Facility

    Science.gov (United States)

    Mehrotra, P.; Zubair, M.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The World Wide Web Consortium has developed an Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of XML, has designed markup languages for scientific data. In this paper, we propose an XML-based scientific data management facility, XDMF. The project is motivated by the fact that even though a lot of scientific data is being generated, it is not being shared because of a lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself, the transformation functions, and also for applying the required transformations. We have built a prototype system of the proposed data management facility that can work on different platforms. We have implemented the system using Java and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.
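
    The transformation step described here can be pictured with a small, hedged sketch using Python's lxml bindings rather than the Java/Xalan stack the prototype actually uses; the tiny document and stylesheet below are invented for illustration.

```python
# Minimal sketch of the transformation step only, using Python's lxml
# bindings rather than the Java/Xalan stack described in the abstract.
# The tiny document and stylesheet below are invented for illustration.
from lxml import etree

data = etree.XML(b"""
<dataset>
  <sample id="1"><temp units="K">300.1</temp></sample>
  <sample id="2"><temp units="K">301.7</temp></sample>
</dataset>
""")

stylesheet = etree.XML(b"""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <xsl:for-each select="dataset/sample">
      <xsl:value-of select="@id"/>,<xsl:value-of select="temp"/>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(stylesheet)   # compile the stylesheet
print(str(transform(data)))          # CSV-like text output
```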

  8. The role of dedicated data computing centers in the age of cloud computing

    Science.gov (United States)

    Caramarcu, Costin; Hollowell, Christopher; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2017-10-01

    Brookhaven National Laboratory (BNL) anticipates significant growth in scientific programs with large computing and data storage needs in the near future and has recently reorganized support for scientific computing to meet these needs. A key component is the enhanced role of the RHIC-ATLAS Computing Facility (RACF) in support of high-throughput and high-performance computing (HTC and HPC) at BNL. This presentation discusses the evolving role of the RACF at BNL, in light of its growing portfolio of responsibilities and its increasing integration with cloud (academic and for-profit) computing activities. We also discuss BNL’s plan to build a new computing center to support the new responsibilities of the RACF and present a summary of the cost benefit analysis done, including the types of computing activities that benefit most from a local data center vs. cloud computing. This analysis is partly based on an updated cost comparison of Amazon EC2 computing services and the RACF, which was originally conducted in 2012.

  9. Facilitating Preschoolers' Scientific Knowledge Construction via Computer Games Regarding Light and Shadow: The Effect of the Prediction-Observation-Explanation (POE) Strategy

    Science.gov (United States)

    Hsu, Chung-Yuan; Tsai, Chin-Chung; Liang, Jyh-Chong

    2011-10-01

    Educational researchers have suggested that computer games have a profound influence on students' motivation, knowledge construction, and learning performance, but little empirical research has targeted preschoolers. Thus, the purpose of the present study was to investigate the effects of implementing a computer game that integrates the prediction-observation-explanation (POE) strategy (White and Gunstone in Probing understanding. Routledge, New York, 1992) on facilitating preschoolers' acquisition of scientific concepts regarding light and shadow. The children's alternative conceptions were explored as well. Fifty participants were randomly assigned into either an experimental group that played a computer game integrating the POE model or a control group that played a non-POE computer game. By assessing the students' conceptual understanding through interviews, this study revealed that the students in the experimental group significantly outperformed their counterparts in the concepts regarding "shadow formation in daylight" and "shadow orientation." However, children in both groups, after playing the games, still expressed some alternative conceptions such as "Shadows always appear behind a person" and "Shadows should be on the same side as the sun."

  10. THE CENTER FOR DATA INTENSIVE COMPUTING

    Energy Technology Data Exchange (ETDEWEB)

    GLIMM,J.

    2002-11-01

    CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

  11. THE CENTER FOR DATA INTENSIVE COMPUTING

    Energy Technology Data Exchange (ETDEWEB)

    GLIMM,J.

    2001-11-01

    CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

  12. THE CENTER FOR DATA INTENSIVE COMPUTING

    International Nuclear Information System (INIS)

    GLIMM, J.

    2001-01-01

    CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook

  13. THE CENTER FOR DATA INTENSIVE COMPUTING

    Energy Technology Data Exchange (ETDEWEB)

    GLIMM,J.

    2003-11-01

    CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

  14. A method to mine workflows from provenance for assisting scientific workflow composition

    NARCIS (Netherlands)

    Zeng, R.; He, X.; Aalst, van der W.M.P.

    2011-01-01

    Scientific workflows have recently emerged as a new paradigm for representing and managing complex distributed scientific computations and are used to accelerate the pace of scientific discovery. In many disciplines, individual workflows are large and complicated due to the large quantities of data

  15. Introduction to scientific computing and data analysis

    CERN Document Server

    Holmes, Mark H

    2016-01-01

    This textbook provides an introduction to numerical computing and its applications in science and engineering. The topics covered include those usually found in an introductory course, as well as those that arise in data analysis. This includes optimization and regression-based methods using a singular value decomposition. The emphasis is on problem solving, and there are numerous exercises throughout the text concerning applications in engineering and science. The essential role of the mathematical theory underlying the methods is also considered, both for understanding how a method works and how the error in the computation depends on the method being used. The MATLAB codes used to produce most of the figures and data tables in the text are available on the author’s website and SpringerLink.
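
    As a sketch of the SVD-based regression the text covers, the snippet below fits a straight line to synthetic data by forming the pseudoinverse from the singular value decomposition; the data and model are illustrative assumptions, not taken from the book.

```python
# A small sketch of regression via the singular value decomposition, the
# kind of method the text covers; the synthetic data are for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * x + 0.05 * rng.standard_normal(x.size)  # noisy line

A = np.column_stack([np.ones_like(x), x])   # design matrix for y = c0 + c1*x
U, s, Vt = np.linalg.svd(A, full_matrices=False)
coeffs = Vt.T @ ((U.T @ y) / s)             # pseudoinverse solution

print(coeffs)   # close to [2.0, 3.0]
```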

  16. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  17. 40 CFR 98.406 - Data reporting requirements.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Data reporting requirements. 98.406 Section 98.406 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids § 98.406 Data...

  18. Top scientific research center deploys Zambeel Aztera (TM) network storage system in high performance environment

    CERN Multimedia

    2002-01-01

    " The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory has implemented a Zambeel Aztera storage system and software to accelerate the productivity of scientists running high performance scientific simulations and computations" (1 page).

  19. The role of data in scientific progress

    International Nuclear Information System (INIS)

    Glaeser, P.S.

    1985-01-01

    This book contains 109 papers presented at the 9th Int. CODATA Conference and illustrates two main themes (1) new computer-based methods for storing, manipulating and disseminating scientific and technical data, and (2) the use of such computerized data files to give new scientific insights. The broad range of scientific disciplines covered includes geology and geochemistry, oceanography and ecology, molecular biology and biotechnology, chemical engineering, materials properties, energy systems, data base design and management - theory and practice, and finally, a last section on data retrieval and library systems. 12 items are included in Atomindex separately. (Auth.)

  20. 40 CFR 98.38 - Definitions.

    Science.gov (United States)

    2010-07-01

    ... Still Gas 0.143 66.72 Kerosene 0.135 75.20 Liquefied petroleum gases (LPG) 0.092 62.98 Propane 0.091 61... GREENHOUSE GAS REPORTING General Stationary Fuel Combustion Sources § 98.38 Definitions. All terms used in... 93.65 Mixed (Industrial sector) 22.35 93.91 Mixed (Electric Power sector) 19.73 94.38 Natural gas mm...

  1. Applications of computer algebra

    CERN Document Server

    1985-01-01

    Today, certain computer software systems exist which surpass the computational ability of researchers when their mathematical techniques are applied to many areas of science and engineering. These computer systems can perform a large portion of the calculations seen in mathematical analysis. Despite this massive power, thousands of people use these systems as a routine resource for everyday calculations. These software programs are commonly called "Computer Algebra" systems. They have names such as MACSYMA, MAPLE, muMATH, REDUCE and SMP. They are receiving credit as a computational aid with increasing regularity in articles in the scientific and engineering literature. When most people think about computers and scientific research these days, they imagine a machine grinding away, processing numbers arithmetically. It is not generally realized that, for a number of years, computers have been performing non-numeric computations. This means, for example, that one inputs an equation and obtains a closed for...

  2. A Scientific Calculator for Exact Real Number Computation Based on LRT, GMP and FC++

    Directory of Open Access Journals (Sweden)

    J. A. Hernández

    2012-03-01

    Full Text Available Language for Redundant Test (LRT) is a programming language for exact real number computation. Its lazy evaluation mechanism (also called call-by-need) and its infinite list requirement make the language appropriate to be implemented in a functional programming language such as Haskell. However, a direct translation of the operational semantics of LRT into Haskell, as well as of the algorithms that implement basic operations (addition, subtraction, multiplication, division) and trigonometric functions (sine, cosine, tangent, etc.), makes the resulting scientific calculator time-consuming and inefficient. In this paper, we present an alternative implementation of the scientific calculator using FC++ and GMP. FC++ is a functional C++ library, while GMP is a GNU multiple-precision library. We show that a direct translation of LRT in FC++ results in a faster scientific calculator than the one presented in Haskell.
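
    The lazy, infinite-stream style of computation that LRT relies on can be illustrated with a toy Python generator that produces decimal digits only on demand; this is merely the call-by-need flavour of the approach, not the LRT/FC++/GMP implementation itself.

```python
# Toy illustration of the lazy infinite-stream idea behind exact real
# arithmetic: digits are produced only on demand, so a number can be
# examined to arbitrary precision. This is not the LRT/FC++/GMP
# implementation, just the call-by-need flavour in Python generators.
from itertools import islice

def decimal_digits(p, q):
    """Lazily yield the decimal digits of the fraction p/q (0 <= p < q)."""
    while True:
        p *= 10
        yield p // q
        p %= q

one_seventh = decimal_digits(1, 7)        # 1/7 = 0.142857142857...
print(list(islice(one_seventh, 12)))      # first 12 digits, computed on demand
```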

  3. 40 CFR 98.412 - GHGs to report.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false GHGs to report. 98.412 Section 98.412 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Industrial Greenhouse Gases § 98.412 GHGs to report. You must report...

  4. 40 CFR 98.411 - Reporting threshold.

    Science.gov (United States)

    2010-07-01

    ...) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Industrial Greenhouse Gases § 98.411 Reporting threshold. Any supplier of industrial greenhouse gases who meets the requirements of § 98.2(a)(4) must report GHG...

  5. 46 CFR 98.25-65 - Filling density.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Filling density. 98.25-65 Section 98.25-65 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CARGO AND MISCELLANEOUS VESSELS SPECIAL... § 98.25-65 Filling density. (a) The filling density, or the percent ratio of the liquefied gas that may...

  6. Science Prospects And Benefits with Exascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Kothe, Douglas B [ORNL

    2007-12-01

    Scientific computation has come into its own as a mature technology in all fields of science. Never before have we been able to accurately anticipate, analyze, and plan for complex events that have not yet occurred, from the operation of a reactor running at 100 million degrees centigrade to the changing climate a century down the road. Combined with the more traditional approaches of theory and experiment, scientific computation provides a profound tool for insight and solution as we look at complex systems containing billions of components. Nevertheless, it cannot yet do all we would like. Much of scientific computation's potential remains untapped in areas such as materials science, Earth science, energy assurance, fundamental science, biology and medicine, engineering design, and national security because the scientific challenges are far too enormous and complex for the computational resources at hand. Many of these challenges are of immediate global importance. These challenges can be overcome by a revolution in computing that promises real advancement at a greatly accelerated pace. Planned petascale systems (capable of a petaflop, or 10^15 floating point operations per second) in the next 3 years and exascale systems (capable of an exaflop, or 10^18 floating point operations per second) in the next decade will provide an unprecedented opportunity to attack these global challenges through modeling and simulation. Exascale computers, with a processing capability similar to that of the human brain, will enable the unraveling of longstanding scientific mysteries and present new opportunities. Table ES.1 summarizes these scientific opportunities, their key application areas, and the goals and associated benefits that would result from solutions afforded by exascale computing.

  7. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Full Text Available Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
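
    A minimal single-machine sketch of the block-decomposition idea behind the hybrid matrix multiply: row blocks of one factor are handed to separate worker processes and the partial products are stacked. It only conveys the flavour of the approach and is not the authors' distributed-memory code.

```python
# Small sketch of splitting a matrix multiply into row blocks that are
# computed by separate worker processes; it illustrates the flavour of the
# hybrid message-passing/threading model on one machine and is not the
# authors' distributed-memory implementation.
from multiprocessing import Pool
import numpy as np

def multiply_block(args):
    """Compute one horizontal block of the product A @ B."""
    a_block, b = args
    return a_block @ b

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = rng.standard_normal((400, 300))
    b = rng.standard_normal((300, 200))

    blocks = np.array_split(a, 4, axis=0)        # one row block per worker
    with Pool(processes=4) as pool:
        parts = pool.map(multiply_block, [(blk, b) for blk in blocks])

    c = np.vstack(parts)
    print(np.allclose(c, a @ b))                 # True
```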

  8. Are there ways to improve the citations of a scientific paper?

    Science.gov (United States)

    Bartoli, E

    2018-04-01

    The expansion of scientific publications makes it more difficult to mine the relevant information necessary for a productive appraisal of authors' work and for scientific interaction and exchange. Over-citation of papers is unproductive, while under-citation of innovative research generates delay and inefficiency. The extraction of information from the literature is mostly based on keywords (KWs). I computed the chance of citation of a paper as 1/(number of papers retrieved by the KWs published in that paper), and compared it with the chances obtained by selecting different KWs. Using generic KWs the chances are smaller, at times practically nil, as compared to those calculated with more specific KWs and their association. Selecting scientifically analyzed KWs, specific to the message conveyed by a paper, and computing beforehand the theoretical chances of citation might increase the citations obtained and the retrieval of important scientific and innovative information, and optimize scientific exchange and qualitative productivity. Copyright © 2018 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
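
    The rule of thumb in the abstract, chance of citation ≈ 1/(number of papers retrieved by the chosen keywords), is easy to compute directly; the retrieval counts below are made-up values for a generic versus a more specific keyword set.

```python
# The abstract's rule of thumb, chance ≈ 1 / (number of papers retrieved by
# the paper's keywords), applied to made-up retrieval counts for a generic
# versus a more specific keyword set.
def citation_chance(papers_retrieved: int) -> float:
    return 1.0 / papers_retrieved

keyword_sets = {
    "generic keywords": 250_000,      # hypothetical retrieval count
    "specific keywords": 180,         # hypothetical retrieval count
}
for name, hits in keyword_sets.items():
    print(f"{name}: {citation_chance(hits):.2e}")
```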

  9. A look back: 57 years of scientific computing

    DEFF Research Database (Denmark)

    Wasniewski, Jerzy

    2012-01-01

    This document outlines my 57-year career in computational mathematics, a career that took me from Poland to Canada and finally to Denmark. It of course spans a period in which both hardware and software developed enormously. Along the way I was fortunate to be faced with fascinating technical challenges and privileged to be able to share them with inspiring colleagues. From the beginning, my work to a great extent was concerned, directly or indirectly, with computational linear algebra, an interest I maintain even today....

  10. Earth observation scientific workflows in a distributed computing environment

    CSIR Research Space (South Africa)

    Van Zyl, TL

    2011-09-01

    Full Text Available capabilities has focused on the web services approach as exemplified by the OGC's Web Processing Service and by GRID computing. The approach to leveraging distributed computing resources described in this paper instead uses remote objects via RPy...

  11. Language interoperability for high-performance parallel scientific components

    International Nuclear Information System (INIS)

    Elliot, N; Kohn, S; Smolinski, B

    1999-01-01

    With the increasing complexity and interdisciplinary nature of scientific applications, code reuse is becoming increasingly important in scientific computing. One method for facilitating code reuse is the use of component technologies, which have been used widely in industry. However, components have only recently worked their way into scientific computing. Language interoperability is an important underlying technology for these component architectures. In this paper, we present an approach to language interoperability for a high-performance, parallel component architecture being developed by the Common Component Architecture (CCA) group. Our approach is based on Interface Definition Language (IDL) techniques. We have developed a Scientific Interface Definition Language (SIDL), as well as bindings to C and Fortran. We have also developed a SIDL compiler and run-time library support for reference counting, reflection, object management, and exception handling (Babel). Results from using Babel to call a standard numerical solver library (written in C) from C and Fortran show that the cost of using Babel is minimal, whereas the savings in development time and the benefits of object-oriented development support for C and Fortran far outweigh the costs.

  12. Conditional deletion of CD98hc inhibits osteoclast development

    Directory of Open Access Journals (Sweden)

    Hideki Tsumura

    2016-03-01

    Full Text Available The CD98 heavy chain (CD98hc) regulates virus-induced cell fusion and monocyte fusion, and is involved in amino acid transportation. Here, we examined the role that CD98hc plays in the formation of osteoclasts using CD98hc flox/flox LysM-cre peritoneal macrophages (CD98hc-defect macrophages). Peritoneal macrophages were co-cultured with osteoblasts in the presence of 1,25(OH)2 vitamin D3, and thereafter stained with tartrate-resistant acid phosphatase staining solution. The multinucleated osteoclast formation was severely impaired in the peritoneal macrophages isolated from the CD98hc-defect mice compared with those from wild-type mice. CD98hc mediates integrin signaling and amino acid transport through the CD98 light chain (CD98lc). In integrin signaling, suppression of the M-CSF-RANKL-induced phosphorylation of ERK, Akt, JNK and p130Cas was observed at the triggering phase in the CD98hc-defect peritoneal macrophages. Moreover, we showed that the general control non-derepressible (GCN) pathway, which is activated by amino acid starvation, was induced in the CD98hc-defect peritoneal macrophages stimulated with RANKL. These results indicate that CD98 plays two important roles in osteoclast formation through integrin signaling and amino acid transport.

  13. 46 CFR 98.30-11 - Cargo pumps.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Cargo pumps. 98.30-11 Section 98.30-11 Shipping COAST..., ARRANGEMENT, AND OTHER PROVISIONS FOR CERTAIN DANGEROUS CARGOES IN BULK Portable Tanks § 98.30-11 Cargo pumps. No person may operate a cargo pump to transfer a product to or from a portable tank unless the pump...

  14. Computer Assisted Instructional Design for Computer-Based Instruction. Final Report. Working Papers.

    Science.gov (United States)

    Russell, Daniel M.; Pirolli, Peter

    Recent advances in artificial intelligence and the cognitive sciences have made it possible to develop successful intelligent computer-aided instructional systems for technical and scientific training. In addition, computer-aided design (CAD) environments that support the rapid development of such computer-based instruction have also been recently…

  15. Computer graphics and research projects

    International Nuclear Information System (INIS)

    Ingtrakul, P.

    1994-01-01

    This report was prepared as an account of scientific visualization tools and application tools for scientists and engineers. It provides a set of tools to create pictures and to interact with them in natural ways. It applies many techniques of computer graphics and computer animation through a number of full-color presentations such as computer animated commercials, 3D computer graphics, dynamic and environmental simulations, scientific modeling and visualization, physically based modelling, and behavioral, skeletal, dynamics, and particle animation. It also looks in depth at the original hardware and the limitations of existing PC graphics adapters, including system performance, especially with graphics-intensive application programs and user interfaces.

  16. Manual for JSSL (JAERI scientific subroutine library)

    International Nuclear Information System (INIS)

    Inoue, Shuji; Fujimura, Toichiro; Tsutsui, Tsuneo; Nishida, Takahiko

    1982-09-01

    A manual on the revised version of the JAERI scientific subroutine library, a collection of scientific subroutines developed or modified at JAERI. They are classified into fifteen fields (Special Functions, Linear Problems, Eigenvalue and Eigenvector Problems, Nonlinear Problems, Mathematical Programming, Extreme Value Problems, Transformations, Functional Approximation Methods, Numerical Differential and Integral Methods, Numerical Differential and Integral Equations, Statistical Functions, Physical Problems, I/O Routines, Plotter Routines, Computer System Functions, and Others). The main expansion in this version is in the fields of mathematical programming and statistical functions. The present library may be said to be a comprehensive compilation of scientific subroutines covering almost all the important fields. (author)

  17. An Experimental Investigation of Computer Program Development Approaches and Computer Programming Metrics.

    Science.gov (United States)

    1979-12-01

    Air Force Office of Scientific Research/NM, Bolling AFB, Washington, DC 20332.

  18. 36 CFR 9.8 - Use of water.

    Science.gov (United States)

    2010-07-01

    ... MANAGEMENT Mining and Mining Claims § 9.8 Use of water. (a) No operator may use for operations any water from... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Use of water. 9.8 Section 9.8... Regional Director. The Regional Director shall not approve a plan of operations requiring the use of water...

  19. A method to build and analyze scientific workflows from provenance through process mining

    NARCIS (Netherlands)

    Zeng, R.; He, X.; Li, Jiafei; Liu, Zheng; Aalst, van der W.M.P.

    2011-01-01

    Scientific workflows have recently emerged as a new paradigm for representing and managing complex distributed scientific computations and are used to accelerate the pace of scientific discovery. In many disciplines, individual workflows are large due to the large quantities of data used. As

  20. Scientific Assistant Virtual Laboratory (SAVL)

    Science.gov (United States)

    Alaghband, Gita; Fardi, Hamid; Gnabasik, David

    2007-03-01

    The Scientific Assistant Virtual Laboratory (SAVL) is a scientific discovery environment, an interactive simulated virtual laboratory, for learning physics and mathematics. The purpose of this computer-assisted intervention is to improve middle and high school student interest, insight and scores in physics and mathematics. SAVL develops scientific and mathematical imagination in a visual, symbolic, and experimental simulation environment. It directly addresses the issues of scientific and technological competency by providing critical thinking training through integrated modules. This on-going research provides a virtual laboratory environment in which the student directs the building of the experiment rather than observing a packaged simulation. SAVL: * Engages the persistent interest of young minds in physics and math by visually linking simulation objects and events with mathematical relations. * Teaches integrated concepts by the hands-on exploration and focused visualization of classic physics experiments within software. * Systematically and uniformly assesses and scores students by their ability to answer their own questions within the context of a Master Question Network. We will demonstrate how the Master Question Network uses polymorphic interfaces and C# lambda expressions to manage simulation objects.
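    The record above states that the Master Question Network relies on polymorphic interfaces and C# lambda expressions to manage simulation objects. As a rough illustration only (the actual system is written in C#, and the class and field names below are invented), a question network can be modelled as a set of questions, each owning a predicate over the simulated experiment's state:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Question:
    """One node of a toy question network: a prompt plus a predicate
    (a lambda) that checks the answer against the simulation state."""
    prompt: str
    check: Callable[[Dict[str, float]], bool]

questions = [
    Question("Does the cart accelerate when the net force is nonzero?",
             lambda s: s["net_force"] != 0 and s["acceleration"] != 0),
    Question("Is the pendulum period insensitive to the bob mass?",
             lambda s: abs(s["period_light"] - s["period_heavy"]) < 1e-2),
]

# Invented simulation state; a real experiment would populate this.
state = {"net_force": 2.0, "acceleration": 1.0,
         "period_light": 1.41, "period_heavy": 1.41}

score = sum(q.check(state) for q in questions)
print(f"questions answered correctly: {score}/{len(questions)}")
```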

  1. Advanced Excel for scientific data analysis

    CERN Document Server

    De Levie, Robert

    2004-01-01

    Excel is by far the most widely distributed data analysis software but few users are aware of its full powers. Advanced Excel For Scientific Data Analysis takes off from where most books dealing with scientific applications of Excel end. It focuses on three areas-least squares, Fourier transformation, and digital simulation-and illustrates these with extensive examples, often taken from the literature. It also includes and describes a number of sample macros and functions to facilitate common data analysis tasks. These macros and functions are provided in uncompiled, computer-readable, easily

  2. FY 1998 Scientific and Technical Reports, Articles, Papers, and Presentations

    Science.gov (United States)

    Waits, J. E. Turner (Compiler)

    1999-01-01

    This document presents formal NASA technical reports, papers published in technical journals, and presentations by MSFC (Marshall Space Flight Center) personnel in FY98. It also includes papers of MSFC contractors. After being announced in STAR, all of the NASA series reports may be obtained from the National Technical Information Service. The information in this report may be of value to the scientific and engineering community in determining what information has been published and what is available.

  3. 14 CFR 61.98 - Flight proficiency.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Flight proficiency. 61.98 Section 61.98 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRMEN...) Navigation; (viii) Slow flight and stalls; (ix) Emergency operations; and (x) Postflight procedures. (2) For...

  4. Proceedings of the workshop on severe accident research held in Japan (SARJ-98)

    Energy Technology Data Exchange (ETDEWEB)

    Sugimoto, Jun [ed.

    1999-07-01

    The Workshop on Severe Accident Research held in Japan (SARJ-98) took place at Hotel Lungwood on November 4-6, 1998, and was attended by 181 participants from 13 countries. The 63 papers, which cover wide areas of severe accident research in both experiments and analyses, such as in-vessel melt retention, fuel-coolant interaction, fission product behavior, structural integrity, containment behavior, computer simulations, and accident management, are indexed individually. (J.P.N.)

  5. Modern Trends Of Computation, Simulation, and Communication, And Their Impacts On The Progress Of Scientific And Engineering Research, Development, And Education

    International Nuclear Information System (INIS)

    Bunjamin, Muhammad

    2001-01-01

    A short report on the modern trends of computation, simulation, and communication in the 1990s is presented, along with their impacts on the progress of scientific and engineering research, development, and education. A full description of this giant issue is certainly a mission impossible for the author. Nevertheless, it is the author's hope that it will at least give an overall view of what is going on in this very dynamic field in the advanced countries. After thinking globally through reading this report, we should then decide on what and how to act locally to respond to these global trends. The main sources of information reported here were the computational science and engineering journals and books issued during the 1990s, as listed in the references below.

  6. The Indigenous World, 1997-98 = El Mundo Indigena, 1997-98.

    Science.gov (United States)

    Erni, Christian, Ed.

    This annual publication (published separately in English and Spanish) examines political, legal, social, environmental, and educational issues concerning indigenous peoples around the world during 1997-98. Part I highlights news events and ongoing situations in specific countries in nine world regions. In the Arctic and North America, these…

  7. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT,J.

    2004-11-01

    The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security.

  8. 42 CFR 460.98 - Service delivery.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Service delivery. 460.98 Section 460.98 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED..., national origin, religion, sex, age, sexual orientation, mental or physical disability, or source of...

  9. Crosscut report: Exascale Requirements Reviews, March 9–10, 2017 – Tysons Corner, Virginia. An Office of Science review sponsored by: Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, Nuclear Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Hack, James [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Riley, Katherine [Argonne National Lab., IL (United States). Argonne Leadership Computing Facility (ALCF); Antypas, Katie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Coffey, Richard [Argonne National Lab. (ANL), Argonne, IL (United States). Argonne Leadership Computing Facility (ALCF); Dart, Eli [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). ESnet; Straatsma, Tjerk [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Wells, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Bard, Deborah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Dosanjh, Sudip [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Monga, Inder [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). ESnet; Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States). Argonne Leadership Computing Facility; Rotman, Lauren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). ESnet

    2018-01-22

    The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today’s world requires investments in not only the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR’s mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for science programs in SC for those who need to use high performance computing and data systems effectively. Numerous significant modifications to today’s tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices,1 and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain

  10. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D L

    2009-05-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex

  11. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    International Nuclear Information System (INIS)

    Brown, D.L.

    2009-01-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems

  12. Reversible computing fundamentals, quantum computing, and applications

    CERN Document Server

    De Vos, Alexis

    2010-01-01

    Written by one of the few top internationally recognized experts in the field, this book concentrates on those topics that will remain fundamental, such as low power computing, reversible programming languages, and applications in thermodynamics. It describes reversible computing from various points of view: Boolean algebra, group theory, logic circuits, low-power electronics, communication, software, quantum computing. It is this multidisciplinary approach that makes it unique.Backed by numerous examples, this is useful for all levels of the scientific and academic community, from undergr

  13. Predicting future discoveries from current scientific literature.

    Science.gov (United States)

    Petrič, Ingrid; Cestnik, Bojan

    2014-01-01

    Knowledge discovery in biomedicine is a time-consuming process starting from the basic research, through preclinical testing, towards possible clinical applications. Crossing of conceptual boundaries is often needed for groundbreaking biomedical research that generates highly inventive discoveries. We demonstrate the ability of a creative literature mining method to advance valuable new discoveries based on rare ideas from existing literature. When emerging ideas from scientific literature are put together as fragments of knowledge in a systematic way, they may lead to original, sometimes surprising, research findings. If enough scientific evidence is already published for the association of such findings, they can be considered as scientific hypotheses. In this chapter, we describe a method for the computer-aided generation of such hypotheses based on the existing scientific literature. Our literature-based discovery of NF-kappaB with its possible connections to autism was recently approved by scientific community, which confirms the ability of our literature mining methodology to accelerate future discoveries based on rare ideas from existing literature.
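    The chapter's hypothesis-generation approach links concepts from disjoint parts of the literature through shared intermediate terms. The sketch below is a generic, minimal illustration of that idea in Python, not the authors' actual method or data; the toy corpus, the lack of stop-word filtering, and the simple co-occurrence score are assumptions made for brevity:

```python
from collections import defaultdict

# Toy corpus standing in for titles/abstracts; real use would draw on
# a bibliographic database and filter out stop words.
documents = [
    "nf-kappab signalling modulates synaptic plasticity and inflammation",
    "inflammation markers are elevated in some children with autism",
    "autism studies report altered synaptic plasticity",
    "nf-kappab is activated by oxidative stress in neurons",
]

def terms(doc):
    return set(doc.split())

def linking_terms(corpus, a_term, c_term):
    """Rank candidate intermediate (B) terms that co-occur with the start
    term A in some documents and with the target term C in others."""
    docs_with_a = [terms(d) for d in corpus if a_term in terms(d)]
    docs_with_c = [terms(d) for d in corpus if c_term in terms(d)]
    counts = defaultdict(int)
    for ta in docs_with_a:
        for tc in docs_with_c:
            for b in (ta & tc) - {a_term, c_term}:
                counts[b] += 1
    return sorted(counts.items(), key=lambda kv: -kv[1])

# Candidate concepts that might link nf-kappab to autism in this toy corpus.
print(linking_terms(documents, "nf-kappab", "autism"))
```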

  14. Are Cloud Environments Ready for Scientific Applications?

    Science.gov (United States)

    Mehrotra, P.; Shackleford, K.

    2011-12-01

    Cloud computing environments are becoming widely available both in the commercial and government sectors. They provide flexibility to rapidly provision resources in order to meet dynamic and changing computational needs without the customers incurring capital expenses and/or requiring technical expertise. Clouds also provide reliable access to resources even though the end-user may not have in-house expertise for acquiring or operating such resources. Consolidation and pooling in a cloud environment allow organizations to achieve economies of scale in provisioning or procuring computing resources and services. Because of these and other benefits, many businesses and organizations are migrating their business applications (e.g., websites, social media, and business processes) to cloud environments-evidenced by the commercial success of offerings such as the Amazon EC2. In this paper, we focus on the feasibility of utilizing cloud environments for scientific workloads and workflows particularly of interest to NASA scientists and engineers. There is a wide spectrum of such technical computations. These applications range from small workstation-level computations to mid-range computing requiring small clusters to high-performance simulations requiring supercomputing systems with high bandwidth/low latency interconnects. Data-centric applications manage and manipulate large data sets such as satellite observational data and/or data previously produced by high-fidelity modeling and simulation computations. Most of the applications are run in batch mode with static resource requirements. However, there do exist situations that have dynamic demands, particularly ones with public-facing interfaces providing information to the general public, collaborators and partners, as well as to internal NASA users. In the last few months we have been studying the suitability of cloud environments for NASA's technical and scientific workloads. We have ported several applications to

  15. Biomedical ontologies: toward scientific debate.

    Science.gov (United States)

    Maojo, V; Crespo, J; García-Remesal, M; de la Iglesia, D; Perez-Rey, D; Kulikowski, C

    2011-01-01

    Biomedical ontologies have been very successful in structuring knowledge for many different applications, receiving widespread praise for their utility and potential. Yet, the role of computational ontologies in scientific research, as opposed to knowledge management applications, has not been extensively discussed. We aim to stimulate further discussion on the advantages and challenges presented by biomedical ontologies from a scientific perspective. We review various aspects of biomedical ontologies going beyond their practical successes, and focus on some key scientific questions in two ways. First, we analyze and discuss current approaches to improve biomedical ontologies that are based largely on classical, Aristotelian ontological models of reality. Second, we raise various open questions about biomedical ontologies that require further research, analyzing in more detail those related to visual reasoning and spatial ontologies. We outline significant scientific issues that biomedical ontologies should consider, beyond current efforts of building practical consensus between them. For spatial ontologies, we suggest an approach for building "morphospatial" taxonomies, as an example that could stimulate research on fundamental open issues for biomedical ontologies. Analysis of a large number of problems with biomedical ontologies suggests that the field is very much open to alternative interpretations of current work, and in need of scientific debate and discussion that can lead to new ideas and research directions.

  16. Study on availability of GPU for scientific and engineering calculations

    International Nuclear Information System (INIS)

    Sakamoto, Kensaku; Kobayashi, Seiji

    2009-07-01

    Recently, the number of scientific and engineering calculations on GPUs (Graphic Processing Units) is increasing. It is said that GPUs have much higher peak floating-point processing power and memory bandwidth than CPUs (Central Processing Units). We have studied the effectiveness of GPUs by applying them to fundamental scientific and engineering calculations with the CUDA (Compute Unified Device Architecture) development tools. The results are as follows: 1) Computations on GPUs are effective for such calculations as matrix operations, FFT (Fast Fourier Transform) and CFD (Computational Fluid Dynamics) in the nuclear research field. 2) Highly advanced programming is required to bring out the high performance of GPUs. 3) Double-precision performance is low, and ECC (Error Correction Code) support in graphics memory systems is lacking. (author)
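    A comparison like the one described, matrix operations on a CPU versus a GPU, can be sketched with CuPy as a convenient stand-in for hand-written CUDA code. This is an illustrative analogue rather than the study's actual setup, and it assumes an NVIDIA GPU and the CuPy package are available:

```python
import time

import numpy as np

try:
    import cupy as cp  # needs an NVIDIA GPU and the CuPy package
except ImportError:
    cp = None

def time_matmul_cpu(n=2048):
    a = np.random.rand(n, n).astype(np.float32)
    t0 = time.perf_counter()
    np.matmul(a, a)
    return time.perf_counter() - t0

def time_matmul_gpu(n=2048):
    a = cp.asarray(np.random.rand(n, n).astype(np.float32))
    cp.cuda.Device().synchronize()   # make sure the host-to-device copy is done
    t0 = time.perf_counter()
    cp.matmul(a, a)
    cp.cuda.Device().synchronize()   # wait for the kernel before stopping the clock
    return time.perf_counter() - t0

if __name__ == "__main__":
    print(f"CPU matmul: {time_matmul_cpu():.3f} s")
    if cp is not None:
        print(f"GPU matmul: {time_matmul_gpu():.3f} s")
```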

  17. Techniques and tools for measuring energy efficiency of scientific software applications

    CERN Document Server

    Abdurachmanov, David; Eulisse, Giulio; Knight, Robert; Niemi, Tapio; Nurminen, Jukka K.; Nyback, Filip; Pestana, Goncalo; Ou, Zhonghong; Khan, Kashif

    2014-01-01

    The scale of scientific High Performance Computing (HPC) and High Throughput Computing (HTC) has increased significantly in recent years, and is becoming sensitive to total energy use and cost. Energy-efficiency has thus become an important concern in scientific fields such as High Energy Physics (HEP). There has been a growing interest in utilizing alternate architectures, such as low power ARM processors, to replace traditional Intel x86 architectures. Nevertheless, even though such solutions have been successfully used in mobile applications with low I/O and memory demands, it is unclear if they are suitable and more energy-efficient in the scientific computing environment. Furthermore, there is a lack of tools and experience to derive and compare power consumption between the architectures for various workloads, and eventually to support software optimizations for energy efficiency. To that end, we have performed several physical and software-based measurements of workloads from HEP applications running o...

  18. 29 CFR 1910.98 - Effective dates.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Effective dates. 1910.98 Section 1910.98 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... section, if any standard in 41 CFR part 50-204, other than a national consensus standard incorporated by...

  19. 46 CFR 98.25-75 - Ventilation.

    Science.gov (United States)

    2010-10-01

    ... shall be fitted with efficient natural or mechanical ventilation. (b) Enclosed compartments in which... 46 Shipping 4 2010-10-01 2010-10-01 false Ventilation. 98.25-75 Section 98.25-75 Shipping COAST... Ventilation. (a) All enclosed spaces containing cargo tanks fitted with bottom outlet connections shall be...

  20. 50 CFR 665.98 - Management area.

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Management area. 665.98 Section 665.98 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION... Management area. The American Samoa fishery management area is the EEZ seaward of the Territory of American...

  1. Computational chemistry in pharmaceutical research: at the crossroads.

    Science.gov (United States)

    Bajorath, Jürgen

    2012-01-01

    Computational approaches are an integral part of pharmaceutical research. However, there are many unsolved key questions that limit the scientific progress in the still evolving computational field and its impact on drug discovery. Importantly, a number of these questions are not new but date back many years. Hence, it might be difficult to conclusively answer them in the foreseeable future. Moreover, the computational field as a whole is characterized by a high degree of heterogeneity and so is, unfortunately, the quality of its scientific output. In light of this situation, it is proposed that changes in scientific standards and culture should be seriously considered now in order to lay a foundation for future progress in computational research.

  2. 7 CFR 98.4 - Analytical methods.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  3. Introduction to computers: Reference guide

    Energy Technology Data Exchange (ETDEWEB)

    Ligon, F.V.

    1995-04-01

    The "Introduction to Computers" program establishes formal partnerships with local school districts and community-based organizations, introduces computer literacy to precollege students and their parents, and encourages students to pursue Scientific, Mathematical, Engineering, and Technical careers (SET). Hands-on assignments are given in each class, reinforcing the lesson taught. In addition, the program is designed to broaden the knowledge base of teachers in scientific/technical concepts, and Brookhaven National Laboratory continues to act as a liaison, offering educational outreach to diverse community organizations and groups. This manual contains the teacher's lesson plans and the student documentation for this introduction to computers course.

  4. Visualization and Interaction in Research, Teaching, and Scientific Communication

    Science.gov (United States)

    Ammon, C. J.

    2017-12-01

    Modern computing provides many tools for exploring observations, numerical calculations, and theoretical relationships. The number of options is, in fact, almost overwhelming. But the choices provide those with modest programming skills opportunities to create unique views of scientific information and to develop deeper insights into their data, their computations, and the underlying theoretical data-model relationships. I present simple examples of using animation and human-computer interaction to explore scientific data and scientific-analysis approaches. I illustrate how a little programming ability can free scientists from the constraints of existing tools and can foster a deeper appreciation of data and models. I present examples from a suite of programming languages ranging from C to JavaScript, including the Wolfram Language. JavaScript is valuable for sharing tools and insight (hopefully) with others because it is integrated into one of the most powerful communication tools in human history, the web browser. Although too much of that power is often spent on distracting advertisements, the underlying computation and graphics engines are efficient, flexible, and almost universally available on desktop and mobile computing platforms. Many are working to fulfill the browser's potential to become the most effective tool for interactive study. Open-source frameworks for visualizing everything from algorithms to data are available, but advance rapidly. One strategy for dealing with swiftly changing tools is to adopt common, open data formats that are easily adapted (often by framework or tool developers). I illustrate the use of animation and interaction in research and teaching with examples from earthquake seismology.

  5. The digital computer

    CERN Document Server

    Parton, K C

    2014-01-01

    The Digital Computer focuses on the principles, methodologies, and applications of the digital computer. The publication takes a look at the basic concepts involved in using a digital computer, simple autocode examples, and examples of working advanced design programs. Discussions focus on transformer design synthesis program, machine design analysis program, solution of standard quadratic equations, harmonic analysis, elementary wage calculation, and scientific calculations. The manuscript then examines commercial and automatic programming, how computers work, and the components of a computer

  6. 46 CFR 98.33-11 - Smoking.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Smoking. 98.33-11 Section 98.33-11 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CARGO AND MISCELLANEOUS VESSELS SPECIAL CONSTRUCTION, ARRANGEMENT, AND OTHER PROVISIONS FOR CERTAIN DANGEROUS CARGOES IN BULK Portable Tanks for Certain Grade E Combustible Liquids and Other Regulated...

  7. 45 CFR 98.34 - Parental rights and responsibilities.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Parental rights and responsibilities. 98.34 Section 98.34 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Program Operations (Child Care Services)-Parental Rights and Responsibilities § 98.34...

  8. 45 CFR 98.65 - Audits and financial reporting.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Audits and financial reporting. 98.65 Section 98... DEVELOPMENT FUND Financial Management § 98.65 Audits and financial reporting. (a) Each Lead Agency shall have... independent standards. (g) The Secretary shall require financial reports as necessary. ...

  9. Towards Monitoring-as-a-service for Scientific Computing Cloud applications using the ElasticSearch ecosystem

    Science.gov (United States)

    Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated. This feature requires detailed monitoring and accounting of the resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to ensure flexibility to choose a different monitoring solution if needed. The heterogeneous accounting information is transferred from the database to the ElasticSearch engine via a custom Logstash plugin. Each use-case is indexed separately in ElasticSearch and we setup a set of Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS level information gathered through the API is sent to the MySQL database through an ad-hoc developed RESTful web service. Moreover, we have developed a billing system for our private Cloud, which relies on the RabbitMQ message queue for asynchronous communication to the database and on the ELK stack for its graphical interface. The Italian Grid accounting framework is also migrating to a similar set-up. Concerning the application level, we used the Root plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BESIII
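    The accounting flow described above moves records from a MySQL back-end into ElasticSearch via a custom Logstash plugin. As a minimal sketch of the indexing step only, with a hypothetical index name, field names, and endpoint rather than the Torino site's actual configuration, a per-tenant accounting document could be pushed with the official Python client as follows:

```python
from datetime import datetime, timezone

from elasticsearch import Elasticsearch  # official Python client (8.x API)

# Hypothetical endpoint; the Torino infrastructure feeds ElasticSearch from
# MySQL through a custom Logstash plugin rather than a direct client call.
es = Elasticsearch("http://localhost:9200")

accounting_record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "tenant": "alice-tier2",      # illustrative tenant name
    "vm_id": "one-4242",          # illustrative OpenNebula VM identifier
    "cpu_hours": 12.5,
    "memory_gb_hours": 48.0,
}

# Each use case gets its own index, mirroring the per-use-case indexing
# described in the record; the index name here is invented.
es.index(index="cloud-accounting-alice", document=accounting_record)
```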

  10. 24 October 2014 - President of the Republic of Ecuador R. Correa Delgado signing the guest book with Vice President L. Moreno and Director for Research and Scientific Computing S. Bertolucci.

    CERN Multimedia

    Guillaume, Jeanneret

    2014-01-01

    Visiting the ATLAS experimental cavern with Collaboration Spokesperson D. Charlton and ATLAS User F. Monticelli; throughout accompanied by Adviser for Ecuador J. Salicio Diez and Director for Research and Scientific Computing S. Bertolucci.

  11. Scientific activities 1980 Nuclear Research Center ''Democritos''

    International Nuclear Information System (INIS)

    1982-01-01

    The scientific activities and achievements of the Nuclear Research Center Democritos for the year 1980 are presented in the form of a list of 76 projects giving the title, objectives, person responsible for each project, the activities carried out, and the pertaining lists of publications. The 16 chapters of this work cover the activities of the main Divisions of the Democritos NRC: Electronics, Biology, Physics, Chemistry, Health Physics, Reactor, Scientific Directorate, Radioisotopes, Environmental Radioactivity, Soil Science, Computer Center, Uranium Exploration, Medical Service, Technological Applications, Radioimmunoassay and Training. (N.C.)

  12. On the impact of quantum computing technology on future developments in high-performance scientific computing

    OpenAIRE

    Möller, Matthias; Vuik, Cornelis

    2017-01-01

    Quantum computing technologies have become a hot topic in academia and industry receiving much attention and financial support from all sides. Building a quantum computer that can be used practically is in itself an outstanding challenge that has become the ‘new race to the moon’. Next to researchers and vendors of future computing technologies, national authorities are showing strong interest in maturing this technology due to its known potential to break many of today’s encryption technique...

  13. Computational science: Emerging opportunities and challenges

    International Nuclear Information System (INIS)

    Hendrickson, Bruce

    2009-01-01

    In the past two decades, computational methods have emerged as an essential component of the scientific and engineering enterprise. A diverse assortment of scientific applications has been simulated and explored via advanced computational techniques. Computer vendors have built enormous parallel machines to support these activities, and the research community has developed new algorithms and codes, and agreed on standards to facilitate ever more ambitious computations. However, this track record of success will be increasingly hard to sustain in coming years. Power limitations constrain processor clock speeds, so further performance improvements will need to come from ever more parallelism. This higher degree of parallelism will require new thinking about algorithms, programming models, and architectural resilience. Simultaneously, cutting edge science increasingly requires more complex simulations with unstructured and adaptive grids, and multi-scale and multi-physics phenomena. These new codes will push existing parallelization strategies to their limits and beyond. Emerging data-rich scientific applications are also in need of high performance computing, but their complex spatial and temporal data access patterns do not perform well on existing machines. These interacting forces will reshape high performance computing in the coming years.

  14. 78 FR 11896 - Center for Scientific Review; Notice of Closed Meetings

    Science.gov (United States)

    2013-02-20

    ... Scientific Review Special Emphasis Panel; Small Business: Cell, Computational, and Molecular Biology. Date... Research in Diabetes, Obesity and Endocrinology Disorders. Date: March 13, 2013. Time: 8:00 a.m. to 6:00 p... Scientific Review Special Emphasis Panel; Member Conflict: Diabetes, Metabolism and Obesity. Date: March 13...

  15. Scientific activities 1979

    International Nuclear Information System (INIS)

    1981-01-01

    The scientific activities and achievements of the Nuclear Research Center Democritus for the year 1979 are presented in the form of a list of 78 projects giving the title, objectives, commencement year, person responsible for each project, the activities carried out, and the pertaining lists of publications. The 15 chapters of this work cover the activities of the main Divisions of the Democritus NRC: Electronics, Biology, Physics, Chemistry, Health Physics, Reactor, Radioisotopes, Environmental Radioactivity, Soil Science, Computer Center, Uranium Exploration, Medical Service, Technological Applications and Training. (T.A.)

  16. Nucleoporin Nup98 mediates galectin-3 nuclear-cytoplasmic trafficking

    Energy Technology Data Exchange (ETDEWEB)

    Funasaka, Tatsuyoshi, E-mail: funasaka@staff.kanazawa-u.ac.jp [Laboratory of Molecular and Cellular Biology, Department of Biology, Faculty of Natural Systems, Institute of Science and Engineering, Kanazawa University, Ishikawa (Japan); Balan, Vitaly; Raz, Avraham [Department of Oncology and Pathology, Karmanos Cancer Institute, Wayne State University, School of Medicine, Detroit, MI (United States); Wong, Richard W., E-mail: rwong@staff.kanazawa-u.ac.jp [Laboratory of Molecular and Cellular Biology, Department of Biology, Faculty of Natural Systems, Institute of Science and Engineering, Kanazawa University, Ishikawa (Japan); Bio-AFM Frontier Research Center, Kanazawa Kanazawa University, Ishikawa (Japan)

    2013-04-26

    Highlights: •Nuclear pore protein Nup98 is a novel binding partner of galectin-3. •Nup98 transports galectin-3 into cytoplasm. •Nup98 depletion leads to galectin-3 nuclear transport and induces growth retardation. •Nup98 may involve in ß-catenin pathway through interaction with galectin-3. -- Abstract: Nucleoporin Nup98 is a component of the nuclear pore complex, and is important in transport across the nuclear pore. Many studies implicate nucleoporin in cancer progression, but no direct mechanistic studies of its effect in cancer have been reported. We show here that Nup98 specifically regulates nucleus–cytoplasm transport of galectin-3, which is a ß-galactoside-binding protein that affects adhesion, migration, and cancer progression, and controls cell growth through the ß-catenin signaling pathway in cancer cells. Nup98 interacted with galectin-3 on the nuclear membrane, and promoted galectin-3 cytoplasmic translocation whereas other nucleoporins did not show these functions. Inversely, silencing of Nup98 expression by siRNA technique localized galectin-3 to the nucleus and retarded cell growth, which was rescued by Nup98 transfection. In addition, Nup98 RNA interference significantly suppressed downstream mRNA expression in the ß-catenin pathway, such as cyclin D1 and FRA-1, while nuclear galectin-3 binds to ß-catenin to inhibit transcriptional activity. Reduced expression of ß-catenin target genes is consistent with the Nup98 reduction and the galectin-3–nucleus translocation rate. Overall, the results show Nup98’s involvement in nuclear–cytoplasm translocation of galectin-3 and ß-catenin signaling pathway in regulating cell proliferation, and the results depicted here suggest a novel therapeutic target/modality for cancers.

  17. SciCloud: A Scientific Cloud and Management Platform for Smart City Data

    DEFF Research Database (Denmark)

    Liu, Xiufeng; Nielsen, Per Sieverts; Heller, Alfred

    2017-01-01

    private scientific cloud, SciCloud, to tackle these grand challenges. SciCloud provides on-demand computing resource provisions, a scalable data management platform and an in-place data analytics environment to support the scientific research using smart city data....

  18. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    [ABSTRACT] Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chip in a high-performance workstation. With their high-performance computation capabilities, graphics processing units (GPUs) deliver much more powerful performance than conventional CPUs by means of parallel processing. In 2007, the birth of the Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in the general purpose GPU a...

  19. Computational physics problem solving with Python

    CERN Document Server

    Landau, Rubin H; Bordeianu, Cristian C

    2015-01-01

    The use of computation and simulation has become an essential part of the scientific process. Being able to transform a theory into an algorithm requires significant theoretical insight, detailed physical and mathematical understanding, and a working level of competency in programming. This upper-division text provides an unusually broad survey of the topics of modern computational physics from a multidisciplinary, computational science point of view. Its philosophy is rooted in learning by doing (assisted by many model programs), with new scientific materials as well as with the Python progr

  20. Cloud computing with e-science applications

    CERN Document Server

    Terzo, Olivier

    2015-01-01

    The amount of data in everyday life has been exploding. This data increase has been especially significant in scientific fields, where substantial amounts of data must be captured, communicated, aggregated, stored, and analyzed. Cloud Computing with e-Science Applications explains how cloud computing can improve data management in data-heavy fields such as bioinformatics, earth science, and computer science. The book begins with an overview of cloud models supplied by the National Institute of Standards and Technology (NIST), and then:Discusses the challenges imposed by big data on scientific

  1. 30th November 2010 - Norwegian Ministry of Government Administration, Reform and Church Affairs State Secretary R. Valle signing the guest book with Head of International Relations F. Pauss and Director for Research and Scientific Computing S. Bertolucci; visiting CERN Computer Centre with Information Technology Department Head F. Hemmer.

    CERN Multimedia

    Maximilien Brice

    2010-01-01

    30th November 2010 - Norwegian Ministry of Government Administration, Reform and Church Affairs State Secretary R. Valle signing the guest book with Head of International Relations F. Pauss and Director for Research and Scientific Computing S. Bertolucci; visiting CERN Computer Centre with Information Technology Department Head F. Hemmer.

  2. Techniques and tools for measuring energy efficiency of scientific software applications

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Niemi, Tapio; Pestana, Gonçalo; Khan, Kashif; Nurminen, Jukka K; Nyback, Filip; Ou, Zhonghong

    2015-01-01

    The scale of scientific High Performance Computing (HPC) and High Throughput Computing (HTC) has increased significantly in recent years, and is becoming sensitive to total energy use and cost. Energy-efficiency has thus become an important concern in scientific fields such as High Energy Physics (HEP). There has been a growing interest in utilizing alternate architectures, such as low power ARM processors, to replace traditional Intel x86 architectures. Nevertheless, even though such solutions have been successfully used in mobile applications with low I/O and memory demands, it is unclear if they are suitable and more energy-efficient in the scientific computing environment. Furthermore, there is a lack of tools and experience to derive and compare power consumption between the architectures for various workloads, and eventually to support software optimizations for energy efficiency. To that end, we have performed several physical and software-based measurements of workloads from HEP applications running on ARM and Intel architectures, and compare their power consumption and performance. We leverage several profiling tools (both in hardware and software) to extract different characteristics of the power use. We report the results of these measurements and the experience gained in developing a set of measurement techniques and profiling tools to accurately assess the power consumption for scientific workloads. (paper)
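    One common software-based approach to the kind of power measurement discussed here, on Intel x86 hosts running Linux, is to read the RAPL energy counters exposed through the powercap sysfs interface. The sketch below assumes that interface is present and readable; it illustrates the general technique and is not one of the profiling tools developed by the authors:

```python
import time

# Package-level RAPL energy counter exposed by the Linux powercap framework;
# the path and its readability depend on the platform and permissions.
RAPL_ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj():
    with open(RAPL_ENERGY_FILE) as f:
        return int(f.read().strip())

def measure(workload, *args, **kwargs):
    """Return (result, joules, seconds) for a callable workload.
    Counter wrap-around is ignored in this sketch."""
    e0, t0 = read_energy_uj(), time.perf_counter()
    result = workload(*args, **kwargs)
    e1, t1 = read_energy_uj(), time.perf_counter()
    return result, (e1 - e0) / 1e6, t1 - t0

if __name__ == "__main__":
    _, joules, seconds = measure(sum, range(10_000_000))
    print(f"{joules:.2f} J over {seconds:.2f} s, "
          f"average {joules / max(seconds, 1e-9):.1f} W")
```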

  3. 28 March 2014 - Italian Minister of Education, University and Research S. Giannini welcomed by CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci in the ATLAS experimental cavern with Former Collaboration Spokesperson F. Gianotti. Signature of the guest book with Belgian State Secretary for the Scientific Policy P. Courard.

    CERN Multimedia

    Gadmer, Jean-Claude

    2014-01-01

    28 March 2014 - Italian Minister of Education, University and Research S. Giannini welcomed by CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci in the ATLAS experimental cavern with Former Collaboration Spokesperson F. Gianotti. Signature of the guest book with Belgian State Secretary for the Scientific Policy P. Courard.

  4. Confirmatory Factor Analysis of the Delirium Rating Scale Revised-98 (DRS-R98).

    Science.gov (United States)

    Thurber, Steven; Kishi, Yasuhiro; Trzepacz, Paula T; Franco, Jose G; Meagher, David J; Lee, Yanghyun; Kim, Jeong-Lan; Furlanetto, Leticia M; Negreiros, Daniel; Huang, Ming-Chyi; Chen, Chun-Hsin; Kean, Jacob; Leonard, Maeve

    2015-01-01

    Principal components analysis applied to the Delirium Rating Scale-Revised-98 contributes to understanding the delirium construct. Using a multisite pooled international delirium database, the authors applied confirmatory factor analysis to Delirium Rating Scale-Revised-98 scores from 859 adult patients evaluated by delirium experts (delirium, N=516; nondelirium, N=343). Confirmatory factor analysis found all diagnostic features and core symptoms (cognitive, language, thought process, sleep-wake cycle, motor retardation), except motor agitation, loaded onto factor 1. Motor agitation loaded onto factor 2 with noncore symptoms (delusions, affective lability, and perceptual disturbances). Factor 1 loading supports delirium as a single construct, but when accompanied by psychosis, motor agitation's role may not be solely as a circadian activity indicator.

  5. Dimensioning storage and computing clusters for efficient High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Scientific experiments are producing huge amounts of data, and they continue to increase the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with PetaByte scale storage to delivering quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both the online (data acceptance) and the offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful s...

  6. Dimensioning storage and computing clusters for efficient high throughput computing

    International Nuclear Information System (INIS)

    Accion, E; Bria, A; Bernabeu, G; Caubet, M; Delfino, M; Espinal, X; Merino, G; Lopez, F; Martinez, F; Planas, E

    2012-01-01

    Scientific experiments are producing huge amounts of data, and the size of their datasets and total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with PetaByte scale storage to delivering quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both the online (data acceptance) and the offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.
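    The dimensioning problem described, matching aggregate storage throughput to the I/O demand of the batch farm so that CPU cycles are not wasted, can be illustrated with a back-of-envelope calculation. All figures below are invented placeholders, not numbers from the paper:

```python
import math

# Every figure below is an illustrative placeholder.
cores = 4000                         # concurrently running batch slots
io_per_job_mb_s = 5.0                # average read rate of a single job
disk_server_throughput_mb_s = 800.0  # sustained throughput of one disk server

required_mb_s = cores * io_per_job_mb_s
servers_needed = math.ceil(required_mb_s / disk_server_throughput_mb_s)

print(f"aggregate I/O demand: {required_mb_s / 1000:.1f} GB/s")
print(f"disk servers needed (no headroom): {servers_needed}")
```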

  7. The Magellan Final Report on Cloud Computing

    Energy Technology Data Exchange (ETDEWEB)

    ,; Coghlan, Susan; Yelick, Katherine

    2011-12-21

    The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs for the DOE Office of Science (SC), particularly related to serving the needs of mid- range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing from performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

  8. Power-Efficient Computing: Experiences from the COSA Project

    Directory of Open Access Journals (Sweden)

    Daniele Cesini

    2017-01-01

    Full Text Available Energy consumption is today one of the most relevant issues in operating HPC systems for scientific applications. The use of unconventional computing systems is therefore of great interest for several scientific communities looking for a better tradeoff between time-to-solution and energy-to-solution. In this context, the performance assessment of processors with a high ratio of performance per watt is necessary to understand how to realize energy-efficient computing systems for scientific applications, using this class of processors. Computing On SOC Architecture (COSA is a three-year project (2015–2017 funded by the Scientific Commission V of the Italian Institute for Nuclear Physics (INFN, which aims to investigate the performance and the total cost of ownership offered by computing systems based on commodity low-power Systems on Chip (SoCs and high energy-efficient systems based on GP-GPUs. In this work, we present the results of the project analyzing the performance of several scientific applications on several GPU- and SoC-based systems. We also describe the methodology we have used to measure energy performance and the tools we have implemented to monitor the power drained by applications while running.

  9. Tank 241-AP-106, Grab samples, 6AP-98-1, 6AP-98-2 and 6AP-98-3 Analytical results for the final report

    Energy Technology Data Exchange (ETDEWEB)

    FULLER, R.K.

    1999-02-23

    This document is the final report for tank 241-AP-106 grab samples. Three grab samples 6AP-98-1, 6AP-98-2 and 6AP-98-3 were taken from riser 1 of tank 241-AP-106 on May 28, 1998 and received by the 222-S Laboratory on May 28, 1998. Analyses were performed in accordance with the ''Compatability Grab Sampling and Analysis Plan'' (TSAP) (Sasaki, 1998) and the ''Data Quality Objectives for Tank Farms Waste Compatability Program (DQO). The analytical results are presented in the data summary report. No notification limits were exceeded. The request for sample analysis received for AP-106 indicated that the samples were polychlorinated biphenyl (PCB) suspects. The results of this analysis indicated that no PCBs were present at the Toxic Substance Control Act (TSCA) regulated limit of 50 ppm. The results and raw data for the PCB analysis are included in this document.

  10. Tank 241-AP-106, Grab samples, 6AP-98-1, 6AP-98-2 and 6AP-98-3 Analytical results for the final report

    International Nuclear Information System (INIS)

    FULLER, R.K.

    1999-01-01

    This document is the final report for tank 241-AP-106 grab samples. Three grab samples 6AP-98-1, 6AP-98-2 and 6AP-98-3 were taken from riser 1 of tank 241-AP-106 on May 28, 1998 and received by the 222-S Laboratory on May 28, 1998. Analyses were performed in accordance with the ''Compatability Grab Sampling and Analysis Plan'' (TSAP) (Sasaki, 1998) and the ''Data Quality Objectives for Tank Farms Waste Compatability Program (DQO). The analytical results are presented in the data summary report. No notification limits were exceeded. The request for sample analysis received for AP-106 indicated that the samples were polychlorinated biphenyl (PCB) suspects. The results of this analysis indicated that no PCBs were present at the Toxic Substance Control Act (TSCA) regulated limit of 50 ppm. The results and raw data for the PCB analysis are included in this document

  11. The philosophy of scientific experimentation: a review

    Science.gov (United States)

    2009-01-01

    Practicing and studying automated experimentation may benefit from philosophical reflection on experimental science in general. This paper reviews the relevant literature and discusses central issues in the philosophy of scientific experimentation. The first two sections present brief accounts of the rise of experimental science and of its philosophical study. The next sections discuss three central issues of scientific experimentation: the scientific and philosophical significance of intervention and production, the relationship between experimental science and technology, and the interactions between experimental and theoretical work. The concluding section identifies three issues for further research: the role of computing and, more specifically, automating, in experimental research, the nature of experimentation in the social and human sciences, and the significance of normative, including ethical, problems in experimental science. PMID:20098589

  12. Energy Smart Management of Scientific Data

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow; Rotem, Dron; Tsao, Shih-Chiang

    2009-04-12

    Scientific data centers comprised of high-powered computing equipment and large capacity disk storage systems consume a considerable amount of energy. Dynamic power management techniques (DPM) are commonly used for saving energy in disk systems. These involve powering down disks that exhibit long idle periods and placing them in standby mode. A file request from a disk in standby mode will incur both energy and performance penalties as it takes energy (and time) to spin up the disk before it can serve a file. For this reason, DPM has to make decisions as to when to transition the disk into standby mode such that the energy saved is greater than the energy needed to spin it up again and the performance penalty is tolerable. The length of the idle period until the DPM decides to power down a disk is called the idleness threshold. In this paper, we study both analytically and experimentally dynamic power management techniques that save energy subject to performance constraints on file access costs. Based on observed workloads of scientific applications and disk characteristics, we provide a methodology for determining file assignment to disks and computing idleness thresholds that result in significant improvements to the energy saved by existing DPM solutions while meeting response time constraints. We validate our methods with simulations that use traces taken from scientific applications.
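    The break-even reasoning behind the idleness threshold, power down only if the energy saved while idle exceeds the cost of spinning the disk back up, can be written as a short calculation. The power and energy figures below are illustrative placeholders, not measurements from the paper:

```python
# All numbers below are illustrative placeholders, not measured values.
P_IDLE = 8.0       # watts drawn by a spinning but idle disk
P_STANDBY = 1.0    # watts drawn in standby mode
E_SPINUP = 135.0   # joules needed to spin the disk back up

def break_even_seconds(p_idle=P_IDLE, p_standby=P_STANDBY, e_spinup=E_SPINUP):
    """Idle time beyond which powering down saves net energy."""
    return e_spinup / (p_idle - p_standby)

def net_energy_saved(idle_seconds, p_idle=P_IDLE, p_standby=P_STANDBY,
                     e_spinup=E_SPINUP):
    """Energy saved (possibly negative) by going to standby for an idle period."""
    return (p_idle - p_standby) * idle_seconds - e_spinup

print(f"break-even idleness threshold: {break_even_seconds():.1f} s")
print(f"net saving for a 60 s idle period: {net_energy_saved(60):.1f} J")
```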

  13. 45 CFR 98.3 - Effect on State law.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Effect on State law. 98.3 Section 98.3 Public... Goals, Purposes and Definitions § 98.3 Effect on State law. (a) Nothing in the Act or this part shall be construed to supersede or modify any provision of a State constitution or State law that prohibits the...

  14. The PBase Scientific Workflow Provenance Repository

    Directory of Open Access Journals (Sweden)

    Víctor Cuevas-Vicenttín

    2014-10-01

    Scientific workflows and their supporting systems are becoming increasingly popular for compute-intensive and data-intensive scientific experiments. The advantages scientific workflows offer include rapid and easy workflow design, software and data reuse, scalable execution, sharing and collaboration, and other advantages that altogether facilitate “reproducible science”. In this context, provenance – information about the origin, context, derivation, ownership, or history of some artifact – plays a key role, since scientists are interested in examining and auditing the results of scientific experiments. However, in order to perform such analyses on scientific results as part of extended research collaborations, an adequate environment and tools are required. Concretely, the need arises for a repository that will facilitate the sharing of scientific workflows and their associated execution traces in an interoperable manner, also enabling querying and visualization. Furthermore, such functionality should be supported while taking performance and scalability into account. With this purpose in mind, we introduce PBase: a scientific workflow provenance repository implementing the ProvONE proposed standard, which extends the emerging W3C PROV standard for provenance data with workflow specific concepts. PBase is built on the Neo4j graph database, thus offering capabilities such as declarative and efficient querying. Our experiences demonstrate the power gained by supporting various types of queries for provenance data. In addition, PBase is equipped with a user friendly interface tailored for the visualization of scientific workflow provenance data, making the specification of queries and the interpretation of their results easier and more effective.
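
    Because the repository stores provenance as a Neo4j graph, execution traces can be retrieved with declarative Cypher queries. The sketch below is illustrative only: the node labels (Execution, Data), the USED relationship type, and the connection details are assumptions made for the example, not the actual PBase schema, which the abstract does not describe.

    ```python
    # Illustrative only: node labels and relationship type are assumed, not the
    # PBase/ProvONE schema. Uses the standard Neo4j Python driver.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    # Find every upstream data artifact a given workflow execution consumed,
    # walking USED edges transitively through the provenance graph.
    LINEAGE_QUERY = """
    MATCH (e:Execution {id: $run_id})-[:USED*1..]->(d:Data)
    RETURN DISTINCT d.id AS artifact
    """

    def upstream_artifacts(run_id):
        with driver.session() as session:
            result = session.run(LINEAGE_QUERY, run_id=run_id)
            return [record["artifact"] for record in result]

    if __name__ == "__main__":
        print(upstream_artifacts("run-42"))
    ```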

  15. Scientific visualization - past, present and future

    International Nuclear Information System (INIS)

    Brodlie, K.

    1995-01-01

    This paper presents a general overview of scientific visualization from a historical orientation. It looks first at visualization before the advent of computers, and then goes on to describe the development of early visualization tools in the 'computer age'. There was a surge of interest in visualization in the latter part of the 1980s, following the publication of an NSF report. This sparked the development of a number of major visualization software systems such as AVS and IRIS Explorer. These are described, and the paper concludes with a look at future developments. ((orig.))

  16. Writing software or writing scientific articles?

    CERN Document Server

    Basaglia, Tullio; Dressendorfer, P V; Larkin, A; Pia, M G

    2008-01-01

    An analysis of publications related to high energy physics computing in refereed journals is presented. The distribution of papers associated with various fields of computing relevant to high energy physics is critically analyzed. The relative publication rate of software papers is evaluated in comparison to other closely related physics disciplines, such as nuclear physics, radiation protection and medical physics, and to hardware publications. The results hint at the fact that, in spite of the significant effort invested in high energy physics computing and its fundamental role in the experiments, this research area is underrepresented in the scientific literature; nevertheless, the analysis of citations highlights the significant impact of software publications in experimental research.

  17. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2005-11-01

    The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

  18. Computer sciences

    Science.gov (United States)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  19. 78 FR 30318 - Center for Scientific Review; Notice of Closed Meetings

    Science.gov (United States)

    2013-05-22

    ... Computational Mass-Spectrometry. Date: June 19-21, 2013. Time: 7:00 p.m. to 1:00 p.m. Agenda: To review and... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Center for Scientific Review... personal privacy. Name of Committee: Center for Scientific Review Special Emphasis Panel; Member Conflict...

  20. Cray XT4: An Early Evaluation for Petascale Scientific Simulation

    International Nuclear Information System (INIS)

    Alam, Sadaf R.; Barrett, Richard F.; Fahey, Mark R.; Kuehn, Jeffery A.; Sankaran, Ramanan; Worley, Patrick H.; Larkin, Jeffrey M.

    2007-01-01

    The scientific simulation capabilities of next generation high-end computing technology will depend on striking a balance among memory, processor, I/O, and local and global network performance across the breadth of the scientific simulation space. The Cray XT4 combines commodity AMD dual core Opteron processor technology with the second generation of Cray's custom communication accelerator in a system design whose balance is claimed to be driven by the demands of scientific simulation. This paper presents an evaluation of the Cray XT4 using microbenchmarks to develop a controlled understanding of individual system components, providing the context for analyzing and comprehending the performance of several petascale-ready applications. Results gathered from several strategic application domains are compared with observations on the previous generation Cray XT3 and other high-end computing systems, demonstrating performance improvements across a wide variety of application benchmark problems.

  1. Grid Computing

    Indian Academy of Sciences (India)

    IAS Admin

    emergence of supercomputers led to the use of computer simulation as an .... Scientific and engineering applications (e.g., TeraGrid secure gateway). Collaborative ... Encryption, privacy, protection from malicious software. Physical Layer.

  2. Network and computing infrastructure for scientific applications in Georgia

    Science.gov (United States)

    Kvatadze, R.; Modebadze, Z.

    2016-09-01

    The status of the network and computing infrastructure and the services available to the research and education community of Georgia are presented. The Research and Educational Networking Association (GRENA) provides the following network services: Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by the research teams are located at GRENA and at major state universities. The GE-01-GRENA site is included in the European Grid Infrastructure. The paper also contains information about the programs of the Learning Center and the research and development projects in which GRENA is participating.

  3. Computation: A New Open Access Journal of Computational Chemistry, Computational Biology and Computational Engineering

    Directory of Open Access Journals (Sweden)

    Karlheinz Schwarz

    2013-09-01

    Computation (ISSN 2079-3197; http://www.mdpi.com/journal/computation) is an international scientific open access journal focusing on fundamental work in the field of computational science and engineering. Computational science has become essential in many research areas by contributing to solving complex problems from fundamental science all the way to engineering. The very broad range of application domains suggests structuring this journal into three sections, which are briefly characterized below. In each section a further focus will be provided by occasionally organizing special issues on topics of high interest, collecting papers on fundamental work in the field. More applied papers should be submitted to their corresponding specialist journals. To help us achieve our goal with this journal, we have an excellent editorial board to advise us on the exciting current and future trends in computation from methodology to application. We very much look forward to hearing all about the research going on across the world. [...

  4. Methods and experimental techniques in computer engineering

    CERN Document Server

    Schiaffonati, Viola

    2014-01-01

    Computing and science reveal a synergic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of scientific method in computing is getting increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.

  5. A Hybrid Metaheuristic for Multi-Objective Scientific Workflow Scheduling in a Cloud Environment

    Directory of Open Access Journals (Sweden)

    Nazia Anwar

    2018-03-01

    Cloud computing has emerged as a high-performance computing environment with a large pool of abstracted, virtualized, flexible, and on-demand resources and services. Scheduling of scientific workflows in a distributed environment is a well-known NP-complete problem and therefore intractable with exact solutions. It becomes even more challenging in the cloud computing platform due to its dynamic and heterogeneous nature. The aim of this study is to optimize multi-objective scheduling of scientific workflows in a cloud computing environment using the proposed metaheuristic-based algorithm, Hybrid Bio-inspired Metaheuristic for Multi-objective Optimization (HBMMO). The strong global exploration ability of the nature-inspired metaheuristic Symbiotic Organisms Search (SOS) is enhanced by incorporating an efficient list-scheduling heuristic, Predict Earliest Finish Time (PEFT), into the proposed algorithm to obtain better convergence and diversity of the approximate Pareto front in terms of reduced makespan, minimized cost, and efficient load balance of the Virtual Machines (VMs). The experiments using different scientific workflow applications highlight the effectiveness, practicality, and better performance of the proposed algorithm.
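
    The multi-objective character of the problem can be illustrated with a small sketch. The objective functions and data layout below are simplified assumptions made for illustration and are not the HBMMO algorithm itself: a candidate schedule maps each task to a VM, and candidates are compared by Pareto dominance over makespan and cost.

    ```python
    # Simplified illustration (not HBMMO): evaluate a candidate workflow schedule
    # on two objectives and compare candidates by Pareto dominance.

    def evaluate(schedule, runtime, vm_price):
        """schedule: dict task -> vm; runtime: dict (task, vm) -> seconds;
        vm_price: dict vm -> cost per second. Data-transfer delays are ignored."""
        busy, cost = {}, 0.0
        for task, vm in schedule.items():
            t = runtime[(task, vm)]
            busy[vm] = busy.get(vm, 0.0) + t
            cost += t * vm_price[vm]
        makespan = max(busy.values())          # finish time of the busiest VM
        return makespan, cost

    def dominates(a, b):
        """True if solution a is no worse than b in every objective and better in one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    if __name__ == "__main__":
        runtime = {("t1", "vm1"): 10, ("t1", "vm2"): 6, ("t2", "vm1"): 4, ("t2", "vm2"): 3}
        price = {"vm1": 0.01, "vm2": 0.05}
        s1 = {"t1": "vm1", "t2": "vm1"}        # cheap but slow
        s2 = {"t1": "vm2", "t2": "vm2"}        # fast but expensive
        print(evaluate(s1, runtime, price), evaluate(s2, runtime, price))
        print(dominates(evaluate(s2, runtime, price), evaluate(s1, runtime, price)))
    ```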

  6. Topological data analysis for scientific visualization

    CERN Document Server

    Tierny, Julien

    2017-01-01

    Combining theoretical and practical aspects of topology, this book delivers a comprehensive and self-contained introduction to topological methods for the analysis and visualization of scientific data. Theoretical concepts are presented in a thorough but intuitive manner, with many high-quality color illustrations. Key algorithms for the computation and simplification of topological data representations are described in details, and their application is carefully illustrated in a chapter dedicated to concrete use cases. With its fine balance between theory and practice, "Topological Data Analysis for Scientific Visualization" constitutes an appealing introduction to the increasingly important topic of topological data analysis, for lecturers, students and researchers.

  7. A document-driven method for certifying scientific computing software for use in nuclear safety analysis

    International Nuclear Information System (INIS)

    Smith, W. Spencer; Koothoor, Mimitha

    2016-01-01

    This paper presents a documentation and development method to facilitate the certification of scientific computing software used in the safety analysis of nuclear facilities. To study the problems faced during quality assurance and certification activities, a case study was performed on legacy software used for thermal analysis of a fuel pin in a nuclear reactor. Although no errors were uncovered in the code, 27 issues of incompleteness and inconsistency were found with the documentation. This work proposes that software documentation follow a rational process, which includes a software requirements specification following a template that is reusable, maintainable, and understandable. To develop the design and implementation, this paper suggests literate programming as an alternative to traditional structured programming. Literate programming allows for documenting of numerical algorithms and code together in what is termed the literate programmer's manual. This manual is developed with explicit traceability to the software requirements specification. The traceability between the theory, numerical algorithms, and implementation facilitates achieving completeness and consistency, as well as simplifies the process of verification and the associated certification.

  8. A document-driven method for certifying scientific computing software for use in nuclear safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Smith, W. Spencer; Koothoor, Mimitha [Computing and Software Department, McMaster University, Hamilton (Canada)

    2016-04-15

    This paper presents a documentation and development method to facilitate the certification of scientific computing software used in the safety analysis of nuclear facilities. To study the problems faced during quality assurance and certification activities, a case study was performed on legacy software used for thermal analysis of a fuel pin in a nuclear reactor. Although no errors were uncovered in the code, 27 issues of incompleteness and inconsistency were found with the documentation. This work proposes that software documentation follow a rational process, which includes a software requirements specification following a template that is reusable, maintainable, and understandable. To develop the design and implementation, this paper suggests literate programming as an alternative to traditional structured programming. Literate programming allows for documenting of numerical algorithms and code together in what is termed the literate programmer's manual. This manual is developed with explicit traceability to the software requirements specification. The traceability between the theory, numerical algorithms, and implementation facilitates achieving completeness and consistency, as well as simplifies the process of verification and the associated certification.

  9. Computing on Knights and Kepler Architectures

    International Nuclear Information System (INIS)

    Bortolotti, G; Caberletti, M; Ferraro, A; Giacomini, F; Manzali, M; Maron, G; Salomoni, D; Crimi, G; Zanella, M

    2014-01-01

    A recent trend in scientific computing is the increasingly important role of co-processors, originally built to accelerate graphics rendering, and now used for general high-performance computing. The INFN Computing On Knights and Kepler Architectures (COKA) project focuses on assessing the suitability of co-processor boards for scientific computing in a wide range of physics applications, and on studying the best programming methodologies for these systems. Here we present in a comparative way our results in porting a Lattice Boltzmann code on two state-of-the-art accelerators: the NVIDIA K20X, and the Intel Xeon-Phi. We describe our implementations, analyze results and compare with a baseline architecture adopting Intel Sandy Bridge CPUs.

  10. Magnetic fusion energy and computers: the role of computing in magnetic fusion energy research and development

    International Nuclear Information System (INIS)

    1979-10-01

    This report examines the role of computing in the Department of Energy magnetic confinement fusion program. The present status of the MFECC and its associated network is described. The third part of this report examines the role of computer models in the main elements of the fusion program and discusses their dependence on the most advanced scientific computers. A review of requirements at the National MFE Computer Center was conducted in the spring of 1976. The results of this review led to the procurement of the CRAY 1, the most advanced scientific computer available, in the spring of 1978. The utilization of this computer in the MFE program has been very successful and is also described in the third part of the report. A new study of computer requirements for the MFE program was conducted during the spring of 1979 and the results of this analysis are presented in the fourth part of this report.

  11. Essential Means for Urban Computing : Specification of Web-Based Computing Platforms for Urban Planning, a Hitchhiker’s Guide

    NARCIS (Netherlands)

    Nourian, P.; Martinez-Ortiz, Carlos; Arroyo Ohori, G.A.K.

    2018-01-01

    This article provides an overview of the specifications of web-based computing platforms for urban data analytics and computational urban planning practice. There are currently a variety of tools and platforms that can be used in urban computing practices, including scientific computing languages,

  12. On the impact of quantum computing technology on future developments in high-performance scientific computing

    NARCIS (Netherlands)

    Möller, M.; Vuik, C.

    2017-01-01

    Quantum computing technologies have become a hot topic in academia and industry receiving much attention and financial support from all sides. Building a quantum computer that can be used practically is in itself an outstanding challenge that has become the ‘new race to the moon’. Next to

  13. Advances in Computer Entertainment.

    NARCIS (Netherlands)

    Nijholt, Antinus; Romão, T.; Reidsma, Dennis

    2012-01-01

    These are the proceedings of the 9th International Conference on Advances in Computer Entertainment ACE 2012). ACE has become the leading scientific forum for dissemination of cutting-edge research results in the area of entertainment computing. Interactive entertainment is one of the most vibrant

  14. Final Report for 'Center for Technology for Advanced Scientific Component Software'

    International Nuclear Information System (INIS)

    Shasharina, Svetlana

    2010-01-01

    The goal of the Center for Technology for Advanced Scientific Component Software (TASCS) is to fundamentally change the way scientific software is developed and used by bringing component-based software development technologies to high-performance scientific and engineering computing. The role of Tech-X's work in the TASCS project is to provide outreach to accelerator physics and fusion applications by introducing TASCS tools into applications, testing the tools in the applications, and modifying the tools to be more usable.

  15. Super-computer architecture

    CERN Document Server

    Hockney, R W

    1977-01-01

    This paper examines the design of the top-of-the-range, scientific, number-crunching computers. The market for such computers is not as large as that for smaller machines, but on the other hand it is by no means negligible. The present work-horse machines in this category are the CDC 7600 and IBM 360/195, and over fifty of the former machines have been sold. The types of installation that form the market for such machines are not only the major scientific research laboratories in the major countries-such as Los Alamos, CERN, Rutherford laboratory-but also major universities or university networks. It is also true that, as with sports cars, innovations made to satisfy the top of the market today often become the standard for the medium-scale computer of tomorrow. Hence there is considerable interest in examining present developments in this area. (0 refs).

  16. Physicist or computer specialist?

    Energy Technology Data Exchange (ETDEWEB)

    Clifton, J S [University College Hospital, London (United Kingdom)

    1966-06-15

    Since to most clinicians physical and computer science are two of the great mysteries of the world, the physicist in a hospital is expected by clinicians to be fully conversant with, and competent to make profound pronouncements on, all methods of computing, specific computing problems, and the suitability of computing machinery ranging from desk calculators to Atlas. This is not surprising since the proportion of the syllabus devoted to physics and mathematics in an M.B. degree is indeed meagre, and the word 'computer' has been surrounded with an aura of mysticism which suggests that it is some fantastic piece of electronic gadgetry comprehensible only to a veritable genius. The clinician consequently turns to the only scientific colleague with whom he has direct contact - the medical physicist - and expects him to be an authority. The physicist is thus thrust, however unwillingly, into the forefront of the advance of computer assistance to scientific medicine. It is therefore essential for him to acquire sufficient knowledge of computing science to enable him to provide satisfactory answers to the clinicians' queries, to proffer more detailed advice as to programming, and to convince clinicians that the computer is really a 'simpleton' which can only add and subtract, and even that only under instruction.

  17. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  18. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard

    2014-05-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In March 2013, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Fusion Energy Sciences (FES) held a review to characterize High Performance Computing (HPC) and storage requirements for FES research through 2017. This report is the result.

  19. SciSpark's SRDD : A Scientific Resilient Distributed Dataset for Multidimensional Data

    Science.gov (United States)

    Palamuttam, R. S.; Wilson, B. D.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; McGibbney, L. J.; Ramirez, P.

    2015-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF) making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We have developed SciSpark, a robust Big Data framework that extends Apache™ Spark for scaling scientific computations. Apache Spark improves the map-reduce implementation in Apache™ Hadoop for parallel computing on a cluster, by emphasizing in-memory computation, "spilling" to disk only as needed, and relying on lazy evaluation. Central to Spark is the Resilient Distributed Dataset (RDD), an in-memory distributed data structure that extends the functional paradigm provided by the Scala programming language. However, RDDs are ideal for tabular or unstructured data, and not for highly dimensional data. The SciSpark project introduces the Scientific Resilient Distributed Dataset (sRDD), a distributed-computing array structure which supports iterative scientific algorithms for multidimensional data. SciSpark processes data stored in NetCDF and HDF files by partitioning them across time or space and distributing the partitions among a cluster of compute nodes. We show usability and extensibility of SciSpark by implementing distributed algorithms for geospatial operations on large collections of multi-dimensional grids. In particular we address the problem of scaling an automated method for finding Mesoscale Convective Complexes. SciSpark provides a tensor interface to support the pluggability of different matrix libraries. We evaluate performance of the various matrix libraries in distributed pipelines, such as Nd4j™ and Breeze™. We detail the architecture and design of SciSpark, our efforts to integrate climate science algorithms, parallel ingest and partitioning (sharding) of A-Train satellite observations from model grids. These
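
    The idea of sharding multidimensional data across a cluster can be approximated with plain Spark primitives. The sketch below is not SciSpark itself; the file list, the one-file-per-time-step layout, and the variable name are assumptions made for illustration.

    ```python
    # Illustrative sketch, not SciSpark: distribute NetCDF files (assumed to hold
    # one time step each) across a Spark cluster and reduce a per-grid statistic.
    from pyspark import SparkContext
    from netCDF4 import Dataset
    import numpy as np

    sc = SparkContext(appName="grid-stats-sketch")

    paths = ["/data/precip_%04d.nc" % t for t in range(100)]   # hypothetical files

    def grid_mean(path, var="precip"):                         # assumed variable name
        """Read one file on a worker and return a small summary value."""
        with Dataset(path) as nc:
            arr = np.asarray(nc.variables[var][:], dtype=np.float64)
        return path, float(np.nanmean(arr))

    # Each partition reads and reduces its own files; only small results return
    # to the driver, which is what keeps the pipeline in memory.
    means = sc.parallelize(paths, numSlices=10).map(grid_mean).collect()
    print(means[:3])
    ```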

  20. Scientific data analysis on data-parallel platforms.

    Energy Technology Data Exchange (ETDEWEB)

    Ulmer, Craig D.; Bayer, Gregory W.; Choe, Yung Ryn; Roe, Diana C.

    2010-09-01

    As scientific computing users migrate to petaflop platforms that promise to generate multi-terabyte datasets, there is a growing need in the community to be able to embed sophisticated analysis algorithms in the computing platforms' storage systems. Data Warehouse Appliances (DWAs) are attractive for this work, due to their ability to store and process massive datasets efficiently. While DWAs have been utilized effectively in data-mining and informatics applications, they remain largely unproven in scientific workloads. In this paper we present our experiences in adapting two mesh analysis algorithms to function on five different DWA architectures: two Netezza database appliances, an XtremeData dbX database, a LexisNexis DAS, and multiple Hadoop MapReduce clusters. The main contribution of this work is insight into the differences between these DWAs from a user's perspective. In addition, we present performance measurements for ten DWA systems to help understand the impact of different architectural trade-offs in these systems.
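
    The MapReduce side of such a comparison can be sketched in the Hadoop Streaming style, where the mapper and reducer are plain scripts reading text records. The record format ("element_id,block_id,quality") and the per-block minimum-quality metric below are invented for illustration and are not the mesh-analysis algorithms evaluated in the report; the two stages are also combined into one local pipeline here for brevity, whereas Hadoop would run them as separate processes over sorted intermediate pairs.

    ```python
    #!/usr/bin/env python
    # Hadoop Streaming-style sketch with an assumed record format: each input
    # line is "element_id,block_id,quality"; we keep the worst element quality
    # per mesh block as a stand-in for a real mesh-analysis kernel.
    import sys

    def mapper(lines):
        """Emit (block_id, quality) pairs, one per mesh element."""
        for line in lines:
            _eid, block, quality = line.strip().split(",")
            yield block, float(quality)

    def reducer(pairs):
        """Aggregate the minimum quality seen for each block."""
        worst = {}
        for block, quality in pairs:
            worst[block] = min(quality, worst.get(block, float("inf")))
        return worst

    if __name__ == "__main__":
        for block, q in sorted(reducer(mapper(sys.stdin)).items()):
            print(f"{block}\t{q}")
    ```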

  1. Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation.

    Energy Technology Data Exchange (ETDEWEB)

    Saffer, Shelley (Sam) I.

    2014-12-01

    This is a final report of the DOE award DE-SC0001132, Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation. This document describes the achievements of the goals, and resulting research made possible by this award.

  2. Testing Scientific Software: A Systematic Literature Review

    Science.gov (United States)

    Kanewala, Upulee; Bieman, James M.

    2014-01-01

    Context: Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective: This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method: We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results: We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions: Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques. PMID:25125798
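
    One commonly proposed way of working around the oracle problem (offered here as a general illustration, not as a technique taken from this particular survey) is metamorphic testing: rather than checking an exact expected output, a test checks a relation that must hold between the outputs of related runs. A minimal sketch:

    ```python
    # Minimal illustration of a metamorphic test for the oracle problem: we may
    # not know the exact value of a numerical integral, but scaling the integrand
    # by a constant must scale the result by the same constant (up to tolerance).
    import math

    def trapezoid(f, a, b, n=1000):
        """Composite trapezoid rule on [a, b] with n intervals."""
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
        return total * h

    def test_scaling_relation():
        f = lambda x: math.exp(-x * x)
        base = trapezoid(f, 0.0, 2.0)
        scaled = trapezoid(lambda x: 3.0 * f(x), 0.0, 2.0)
        assert abs(scaled - 3.0 * base) < 1e-9

    if __name__ == "__main__":
        test_scaling_relation()
        print("metamorphic relation holds")
    ```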

  3. Testing Scientific Software: A Systematic Literature Review.

    Science.gov (United States)

    Kanewala, Upulee; Bieman, James M

    2014-10-01

    Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques.

  4. Scientific profile of brain-computer interfaces: Bibliometric analysis in a 10-year period.

    Science.gov (United States)

    Hu, Kejia; Chen, Chao; Meng, Qingyao; Williams, Ziv; Xu, Wendong

    2016-12-02

    With the tremendous advances in the field of brain-computer interfaces (BCI), the literature in this field has grown exponentially; examination of highly cited articles is a tool that can help identify outstanding scientific studies and landmark papers. This study examined the characteristics of 100 highly cited BCI papers over the past 10 years. The Web of Science was searched for highly cited papers related to BCI research published from 2006 to 2015. The top 100 highly cited articles were identified. The number of citations and countries, and the corresponding institutions, year of publication, study design, and research area were noted and analyzed. The 100 highly cited articles had a mean of 137.1(SE: 15.38) citations. These articles were published in 45 high-impact journals, and mostly in TRANSACTIONS ON BIOMEDICAL ENGINEERING (n=14). Of the 100 articles, 72 were original articles and the rest were review articles. These articles came from 15 countries, with the USA contributing most of the highly cited articles (n=52). Fifty-seven institutions produced these 100 highly cited articles, led by Duke University (n=7). This study provides a historical perspective on the progress in the field of BCI, allows recognition of the most influential reports, and provides useful information that can indicate areas requiring further investigation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Scientific co-operation with IAEA in the field of marine radioactivity

    International Nuclear Information System (INIS)

    Bologa, S.Alexandru

    1998-01-01

    A short presentation of the communication delivered at the Seminar on public information on peaceful uses of nuclear energy, NUCInfo'98, is given. Emphasis is placed on the training of RMRI specialists, the Institute's scientific co-operation with IAEA between 1986 and 1998, and its participation in related scientific events. Regarding specialist training, members of the Institute took part in training courses on marine radioecology, in the regional course on strategies and methodologies for applied marine radioactivity and environmental isotope studies for the Black Sea, and in the inter-regional course devoted to the determination of radionuclides in environmental samples, held in 1986, 1994 and 1997, respectively. In the field of research co-operation, the achievements obtained in the framework of five programs, dealing mainly with monitoring of marine water, sediment and biota radioactivity in the Romanian sector of the Black Sea, are presented. Also reported are the participations and contributions in international scientific events devoted to marine environmental studies and the contributions to international databases.

  6. 1 March 2012 - British University of Oxford Head of the Mathematical, Physical & Life Sciences Division A. N. Halliday FRS signing the guest book with Director for Research and Scientific Computing S. Bertolucci.

    CERN Multimedia

    Jean-Claude Gadmer

    2012-01-01

    1 March 2012 - British University of Oxford Head of the Mathematical, Physical & Life Sciences Division A. N. Halliday FRS signing the guest book with Director for Research and Scientific Computing S. Bertolucci.

  7. The graphics future in scientific applications

    International Nuclear Information System (INIS)

    Enderle, G.

    1982-01-01

    Computer graphics methods and tools are being used to a great extent in scientific research. The future development in this area will be influenced both by new hardware developments and by software advances. On the hardware sector, the development of the raster technology will lead to the increased use of colour workstations with more local processing power. Colour hardcopy devices for creating plots, slides, or movies will be available at a lower price than today. The first real 3D-workstations appear on the marketplace. One of the main activities on the software sector is the standardization of computer graphics systems, graphical files, and device interfaces. This will lead to more portable graphical application programs and to a common base for computer graphics education. (orig.)

  8. Spatial computing in interactive architecture

    NARCIS (Netherlands)

    S.O. Dulman (Stefan); M. Krezer; L. Hovestad

    2014-01-01

    Distributed computing is the theoretical foundation for applications and technologies like interactive architecture, wearable computing, and smart materials. It evolves continuously, following needs rising from scientific developments, novel uses of technology, or simply the curiosity to

  9. Toroidal deuteron accelerator for Mo-98 neutron activation

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Wagner L., E-mail: wagner.leite@ifnmg.edu.br [Instituto Federal do Norte de Minas Gerais (IFN-MG), Montes Claros, MG (Brazil)]; Campos, Tarcisio P.R., E-mail: tprcampos@pq.cnpq.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear]

    2017-07-01

    The radionuclide Tc-99m is the most useful radioisotope in nuclear medicine. It can be produced by the Mo-99 beta minus decay. Mo-99 has often been produced in a high-flux nuclear reactor through radiative neutron capture reactions on Mo-98. The present paper provides a preliminary design of a toroidal transmutation system (TTS) based on a toroidal compact deuteron accelerator, which can provide the Mo-98 transmutation into Mo-99. This system is essentially composed of a multi-aperture plasma electrode and a target, held at 180 kV, where a positive deuteron beam is accelerated toward a titanium target loaded with deuterium in which nuclear d-d fusion reactions are induced. The Particle Studio package of the Computer Simulation Technology (CST) software was applied to design, simulate and optimize the deuteron beam on the target. The MCNP code was used for the neutronics analysis. Based on the electromagnetic and neutronic simulations, the neutron yield and reaction rates were estimated. The simulated data allowed appraising the Mo-99 activity. A TTS, in a specific configuration, could produce a total deuterium current of 1.6 A at the target and a neutron yield of 10^13 n·s^-1. In an arrangement of 30 column samples, the TTS provides 230 mCi·s^-1 of Mo-99 in each column, which represents 80% of Tc-99m in secular equilibrium. In conclusion, the system holds potential for generating Mo-99 and Tc-99m with suitable activity in secular equilibrium. (author)
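
    The activity figures quoted above follow from standard activation kinetics. The sketch below uses the textbook activation-buildup formula A(t) = R · (1 − exp(−λt)) with invented illustrative numbers for the target inventory, cross section, and flux; these are not the values from the TTS simulations.

    ```python
    # Textbook activation-buildup sketch with assumed inputs (not the TTS data):
    # activity A(t) = R * (1 - exp(-lambda * t)), where the production rate
    # R = N_target * sigma * phi sets the saturation activity at long times.
    import math

    HALF_LIFE_MO99_H = 66.0
    LAMBDA = math.log(2) / (HALF_LIFE_MO99_H * 3600.0)   # decay constant, 1/s

    N_TARGET = 1.0e22        # Mo-98 atoms in the sample (assumed)
    SIGMA_CM2 = 0.13e-24     # capture cross section, barns -> cm^2 (assumed)
    PHI = 1.0e10             # neutron flux at the sample, n/cm^2/s (assumed)

    def mo99_activity_bq(irradiation_seconds):
        """Mo-99 activity after a given irradiation time, in becquerels."""
        rate = N_TARGET * SIGMA_CM2 * PHI                # production rate, atoms/s
        return rate * (1.0 - math.exp(-LAMBDA * irradiation_seconds))

    if __name__ == "__main__":
        for hours in (6, 66, 660):
            a = mo99_activity_bq(hours * 3600)
            print(f"{hours:4d} h irradiation: {a:.3e} Bq ({a / 3.7e7:.3f} mCi)")
    ```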

  10. Toroidal deuteron accelerator for Mo-98 neutron activation

    International Nuclear Information System (INIS)

    Araujo, Wagner L.; Campos, Tarcisio P.R. Universidade Federal de Minas Gerais

    2017-01-01

    The radionuclide Tc-99m is the most useful radioisotope in nuclear medicine. It can be produced by the Mo-99 beta minus decay. Mo-99 has often been produced in a high-flux nuclear reactor through radiative neutron capture reactions on Mo-98. The present paper provides a preliminary design of a toroidal transmutation system (TTS) based on a toroidal compact deuteron accelerator, which can provide the Mo-98 transmutation into Mo-99. This system is essentially composed of a multi-aperture plasma electrode and a target, held at 180 kV, where a positive deuteron beam is accelerated toward a titanium target loaded with deuterium in which nuclear d-d fusion reactions are induced. The Particle Studio package of the Computer Simulation Technology (CST) software was applied to design, simulate and optimize the deuteron beam on the target. The MCNP code was used for the neutronics analysis. Based on the electromagnetic and neutronic simulations, the neutron yield and reaction rates were estimated. The simulated data allowed appraising the Mo-99 activity. A TTS, in a specific configuration, could produce a total deuterium current of 1.6 A at the target and a neutron yield of 10^13 n·s^-1. In an arrangement of 30 column samples, the TTS provides 230 mCi·s^-1 of Mo-99 in each column, which represents 80% of Tc-99m in secular equilibrium. In conclusion, the system holds potential for generating Mo-99 and Tc-99m with suitable activity in secular equilibrium. (author)

  11. 40 CFR 98.414 - Monitoring and QA/QC requirements.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Monitoring and QA/QC requirements. 98.414 Section 98.414 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Industrial Greenhouse Gases § 98.414 Monitoring...

  12. 40 CFR 98.415 - Procedures for estimating missing data.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data. 98.415 Section 98.415 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Industrial Greenhouse Gases § 98.415...

  13. 40 CFR 98.196 - Data reporting requirements.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Data reporting requirements. 98.196 Section 98.196 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... use in a purification process). If CO2 was used on-site, provide the information in paragraphs (b)(17...

  14. High Energy Physics Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and High Energy Physics, June 10-12, 2015, Bethesda, Maryland

    Energy Technology Data Exchange (ETDEWEB)

    Habib, Salman [Argonne National Lab. (ANL), Argonne, IL (United States); Roser, Robert [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Antypas, Katie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [Esnet, Berkeley, CA (United States); Dosanjh, Sudip [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hack, James [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Monga, Inder [Esnet, Berkeley, CA (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Riley, Katherine [Argonne National Lab. (ANL), Argonne, IL (United States); Rotman, Lauren [Esnet, Berkeley, CA (United States); Straatsma, Tjerk [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wells, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Tim [Argonne National Lab. (ANL), Argonne, IL (United States); Almgren, A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Amundson, J. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Bailey, Stephen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bard, Deborah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bloom, Ken [Univ. of Nebraska, Lincoln, NE (United States); Bockelman, Brian [Univ. of Nebraska, Lincoln, NE (United States); Borgland, Anders [SLAC National Accelerator Lab., Menlo Park, CA (United States); Borrill, Julian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Boughezal, Radja [Argonne National Lab. (ANL), Argonne, IL (United States); Brower, Richard [Boston Univ., MA (United States); Cowan, Benjamin [SLAC National Accelerator Lab., Menlo Park, CA (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Frontiere, Nicholas [Argonne National Lab. (ANL), Argonne, IL (United States); Fuess, Stuart [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Ge, Lixin [SLAC National Accelerator Lab., Menlo Park, CA (United States); Gnedin, Nick [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gottlieb, Steven [Indiana Univ., Bloomington, IN (United States); Gutsche, Oliver [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Han, T. [Indiana Univ., Bloomington, IN (United States); Heitmann, Katrin [Argonne National Lab. (ANL), Argonne, IL (United States); Hoeche, Stefan [SLAC National Accelerator Lab., Menlo Park, CA (United States); Ko, Kwok [SLAC National Accelerator Lab., Menlo Park, CA (United States); Kononenko, Oleksiy [SLAC National Accelerator Lab., Menlo Park, CA (United States); LeCompte, Thomas [Argonne National Lab. (ANL), Argonne, IL (United States); Li, Zheng [SLAC National Accelerator Lab., Menlo Park, CA (United States); Lukic, Zarija [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mori, Warren [Univ. of California, Los Angeles, CA (United States); Ng, Cho-Kuen [SLAC National Accelerator Lab., Menlo Park, CA (United States); Nugent, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oleynik, Gene [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); O’Shea, Brian [Michigan State Univ., East Lansing, MI (United States); Padmanabhan, Nikhil [Yale Univ., New Haven, CT (United States); Petravick, Donald [Univ. of Illinois, Urbana, IL (United States). 
National Center for Supercomputing Applications; Petriello, Frank J. [Argonne National Lab. (ANL), Argonne, IL (United States); Pope, Adrian [Argonne National Lab. (ANL), Argonne, IL (United States); Power, John [Argonne National Lab. (ANL), Argonne, IL (United States); Qiang, Ji [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Reina, Laura [Florida State Univ., Tallahassee, FL (United States); Rizzo, Thomas Gerard [SLAC National Accelerator Lab., Menlo Park, CA (United States); Ryne, Robert [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schram, Malachi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Spentzouris, P. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Toussaint, Doug [Univ. of Arizona, Tucson, AZ (United States); Vay, Jean Luc [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Viren, B. [Brookhaven National Lab. (BNL), Upton, NY (United States); Wuerthwein, Frank [Univ. of California, San Diego, CA (United States); Xiao, Liling [SLAC National Accelerator Lab., Menlo Park, CA (United States); Coffey, Richard [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-11-29

    The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude — and in some cases greater — than that available currently. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP’s research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR

  15. Manual for JSSL (JAERI Scientific Subroutine Library)

    International Nuclear Information System (INIS)

    Fujimura, Toichiro; Tsutsui, Tsuneo

    1991-09-01

    JSSL (JAERI Scientific Subroutine Library) is a library of scientific subroutines developed or modified in JAERI. They are classified into sixteen fields (Special Functions, Linear Problems, Eigenvalue and Eigenvector Problems, Non Linear Problems, Mathematical Programming, Extreme Value Problems, Transformations, Functional Approximation Methods, Numerical Differential and Integral Methods, Numerical Differential and Integral Equations, Statistical Functions, Physical Problems, I/O Routines, Plotter Routines, Computer System Functions and Others). This report is the user manual for the revised version of JSSL which involves evaluated subroutines selected from the previous compilation of JSSL, applied in almost all the fields. (author)

  16. 42 CFR 482.98 - Condition of participation: Human resources.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Condition of participation: Human resources. 482.98 Section 482.98 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Specialty Hospitals Transplant Center Process Requirements § 482.98 Condition of participation: Human...

  17. Excitation functions of the 98Mo+d reactions

    International Nuclear Information System (INIS)

    Zarubin, P.P.; Padalko, V.Yu.; Khrisanfov, Yu.V.; Lebedev, P.P.; Podkopaev, Yu.N.

    The excitation functions of the 98Mo+d reactions were studied. The energy dependence of the (d,p), (d,n) and (d,α) reactions was investigated by activation analysis. Deuteron energies in the range 6-12 MeV were set by means of aluminium filters. 98Mo foils with surface densities of 1.02, 0.23 and 0.14 mg·cm^-2 and a 98Mo enrichment of 94.1% were used as targets. The gamma spectra were measured with a Ge(Li) detector. The 98Mo(d,p)99Mo excitation function was determined by detecting the 739 and 181 keV γ-radiation of 99Mo (T_1/2 = 66.47 h); the 140 keV γ-radiation of 99mTc (T_1/2 = 6 h) was detected to determine the 98Mo(d,n)99mTc excitation function, and the 460, 568, 1091, 1200 and 1492 keV γ-quanta of 96Nb (T_1/2 = 23.35 h) for the 98Mo(d,α)96Nb reaction. A wide extremum was observed in the excitation function at E_d of approximately 10 MeV. The ratio of cross sections σ^m(d,n)/σ(d,p) on the 98Mo target was determined and found to be a decreasing function of the deuteron energy. The relative cross sections were determined with an accuracy of ±5%, while the absolute cross-section values have an accuracy of ±15%.

  18. 45 CFR 98.13 - Applying for Funds.

    Science.gov (United States)

    2010-10-01

    ... General Application Procedures § 98.13 Applying for Funds. The Lead Agency of a State or Territory shall... lobbying payments; (3) An assurance that the Lead Agency provides a drug-free workplace pursuant to 45 CFR... in § 98.17; and (d) Such other information as specified by the Secretary. ...

  19. 7 CFR 1786.98 - Discounted present value.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 12 2010-01-01 2010-01-01 false Discounted present value. 1786.98 Section 1786.98... Discounted present value. (a) The Discounted Present Value shall be calculated by RUS before prepayment is made by summing the present values of all remaining payments on all outstanding notes according to the...

  20. 77 FR 12823 - Advanced Scientific Computing Advisory Committee

    Science.gov (United States)

    2012-03-02

    ... Early Career technical talks Summary of Applied Math and Computer Science Workshops ASCR's new SBIR... least 5 business days prior to the meeting. Reasonable provision will be made to include the scheduled... the orderly conduct of business. Public comment will follow the 10-minute rule. Minutes: The minutes...

  1. 40 CFR 98.33 - Calculating GHG emissions.

    Science.gov (United States)

    2010-07-01

    ...)(2)(ii) of this section. MVC = Molar volume conversion factor (849.5 scf per kg-mole at standard... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING General Stationary Fuel Combustion Sources § 98.33 Calculating... volume of fuel combusted per year, from company records as defined in § 98.6 (express mass in short tons...

  2. Dynamic association of NUP98 with the human genome.

    Directory of Open Access Journals (Sweden)

    Yun Liang

    Faithful execution of developmental gene expression programs occurs at multiple levels and involves many different components such as transcription factors, histone-modification enzymes, and mRNA processing proteins. Recent evidence suggests that nucleoporins, well known components that control nucleo-cytoplasmic trafficking, have wide-ranging functions in developmental gene regulation that potentially extend beyond their role in nuclear transport. Whether the unexpected role of nuclear pore proteins in transcription regulation, which was initially described in fungi and flies, also applies to human cells is unknown. Here we show at a genome-wide level that the nuclear pore protein NUP98 associates with developmentally regulated genes active during human embryonic stem cell differentiation. Overexpression of a dominant-negative fragment of NUP98 decreases the expression levels of NUP98-bound genes. In addition, we identify two modes of developmental gene regulation by NUP98 that are differentiated by the spatial localization of NUP98 target genes. Genes in the initial stage of developmental induction can associate with NUP98 that is embedded in the nuclear pores at the nuclear periphery. Alternatively, genes that are highly induced can interact with NUP98 in the nuclear interior, away from the nuclear pores. This work demonstrates for the first time that NUP98 dynamically associates with the human genome during differentiation, revealing a role of a nuclear pore protein in regulating developmental gene expression programs.

  3. Tools for Reproducibility and Extensibility in Scientific Research

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Open inquiry through reproducing results is fundamental to the scientific process. Contemporary research relies on software engineering pipelines to collect, process, and analyze data. The open source projects within Project Jupyter facilitate these objectives by bringing software engineering within the context of scientific communication. We will highlight specific projects that are computational building blocks for scientific communication, starting with the Jupyter Notebook. We will also explore applications of projects that build off of the Notebook such as Binder, JupyterHub, and repo2docker. We will discuss how these projects can individually and jointly improve reproducibility in scientific communication. Finally, we will demonstrate applications of Jupyter software that allow researchers to build upon the code of other scientists, both to extend their work and the work of others.    There will be a follow-up demo session in the afternoon, hosted by iML. Details can be foun...

  4. Neuromorphic Computing, Architectures, Models, and Applications. A Beyond-CMOS Approach to Future Computing, June 29-July 1, 2016, Oak Ridge, TN

    Energy Technology Data Exchange (ETDEWEB)

    Potok, Thomas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Schuman, Catherine [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Robert [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hylton, Todd [Brain Corporation, San Diego, CA (United States); Li, Hai [Univ. of Pittsburgh, PA (United States); Pino, Robinson [US Dept. of Energy, Washington, DC (United States)

    2016-12-31

    The White House and Department of Energy have been instrumental in driving the development of a neuromorphic computing program to help the United States continue its lead in basic research into (1) Beyond Exascale—high performance computing beyond Moore’s Law and von Neumann architectures, (2) Scientific Discovery—new paradigms for understanding increasingly large and complex scientific data, and (3) Emerging Architectures—assessing the potential of neuromorphic and quantum architectures. Neuromorphic computing spans a broad range of scientific disciplines from materials science to devices, to computer science, to neuroscience, all of which are required to solve the neuromorphic computing grand challenge. In our workshop we focus on the computer science aspects, specifically from a neuromorphic device through an application. Neuromorphic devices present a very different paradigm to the computer science community from traditional von Neumann architectures, which raises six major questions about building a neuromorphic application from the device level. We used these fundamental questions to organize the workshop program and to direct the workshop panels and discussions. From the white papers, presentations, panels, and discussions, there emerged several recommendations on how to proceed.

  5. Sergio Bertolucci - Towards dynamic scientific research

    CERN Multimedia

    2009-01-01

    Sergio Bertolucci has become Director for Research and Scientific Computing at the moment when the LHC is almost ready to deliver its first physics data. In this interview, he explains the importance of the perfect mix of collaboration and competition that will make the LHC scientific programme successful. Sergio Bertolucci’s enthusiasm for being at CERN at this historic time is evident from the first minute of the interview and has not waned after an hour speaking with us. Bertolucci’s recipe for a successful start-up of the physics delivery phase of the LHC is "Festina lente", a Latin motto that means something like ‘hasten slowly’. "The LHC is probably the biggest and most complex scientific enterprise ever undertaken by humanity," says Bertolucci. "It will certainly lead us towards a new phase of our understanding of the Universe. Nature is already giving us some indications but only the LHC will allow us to observe the ne...

  6. Impact of research investment on scientific productivity of junior researchers.

    Science.gov (United States)

    Farrokhyar, Forough; Bianco, Daniela; Dao, Dyda; Ghert, Michelle; Andruszkiewicz, Nicole; Sussman, Jonathan; Ginsberg, Jeffrey S

    2016-12-01

    There is a demand for providing evidence on the effectiveness of research investments on the promotion of novice researchers' scientific productivity and production of research with new initiatives and innovations. We used a mixed method approach to evaluate the funding effect of the New Investigator Fund (NIF) by comparing scientific productivity between award recipients and non-recipients. We reviewed NIF grant applications submitted from 2004 to 2013. Scientific productivity was assessed by confirming the publication of the NIF-submitted application. Online databases were searched, independently and in duplicate, to locate the publications. Applicants' perceptions and experiences were collected through a short survey and categorized into specified themes. Multivariable logistic regression was performed. Odds ratios (OR) with 95 % confidence intervals (CI) are reported. Of 296 applicants, 163 (55 %) were awarded. Gender, affiliation, and field of expertise did not affect funding decisions. More physicians with graduate education (32.0 %) and applicants with a doctorate degree (21.5 %) were awarded than applicants without postgraduate education (9.8 %). Basic science research (28.8 %), randomized controlled trials (24.5 %), and feasibility/pilot trials (13.3 %) were awarded more than observational designs (p   scientific productivity and professional growth of novice investigators and production of research with new initiatives and innovations. Further efforts are recommended to enhance the support of small grant funding programs.

  7. Program MASTERCALC: an interactive computer program for radioanalytical computations. Description and operating instructions

    International Nuclear Information System (INIS)

    Goode, W.

    1980-10-01

    MASTERCALC is a computer program written to support radioanalytical computations in the Los Alamos Scientific Laboratory (LASL) Environmental Surveillance Group. Included in the program are routines for gross alpha and beta, ³H, gross gamma, ⁹⁰Sr, and alpha spectroscopic determinations. A description of MASTERCALC is presented and its source listing is included. Operating instructions and example computing sessions are given for each type of analysis.

  8. ASCR Cybersecurity for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Piesert, Sean

    2015-02-27

    The Department of Energy (DOE) has the responsibility to address the energy, environmental, and nuclear security challenges that face our nation. Much of DOE’s enterprise involves distributed, collaborative teams; a significant fraction involves “open science,” which depends on multi-institutional, often international collaborations that must access or share significant amounts of information between institutions and over networks around the world. The mission of the Office of Science is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security of the United States. The ability of DOE to execute its responsibilities depends critically on its ability to assure the integrity and availability of scientific facilities and computer systems, and of the scientific, engineering, and operational software and data that support its mission.

  9. Organic Computing

    CERN Document Server

    Würtz, Rolf P

    2008-01-01

    Organic Computing is a research field emerging around the conviction that problems of organization in complex systems in computer science, telecommunications, neurobiology, molecular biology, ethology, and possibly even sociology can be tackled scientifically in a unified way. From the computer science point of view, the apparent ease with which living systems solve computationally difficult problems makes it inevitable to adopt strategies observed in nature for creating information processing machinery. In this book, the major ideas behind Organic Computing are delineated, together with a sparse sample of computational projects undertaken in this new field. Biological metaphors include evolution, neural networks, gene-regulatory networks, networks of brain modules, the hormone system, insect swarms, and ant colonies. Applications are as diverse as system design, optimization, artificial growth, task allocation, clustering, routing, face recognition, and sign language understanding.

  10. Human Endometrial CD98 Is Essential for Blastocyst Adhesion

    Science.gov (United States)

    Domínguez, Francisco; Simón, Carlos; Quiñonero, Alicia; Ramírez, Miguel Ángel; González-Muñoz, Elena; Burghardt, Hans; Cervero, Ana; Martínez, Sebastián; Pellicer, Antonio; Palacín, Manuel; Sánchez-Madrid, Francisco; Yáñez-Mó, María

    2010-01-01

    Background Understanding the molecular basis of embryonic implantation is of great clinical and biological relevance. Little is currently known about the adhesion receptors that determine endometrial receptivity for embryonic implantation in humans. Methods and Principal Findings Using two human endometrial cell lines characterized by low and high receptivity, we identified the membrane receptor CD98 as a novel molecule selectively and significantly associated with the receptive phenotype. In human endometrial samples, CD98 was the only molecule studied whose expression was restricted to the implantation window in human endometrial tissue. CD98 expression was restricted to the apical surface and included in tetraspanin-enriched microdomains of primary endometrial epithelial cells, as demonstrated by the biochemical association between CD98 and tetraspanin CD9. CD98 expression was induced in vitro by treatment of primary endometrial epithelial cells with human chorionic gonadotropin, 17-β-estradiol, LIF or EGF. Endometrial overexpression of CD98 or tetraspanin CD9 greatly enhanced mouse blastocyst adhesion, while their siRNA-mediated depletion reduced the blastocyst adhesion rate. Conclusions These results indicate that CD98, a component of tetraspanin-enriched microdomains, appears to be an important determinant of human endometrial receptivity during the implantation window. PMID:20976164

  11. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods
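
    As a minimal, self-contained illustration of the Monte Carlo techniques the book covers (our own sketch, not code from the book), the following Python snippet estimates a one-dimensional integral by averaging an integrand at uniformly drawn sample points; the integrand and sample count are arbitrary choices.

        # Minimal Monte Carlo sketch (illustrative only): estimate the integral of
        # f(x) = exp(-x^2) over [0, 1] by averaging the integrand at random points.
        import math
        import random

        def mc_integrate(f, n_samples=100_000, seed=42):
            rng = random.Random(seed)
            total = sum(f(rng.random()) for _ in range(n_samples))
            return total / n_samples   # interval width is 1, so the mean is the estimate

        estimate = mc_integrate(lambda x: math.exp(-x * x))
        print(f"Monte Carlo estimate: {estimate:.5f}")   # exact value is about 0.74682

    The error of such an estimate shrinks like one over the square root of the sample count, which is precisely the behavior that quasi-Monte Carlo methods aim to improve upon.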

  12. Extreme Scale Computing for First-Principles Plasma Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Choong-Seock [Princeton University]

    2011-10-12

    World superpowers are in the middle of the “Computnik” race. The US Department of Energy (and National Nuclear Security Administration) wishes to launch exascale computer systems into the scientific (and national security) world by 2018. The objective is to solve important scientific problems and to predict the outcomes using the most fundamental scientific laws, which would not be possible otherwise. Being chosen into the next “frontier” group can be of great benefit to a scientific discipline. An extreme scale computer system requires different types of algorithms and programming philosophy from those we have been accustomed to. Only a handful of scientific codes are blessed to be capable of scalable usage of today’s largest computers in operation at petascale (using more than 100,000 cores concurrently). Fortunately, a few magnetic fusion codes are competing well in this race using the “first principles” gyrokinetic equations. These codes are beginning to study the fusion plasma dynamics in full-scale realistic diverted device geometry in natural nonlinear multiscale, including the large scale neoclassical and small scale turbulence physics, but excluding some ultra fast dynamics. In this talk, most of the above-mentioned topics will be introduced at an executive level. Representative properties of the extreme scale computers, modern programming exercises to take advantage of them, and different philosophies in the data flows and analyses will be presented. Examples of the multi-scale multi-physics scientific discoveries made possible by solving the gyrokinetic equations on extreme scale computers will be described. Future directions into “virtual tokamak experiments” will also be discussed.

  13. Engineering of systems for application of scientific computing in industry

    OpenAIRE

    Loeve, W.

    1992-01-01

    Mathematics software is of growing importance for computer simulation in industrial computer-aided engineering. To be applicable in industry, the mathematics software and supporting software must be structured in such a way that functions and performance can be maintained easily. The present paper describes a method for developing mathematics software in such a way that this requirement can be met.

  14. Scientific Inquiry Self-Efficacy and Computer Game Self-Efficacy as Predictors and Outcomes of Middle School Boys' and Girls' Performance in a Science Assessment in a Virtual Environment

    Science.gov (United States)

    Bergey, Bradley W.; Ketelhut, Diane Jass; Liang, Senfeng; Natarajan, Uma; Karakus, Melissa

    2015-01-01

    The primary aim of the study was to examine whether performance on a science assessment in an immersive virtual environment was associated with changes in scientific inquiry self-efficacy. A secondary aim of the study was to examine whether performance on the science assessment was equitable for students with different levels of computer game…

  15. Computers, pattern, chaos and beauty

    CERN Document Server

    Pickover, Clifford A

    1980-01-01

    Combining fractal theory with computer art, this book introduces a creative use of computers. It describes graphic methods for detecting patterns in complicated data and illustrates simple techniques for visualizing chaotic behavior. ""Beautiful."" - Martin Gardner, Scientific American. Over 275 illustrations, 29 in color.
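
    As a generic illustration of the kind of chaotic behavior such visualization techniques target (our own sketch, not an example taken from the book), the logistic map below produces the data behind a classic bifurcation plot.

        # Iterate the logistic map x -> r*x*(1-x) and collect long-run values for a
        # few growth rates r; plotting (r, x) pairs gives the familiar bifurcation
        # diagram associated with the onset of chaos.
        def logistic_attractor(r, x0=0.5, warmup=500, keep=100):
            x = x0
            for _ in range(warmup):          # discard the transient
                x = r * x * (1.0 - x)
            points = []
            for _ in range(keep):            # record values on the attractor
                x = r * x * (1.0 - x)
                points.append(x)
            return points

        for r in (2.8, 3.2, 3.5, 3.9):       # fixed point, 2-cycle, 4-cycle, chaos
            values = logistic_attractor(r)
            print(r, sorted({round(v, 3) for v in values}))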

  16. 46 CFR 98.25-50 - Filling and discharge pipes.

    Science.gov (United States)

    2010-10-01

    ... with a secondary remote control of a type acceptable to the Commandant. (c) The excess flow, internal... 46 Shipping 4 2010-10-01 2010-10-01 false Filling and discharge pipes. 98.25-50 Section 98.25-50... § 98.25-50 Filling and discharge pipes. (a) Filling connections shall be provided with one of the...

  17. On the Support of Scientific Workflows over Pub/Sub Brokers

    Directory of Open Access Journals (Sweden)

    Edwin Cedeño

    2013-08-01

    Full Text Available The execution of scientific workflows is gaining importance as more computing resources are available in the form of grid environments. The Publish/Subscribe paradigm offers well-proven solutions for sustaining distributed scenarios while maintaining the high level of task decoupling required by scientific workflows. In this paper, we propose a new model for supporting scientific workflows that improves the dissemination of control events. The proposed solution is based on the mapping of workflow tasks to the underlying Pub/Sub event layer, and the definition of interfaces and procedures for execution on brokers. In this paper we also analyze the strengths and weaknesses of current solutions that are based on existing message exchange models for scientific workflows. Finally, we explain how our model improves the information dissemination, event filtering, task decoupling and the monitoring of scientific workflows.

  18. On the support of scientific workflows over Pub/Sub brokers.

    Science.gov (United States)

    Morales, Augusto; Robles, Tomas; Alcarria, Ramon; Cedeño, Edwin

    2013-08-20

    The execution of scientific workflows is gaining importance as more computing resources are available in the form of grid environments. The Publish/Subscribe paradigm offers well-proven solutions for sustaining distributed scenarios while maintaining the high level of task decoupling required by scientific workflows. In this paper, we propose a new model for supporting scientific workflows that improves the dissemination of control events. The proposed solution is based on the mapping of workflow tasks to the underlying Pub/Sub event layer, and the definition of interfaces and procedures for execution on brokers. In this paper we also analyze the strengths and weaknesses of current solutions that are based on existing message exchange models for scientific workflows. Finally, we explain how our model improves the information dissemination, event filtering, task decoupling and the monitoring of scientific workflows.
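
    The core idea of mapping workflow tasks onto a Pub/Sub event layer can be sketched with a toy in-memory broker; the topic names and task wiring below are our own illustration and do not reproduce the interfaces defined in the paper.

        # Toy sketch: workflow tasks coupled only through broker topics. Each task
        # subscribes to the topic carrying its input event and publishes a completion
        # event, so tasks remain decoupled from one another.
        from collections import defaultdict

        class Broker:
            def __init__(self):
                self.subscribers = defaultdict(list)

            def subscribe(self, topic, callback):
                self.subscribers[topic].append(callback)

            def publish(self, topic, payload):
                for callback in self.subscribers[topic]:
                    callback(payload)

        broker = Broker()

        def preprocess(data):
            broker.publish("task.preprocess.done", data.upper())

        def analyze(data):
            broker.publish("task.analyze.done", f"analyzed({data})")

        broker.subscribe("workflow.start", preprocess)
        broker.subscribe("task.preprocess.done", analyze)
        broker.subscribe("task.analyze.done", print)          # final consumer

        broker.publish("workflow.start", "raw sample")        # prints: analyzed(RAW SAMPLE)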

  19. Improving the Efficiency of the Nodal Integral Method With the Portable, Extensible Tool-kit for Scientific Computation

    International Nuclear Information System (INIS)

    Toreja, Allen J.; Uddin, Rizwan

    2002-01-01

    An existing implementation of the nodal integral method for the time-dependent convection-diffusion equation is modified to incorporate various PETSc (Portable, Extensible Tool-kit for Scientific Computation) solver and pre-conditioner routines. In the modified implementation, the default iterative Gauss-Seidel solver is replaced with one of the following PETSc iterative linear solver routines: Generalized Minimal Residuals, Stabilized Bi-conjugate Gradients, or Transpose-Free Quasi-Minimal Residuals. For each solver, a Jacobi or a Successive Over-Relaxation pre-conditioner is used. Two sample problems, one with a low Peclet number and one with a high Peclet number, are solved using the new implementation. In all the cases tested, the new implementation with the PETSc solver routines outperforms the original Gauss-Seidel implementation. Moreover, the PETSc Stabilized Bi-conjugate Gradients routine performs the best on the two sample problems leading to CPU times that are less than half the CPU times of the original implementation. (authors)
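
    To give a rough idea of what such a solver swap looks like in code, the petsc4py sketch below (our own minimal example, assuming petsc4py is installed; it is not the nodal integral implementation described above) assembles a small tridiagonal system and solves it with Stabilized Bi-conjugate Gradients and an SOR pre-conditioner.

        # Minimal petsc4py sketch: tridiagonal system solved with BiCGStab + SOR,
        # mirroring one of the solver/pre-conditioner pairs named in the record.
        from petsc4py import PETSc

        n = 100
        A = PETSc.Mat().createAIJ([n, n], nnz=3)       # sparse matrix, ~3 entries per row
        for i in range(n):
            A[i, i] = 2.0
            if i > 0:
                A[i, i - 1] = -1.0
            if i < n - 1:
                A[i, i + 1] = -1.0
        A.assemble()

        b = A.createVecLeft()
        b.set(1.0)
        x = A.createVecRight()

        ksp = PETSc.KSP().create()
        ksp.setOperators(A)
        ksp.setType(PETSc.KSP.Type.BCGS)               # Stabilized Bi-conjugate Gradients
        ksp.getPC().setType(PETSc.PC.Type.SOR)         # Successive Over-Relaxation
        ksp.setTolerances(rtol=1e-8)
        ksp.solve(b, x)
        print("iterations:", ksp.getIterationNumber())

    Switching to, say, GMRES with a Jacobi pre-conditioner is a two-line change (PETSc.KSP.Type.GMRES and PETSc.PC.Type.JACOBI), which is the kind of flexibility the study above exploits.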

  20. Parallel visualization on leadership computing resources

    Energy Technology Data Exchange (ETDEWEB)

    Peterka, T; Ross, R B [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Shen, H-W [Department of Computer Science and Engineering, Ohio State University, Columbus, OH 43210 (United States); Ma, K-L [Department of Computer Science, University of California at Davis, Davis, CA 95616 (United States); Kendall, W [Department of Electrical Engineering and Computer Science, University of Tennessee at Knoxville, Knoxville, TN 37996 (United States); Yu, H, E-mail: tpeterka@mcs.anl.go [Sandia National Laboratories, California, Livermore, CA 94551 (United States)

    2009-07-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  1. Parallel visualization on leadership computing resources

    International Nuclear Information System (INIS)

    Peterka, T; Ross, R B; Shen, H-W; Ma, K-L; Kendall, W; Yu, H

    2009-01-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  2. CERN Scientific Book Fair 2013

    CERN Multimedia

    CERN Library

    2013-01-01

    The CERN Bookshop and CERN Library invite you to attend the 2013 CERN Book Fair, a two-day scientific event offering you the opportunity to meet key publishers and to browse and purchase books at significant discounts.   Key publishers will present a selection of titles in physics, technology, mathematics, engineering, computing and popular science. You are welcome to come along and meet the publishers’ representatives or simply have a look at the books on sale. The fair will take place in the Main Building (Bldg. 500) on the ground floor near Restaurant 1 on Monday 9 and Tuesday 10 September. Participating or represented publishers include: Oxford University Press, Princeton University Press, Springer, Wiley, and World Scientific-Imperial College Press. Fair opening times:  - Monday 9 September 9:00 - 18:00  - Tuesday 10 September 9:00 - 18:00

  3. 2004 Annual Scientific Conference. Program and Abstracts

    International Nuclear Information System (INIS)

    Barborica, Andrei; Bulinski, Mircea; Stefan, Sabina

    2005-01-01

    As a consequence of its long experience in both education and research, the Physics Department of the University of Bucharest offers high-standard undergraduate and graduate programs of higher education in the physical sciences. The long-term strategy adopted by the faculty was focused on developing scientific research in modern topics of theoretical, experimental and applied physics as well as in inter-disciplinary fields such as biophysics, medical physics, physics and protection of the environment, and physics-computer science. Following this strategy the Faculty of Physics has diversified the research activity, developing new research laboratories and encouraging the academic community to approach modern and competitive research projects. Every year the Physics Department of the University of Bucharest organizes the 'Annual Scientific Conference' to present the most interesting scientific results obtained within the department. This 2004 scientific session is also open to interested physics researchers from other institutes and universities in the country. This scientific event represents a recognition and a continuation of the prestigious tradition of physics research performed within the University. The scientific research in the Physics Department is performed in groups and research centers, with final-year undergraduate and graduate students involved to a large extent in the research work. There are 5 research centers with the status of Center of Excellence in research. The long-term strategy adopted by the faculty was focused on developing the scientific research in modern topics of theoretical, experimental and applied physics, as well as in inter-disciplinary fields such as biophysics, medical physics, physics and protection of the environment, physics - computer science. Following this strategy, the Faculty of Physics has diversified the research activity, developing new research laboratories and encouraging the academic community to perform modern

  4. 2003 Annual Scientific Conference. Program and Abstracts

    International Nuclear Information System (INIS)

    Barborica, Andrei; Bulinski, Mircea

    2003-01-01

    As a consequence of its long experience in both education and research, the Physics Department of the University of Bucharest offers high-standard undergraduate and graduate programs of higher education in the physical sciences. The long-term strategy adopted by the faculty was focused on developing scientific research in modern topics of theoretical, experimental and applied physics as well as in inter-disciplinary fields such as biophysics, medical physics, physics and protection of the environment, and physics-computer science. Following this strategy the Faculty of Physics has diversified the research activity, developing new research laboratories and encouraging the academic community to approach modern and competitive research projects. Every year the Physics Department of the University of Bucharest organizes the 'Annual Scientific Conference' to present the most interesting scientific results obtained within the department. This scientific session is also open to interested physics researchers from other institutes and universities in the country. This scientific event represents a recognition and a continuation of the prestigious tradition of physics research performed within the University. The scientific research in the Physics Department is performed in groups and research centers, with final-year undergraduate and graduate students involved to a large extent in the research work. There are 5 research centers with the status of Center of Excellence in research. The long-term strategy adopted by the faculty was focused on developing the scientific research in modern topics of theoretical, experimental and applied physics, as well as in inter-disciplinary fields such as biophysics, medical physics, physics and protection of the environment, physics - computer science. Following this strategy, the Faculty of Physics has diversified the research activity, developing new research laboratories and encouraging the academic community to perform modern and

  5. 2002 Annual Scientific Conference. Program and Abstracts

    International Nuclear Information System (INIS)

    Barborica, Andrei; Bulinski, Mircea; Dinca, Mihai P.

    2002-01-01

    As a consequence of its long experience in both education and research, the Physics Department of the University of Bucharest offers high-standard undergraduate and graduate programs of higher education in the physical sciences. The long-term strategy adopted by the faculty was focused on developing scientific research in modern topics of theoretical, experimental and applied physics as well as in inter-disciplinary fields such as biophysics, medical physics, physics and protection of the environment, and physics-computer science. Following this strategy the Faculty of Physics has diversified the research activity, developing new research laboratories and encouraging the academic community to approach modern and competitive research projects. Every year the Physics Department of the University of Bucharest organizes the 'Annual Scientific Conference' to present the most interesting scientific results obtained within the department. This scientific session is also open to interested physics researchers from other institutes and universities in the country. This scientific event represents a recognition and a continuation of the prestigious tradition of physics research performed within the University. The scientific research in the Physics Department is performed in groups and research centers, with final-year undergraduate and graduate students involved to a large extent in the research work. There are 5 research centers with the status of Center of Excellence in research. The long-term strategy adopted by the faculty was focused on developing the scientific research in modern topics of theoretical, experimental and applied physics, as well as in inter-disciplinary fields such as biophysics, medical physics, physics and protection of the environment, physics - computer science. Following this strategy, the Faculty of Physics has diversified the research activity, developing new research laboratories and encouraging the academic community to perform modern and

  6. The InSAR Scientific Computing Environment (ISCE): A Python Framework for Earth Science

    Science.gov (United States)

    Rosen, P. A.; Gurrola, E. M.; Agram, P. S.; Sacco, G. F.; Lavalle, M.

    2015-12-01

    The InSAR Scientific Computing Environment (ISCE, funded by NASA ESTO) provides a modern computing framework for geodetic image processing of InSAR data from a diverse array of radar satellites and aircraft. ISCE is both a modular, flexible, and extensible framework for building software components and applications as well as a toolbox of applications for processing raw or focused InSAR and Polarimetric InSAR data. The ISCE framework contains object-oriented Python components layered to construct Python InSAR components that manage legacy Fortran/C InSAR programs. Components are independently configurable in a layered manner to provide maximum control. Polymorphism is used to define a workflow in terms of abstract facilities for each processing step that are realized by specific components at run-time. This enables a single workflow to work on either raw or focused data from all sensors. ISCE can serve as the core of a production center to process Level-0 radar data to Level-3 products, but is amenable to interactive processing approaches that allow scientists to experiment with data to explore new ways of doing science with InSAR data. The NASA-ISRO SAR (NISAR) Mission will deliver data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystems. ISCE is planned as the foundational element in processing NISAR data, enabling a new class of analyses that take greater advantage of the long time and large spatial scales of these new data. NISAR will be but one mission in a constellation of radar satellites in the future delivering such data. ISCE currently supports all publicly available strip map mode space-borne SAR data since ERS and is expected to include support for upcoming missions. ISCE has been incorporated into two prototype cloud-based systems that have demonstrated its elasticity in addressing larger data processing problems in a "production" context and its ability to be

  7. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
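
    For reference, the all-pairs distance computation that the article uses as its running example can be written in a few lines of vectorized NumPy on the CPU; this baseline (our sketch, not the authors' code) is the kind of kernel that a CUDA implementation parallelizes across GPU threads.

        # CPU baseline for the all-pairs Euclidean distance computation; a CUDA
        # version would assign one (i, j) pair, or one tile of pairs, per thread.
        import numpy as np

        def all_pairs_distances(X):
            """X has shape (n_instances, n_features); returns an (n, n) distance matrix."""
            sq_norms = np.sum(X * X, axis=1)                         # ||x_i||^2 for each row
            sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (X @ X.T)
            np.maximum(sq_dists, 0.0, out=sq_dists)                  # clamp round-off negatives
            return np.sqrt(sq_dists)

        X = np.random.default_rng(0).normal(size=(1000, 32))
        D = all_pairs_distances(X)
        print(D.shape, float(D[0, 1]))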

  8. Opportunities for discovery: Theory and computation in Basic Energy Sciences

    Energy Technology Data Exchange (ETDEWEB)

    Harmon, Bruce; Kirby, Kate; McCurdy, C. William

    2005-01-11

    New scientific frontiers, recent advances in theory, and rapid increases in computational capabilities have created compelling opportunities for theory and computation to advance the scientific mission of the Office of Basic Energy Sciences (BES). The prospects for success in the experimental programs of BES will be enhanced by pursuing these opportunities. This report makes the case for an expanded research program in theory and computation in BES. The Subcommittee on Theory and Computation of the Basic Energy Sciences Advisory Committee was charged with identifying current and emerging challenges and opportunities for theoretical research within the scientific mission of BES, paying particular attention to how computing will be employed to enable that research. A primary purpose of the Subcommittee was to identify those investments that are necessary to ensure that theoretical research will have maximum impact in the areas of importance to BES, and to assure that BES researchers will be able to exploit the entire spectrum of computational tools, including leadership-class computing facilities. The Subcommittee's Findings and Recommendations are presented in Section VII of this report.

  9. ANL statement of site strategy for computing workstations

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R. (ed.); Boxberger, L.M.; Amiot, L.W.; Bretscher, M.E.; Engert, D.E.; Moszur, F.M.; Mueller, C.J.; O' Brien, D.E.; Schlesselman, C.G.; Troyer, L.J.

    1991-11-01

    This Statement of Site Strategy describes the procedure at Argonne National Laboratory for defining, acquiring, using, and evaluating scientific and office workstations and related equipment and software in accord with DOE Order 1360.1A (5-30-85), and Laboratory policy. It is Laboratory policy to promote the installation and use of computing workstations to improve productivity and communications for both programmatic and support personnel, to ensure that computing workstations acquisitions meet the expressed need in a cost-effective manner, and to ensure that acquisitions of computing workstations are in accord with Laboratory and DOE policies. The overall computing site strategy at ANL is to develop a hierarchy of integrated computing system resources to address the current and future computing needs of the laboratory. The major system components of this hierarchical strategy are: Supercomputers, Parallel computers, Centralized general purpose computers, Distributed multipurpose minicomputers, and Computing workstations and office automation support systems. Computing workstations include personal computers, scientific and engineering workstations, computer terminals, microcomputers, word processing and office automation electronic workstations, and associated software and peripheral devices costing less than $25,000 per item.

  10. Operation ARA: A Computerized Learning Game that Teaches Critical Thinking and Scientific Reasoning

    Science.gov (United States)

    Halpern, Diane F.; Millis, Keith; Graesser, Arthur C.; Butler, Heather; Forsyth, Carol; Cai, Zhiqiang

    2012-01-01

    Operation ARA (Acquiring Research Acumen) is a computerized learning game that teaches critical thinking and scientific reasoning. It is a valuable learning tool that utilizes principles from the science of learning and serious computer games. Students learn the skills of scientific reasoning by engaging in interactive dialogs with avatars. They…

  11. ZIVIS: A City Computing Platform Based on Volunteer Computing

    International Nuclear Information System (INIS)

    Antoli, B.; Castejon, F.; Giner, A.; Losilla, G.; Reynolds, J. M.; Rivero, A.; Sangiao, S.; Serrano, F.; Tarancon, A.; Valles, R.; Velasco, J. L.

    2007-01-01

    Volunteer computing has emerged as a new form of distributed computing. Unlike other computing paradigms like Grids, which tend to be based on complex architectures, volunteer computing has demonstrated a great ability to integrate dispersed, heterogeneous computing resources with ease. This article presents ZIVIS, a project which aims to deploy a city-wide computing platform in Zaragoza (Spain). ZIVIS is based on BOINC (Berkeley Open Infrastructure for Network Computing), a popular open source framework to deploy volunteer and desktop grid computing systems. A scientific code that simulates the trajectories of particles moving inside a stellarator fusion device has been chosen as the pilot application of the project. In this paper we describe the approach followed to port the code to the BOINC framework as well as some novel techniques, based on standard Grid protocols, we have used to access the output data present in the BOINC server from a remote visualizer. (Author)

  12. Numerical methods in matrix computations

    CERN Document Server

    Björck, Åke

    2015-01-01

    Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This makes the book useful also as a reference and guide to further study and research work. Åke Björck is a professor emeritus at the Department of Mathematics, Linköping University. He is a Fellow of the Society for Industrial and Applied Mathematics.
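
    To make the book's scope concrete, the short NumPy illustration below (our own, not taken from the book) touches two of the core problem classes it treats: an overdetermined least-squares fit and a symmetric eigenvalue problem.

        # Two of the problem classes covered by the book, in a few lines of NumPy.
        import numpy as np

        rng = np.random.default_rng(1)

        # Least squares: find c minimizing ||A c - y||_2 for a tall matrix A.
        A = rng.normal(size=(50, 3))
        c_true = np.array([1.5, -2.0, 0.5])
        y = A @ c_true + 0.01 * rng.normal(size=50)
        c_fit, *_ = np.linalg.lstsq(A, y, rcond=None)
        print("fitted coefficients:", np.round(c_fit, 3))

        # Symmetric eigenvalue problem: eigh returns eigenvalues in ascending order.
        S = rng.normal(size=(4, 4))
        S = S + S.T                                    # symmetrize
        eigenvalues, eigenvectors = np.linalg.eigh(S)
        print("eigenvalues:", np.round(eigenvalues, 3))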

  13. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation

  14. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  15. The Erasmus Computing Grid - Building a Super-Computer for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2007-01-01

    Today, advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting

  16. TME (Task Mapping Editor): tool for executing distributed parallel computing. TME user's manual

    International Nuclear Information System (INIS)

    Takemiya, Hiroshi; Yamagishi, Nobuhiro; Imamura, Toshiyuki

    2000-03-01

    At the Center for Promotion of Computational Science and Engineering, a software environment, PPExe, has been developed to support scientific computing on a parallel computer cluster (distributed parallel scientific computing). TME (Task Mapping Editor) is one of the components of PPExe and provides a visual programming environment for distributed parallel scientific computing. Users can specify data dependence among tasks (programs) visually as a data flow diagram and map these tasks onto computers interactively through the GUI of TME. The specified tasks are processed by other components of PPExe such as the Meta-scheduler, RIM (Resource Information Monitor), and EMS (Execution Management System) according to the execution order of these tasks determined by TME. In this report, we describe the usage of TME. (author)
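
    The data-flow idea behind TME, in which tasks are declared with their data dependences and ordered before being dispatched to machines, can be sketched with a plain topological sort; the task names and dependence table below are hypothetical and are not TME's actual input format.

        # Hypothetical sketch of data-flow scheduling: tasks declare which tasks'
        # outputs they consume and are then ordered topologically before dispatch.
        from graphlib import TopologicalSorter    # standard library, Python 3.9+

        dependences = {                            # task -> tasks it depends on
            "mesh":      set(),
            "solve":     {"mesh"},
            "postproc":  {"solve"},
            "visualize": {"postproc", "mesh"},
        }

        order = list(TopologicalSorter(dependences).static_order())
        print("execution order:", order)           # ['mesh', 'solve', 'postproc', 'visualize']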

  17. Numerical Recipes in C++: The Art of Scientific Computing (2nd edn). Numerical Recipes Example Book (C++) (2nd edn). Numerical Recipes Multi-Language Code CD ROM with LINUX or UNIX Single-Screen License Revised Version

    International Nuclear Information System (INIS)

    Borcherds, P

    2003-01-01

    The two Numerical Recipes books are marvellous. The principal book, The Art of Scientific Computing, contains program listings for almost every conceivable requirement, and it also contains a well written discussion of the algorithms and the numerical methods involved. The Example Book provides a complete driving program, with helpful notes, for nearly all the routines in the principal book. The first edition of Numerical Recipes: The Art of Scientific Computing was published in 1986 in two versions, one with programs in Fortran, the other with programs in Pascal. There were subsequent versions with programs in BASIC and in C. The second, enlarged edition was published in 1992, again in two versions, one with programs in Fortran (NR(F)), the other with programs in C (NR(C)). In 1996 the authors produced Numerical Recipes in Fortran 90: The Art of Parallel Scientific Computing as a supplement, called Volume 2, with the original (Fortran) version referred to as Volume 1. Numerical Recipes in C++ (NR(C++)) is another version of the 1992 edition. The numerical recipes are also available on a CD ROM: if you want to use any of the recipes, I would strongly advise you to buy the CD ROM. The CD ROM contains the programs in all the languages. When the first edition was published I bought it, and have also bought copies of the other editions as they have appeared. Anyone involved in scientific computing ought to have a copy of at least one version of Numerical Recipes, and there also ought to be copies in every library. If you already have NR(F), should you buy the NR(C++) and, if not, which version should you buy? In the preface to Volume 2 of NR(F), the authors say 'C and C++ programmers have not been far from our minds as we have written this volume, and we think that you will find that time spent in absorbing its principal lessons will be amply repaid in the future as C and C++ eventually develop standard parallel extensions'. In the preface and introduction to NR

  18. 3rd International Conference on Computational Mathematics and Computational Geometry

    CERN Document Server

    Ravindran, Anton

    2016-01-01

    This volume presents original research contributed to the 3rd Annual International Conference on Computational Mathematics and Computational Geometry (CMCGS 2014), organized and administered by Global Science and Technology Forum (GSTF). Computational Mathematics and Computational Geometry are closely related subjects, but are often studied by separate communities and published in different venues. This volume is unique in its combination of these topics. After the conference, which took place in Singapore, selected contributions were chosen for this volume and peer-reviewed. The section on Computational Mathematics contains papers that are concerned with developing new and efficient numerical algorithms for mathematical sciences or scientific computing. They also cover analysis of such algorithms to assess accuracy and reliability. The parts of this project that are related to Computational Geometry aim to develop effective and efficient algorithms for geometrical applications such as representation and computati...

  19. Development of the computer network of IFIN-HH

    International Nuclear Information System (INIS)

    Danet, A.; Mirica, M.; Constantinescu, S.

    1998-01-01

    The general computer network of Horia Hulubei National Institute for Physics and Nuclear Engineering (IFIN-HH), as part of RNC (Romanian National Computer Network for scientific research and technological development), offers the Romanian physics research community an efficient and cost-effective infrastructure to communicate and collaborate with fellow researchers abroad, and to collect and exchange the most up-to-date information in their research area. RNC is the national project co-ordinated and established by the Ministry of Research and Technology, targeting the following main objectives: - setting up a technical and organizational infrastructure meant to provide national and international electronic services for the Romanian scientific research community; - providing a rapid and competitive tool for the exchange of information within the R&D community; - using the scientific and technical databases available in the country and offered by the national networks of other countries through international networks; - providing support for information, documentation, and scientific and technical co-operation. The guiding principle in elaborating the project of the general computer network of IFIN-HH was to implement an open system based on OSI standards, without technical barriers to communication between different communities using different computing hardware and software. The major objectives achieved in 1997 in the direction of developing the general computer network of IFIN-HH (over 250 computers connected) were: - connecting all the existing and newly installed computer equipment and providing adequate connectivity; - providing the usual Internet services: e-mail, ftp, telnet, finger, gopher; - providing access to World Wide Web resources; - providing on-line statistics of the IP traffic (input and output) of each node of the domain computer network; - improving the performance of the connection with the central RNC node. (authors)

  20. 40 CFR 98.70 - Definition of source category.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Definition of source category. 98.70... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Ammonia Manufacturing § 98.70 Definition of source category...-based feedstock produced via steam reforming of a hydrocarbon. (b) Ammonia manufacturing processes in...

  1. 1st October 2010 - Chinese Vice President of the Academy of Sciences signing the guest book and exchanging gifts with CERN Director for Research and Scientific Computing S. Bertolucci, witnessed by Adviser R. Voss

    CERN Multimedia

    Maximilien Brice

    2010-01-01

    1st October 2010 - Chinese Vice President of the Academy of Sciences signing the guest book and exchanging gifts with CERN Director for Research and Scientific Computing S. Bertolucci, witnessed by Adviser R. Voss

  2. High-Precision Computation and Mathematical Physics

    International Nuclear Information System (INIS)

    Bailey, David H.; Borwein, Jonathan M.

    2008-01-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
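
    To give a sense of what working beyond IEEE 64-bit precision looks like in practice, here is a small illustration using the mpmath Python package (our own example; mpmath is assumed to be available and is not necessarily one of the packages surveyed above).

        # Working at 50 significant decimal digits with mpmath.
        from mpmath import mp, mpf, sqrt

        mp.dps = 50                   # 50 decimal digits of working precision
        a = sqrt(mpf(2))
        print(a)                      # 1.4142135623730950488016887242096980785696718753769
        print(a * a - 2)              # residual near the 50-digit rounding level, not ~1e-16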

  3. National Laboratory for Advanced Scientific Visualization at UNAM - Mexico

    Science.gov (United States)

    Manea, Marina; Constantin Manea, Vlad; Varela, Alfredo

    2016-04-01

    In 2015, the National Autonomous University of Mexico (UNAM) joined the family of universities and research centers where advanced visualization and computing play a key role in promoting and advancing missions in research, education, and community outreach, as well as business-oriented consulting. This initiative provides access to a great variety of advanced hardware and software resources and offers a range of consulting services that spans a variety of areas related to scientific visualization, among which are neuroanatomy, embryonic development, genome-related studies, geosciences, geography, and physics- and mathematics-related disciplines. The National Laboratory for Advanced Scientific Visualization delivers services through three main infrastructure environments: the 3D fully immersive display system Cave, the high-resolution parallel visualization system Powerwall, and the high-resolution spherical display Earth Simulator. The entire visualization infrastructure is interconnected to a high-performance computing cluster (HPCC) called ADA, in honor of Ada Lovelace, considered to be the first computer programmer. The Cave is an extra-large, 3.6 m wide room with images projected on the front, left, and right walls as well as the floor. Specialized crystal eyes LCD-shutter glasses provide strong stereo depth perception, and a variety of tracking devices allow software to track the position of a user's hand, head and wand. The Powerwall is designed to bring large amounts of complex data together through parallel computing for team interaction and collaboration. This system is composed of 24 (6x4) high-resolution ultra-thin (2 mm) bezel monitors connected to a high-performance GPU cluster. The Earth Simulator is a large (60") high-resolution spherical display used for global-scale visualization of geophysical, meteorological, climate and ecology data. The HPCC ADA is a 1000+ computing-core system, which offers parallel computing resources to applications that require

  4. 7 CFR 98.1 - General.

    Science.gov (United States)

    2010-01-01

    ..., READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98.1 General. Analytical services of meat and meat food products are performed for fat, moisture, salt, protein...

  5. Singularity hypotheses a scientific and philosophical assessment

    CERN Document Server

    Moor, James; Søraker, Johnny; Steinhart, Eric

    2012-01-01

    Singularity Hypotheses: A Scientific and Philosophical Assessment offers authoritative, jargon-free essays and critical commentaries on accelerating technological progress and the notion of technological singularity. It focuses on conjectures about the intelligence explosion, transhumanism, and whole brain emulation. Recent years have seen a plethora of forecasts about the profound, disruptive impact that is likely to result from further progress in these areas. Many commentators however doubt the scientific rigor of these forecasts, rejecting them as speculative and unfounded. We therefore invited prominent computer scientists, physicists, philosophers, biologists, economists and other thinkers to assess the singularity hypotheses. Their contributions go beyond speculation, providing deep insights into the main issues and a balanced picture of the debate.

  6. DIRECTIONS OF USING ELECTRONIC MEANS IN TEACHING SCIENTIFIC STYLE OF SPEECH

    Directory of Open Access Journals (Sweden)

    Л Б Белоглазова

    2015-12-01

    Full Text Available The article notes that modern human cognitive activity is related to the implementation of information processes by means of information and communication technologies. The author identifies three main areas in which electronic means are used for teaching the scientific style of speech: (1) working with electronic textbooks; (2) searching the scientific literature in electronic libraries; and (3) using computer software for content analysis of scientific texts. An analysis of these areas is carried out. It is stated that introducing electronic means into the educational process should be accompanied by the creation of specialized classrooms provided with modern equipment.

  7. An Overview of the Computational Physics and Methods Group at Los Alamos National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Randal Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-22

    CCS Division was formed to strengthen the visibility and impact of computer science and computational physics research on strategic directions for the Laboratory. Both computer science and computational science are now central to scientific discovery and innovation. They have become indispensable tools for all other scientific missions at the Laboratory. CCS Division forms a bridge between external partners and Laboratory programs, bringing new ideas and technologies to bear on today’s important problems and attracting high-quality technical staff members to the Laboratory. The Computational Physics and Methods Group CCS-2 conducts methods research and develops scientific software aimed at the latest and emerging HPC systems.

  8. Evaluating the Efficacy of the Cloud for Cluster Computation

    Science.gov (United States)

    Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom

    2012-01-01

    Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Compute Cloud (EC2) that promises improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 µs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29-instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.
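
    The efficiency figure quoted above pins down the cluster's theoretical peak; the short calculation below simply restates that arithmetic (about 2.9 TFLOPS in total, or roughly 12 GFLOPS per core, stated only as an implication of the reported numbers).

        # Back-of-the-envelope restatement of the HPL numbers reported above.
        achieved_tflops = 2.0      # measured HPL performance
        efficiency = 0.70          # reported fraction of theoretical peak
        cores = 240

        theoretical_peak = achieved_tflops / efficiency        # ~2.86 TFLOPS
        per_core_gflops = theoretical_peak * 1000.0 / cores    # ~11.9 GFLOPS per core
        print(f"theoretical peak ~ {theoretical_peak:.2f} TFLOPS")
        print(f"per-core peak   ~ {per_core_gflops:.1f} GFLOPS")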

  9. COMPUTATIONAL SCIENCE CENTER

    International Nuclear Information System (INIS)

    DAVENPORT, J.

    2006-01-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together

  10. Trends in scientific computing applied to petroleum exploration and production

    International Nuclear Information System (INIS)

    Guevara, Saul E; Piedrahita, Carlos E; Arroyo, Elkin R; Soto Rodolfo

    2002-01-01

    Current trends of computational tools in the upstream of the petroleum industry ore presented herein several results and images obtained through commercial programs and through in-house software developments illustrate the topics discussed. They include several types of problems and programming paradigms. Emphasis is made on the future of parallel processing through the use of affordable, open systems, as the Linux system. This kind of technologies will likely make possible new research and industry applications, since quite advanced computational resources will be available to many people working in the area

  11. Activities of the Research Institute for Advanced Computer Science

    Science.gov (United States)

    Oliger, Joseph

    1994-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. Research at RIACS is currently being done in the following areas: (1) parallel computing; (2) advanced methods for scientific computing; (3) high performance networks; and (4) learning systems. RIACS technical reports are usually preprints of manuscripts that have been submitted to research journals or conference proceedings. A list of these reports for the period January 1, 1994 through December 31, 1994 is in the Reports and Abstracts section of this report.

  12. 40 CFR 98.417 - Records that must be retained.

    Science.gov (United States)

    2010-07-01

    ... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Industrial Greenhouse Gases § 98.417 Records that... reported under § 98.416. (2) Records documenting the initial and periodic calibration of the gas...

  13. Essential Means for Urban Computing: Specification of Web-Based Computing Platforms for Urban Planning, a Hitchhiker’s Guide

    OpenAIRE

    Pirouz Nourian; Carlos Martinez-Ortiz; Ken Arroyo Ohori

    2018-01-01

    This article provides an overview of the specifications of web-based computing platforms for urban data analytics and computational urban planning practice. There are currently a variety of tools and platforms that can be used in urban computing practices, including scientific computing languages, interactive web languages, data sharing platforms and still many desktop computing environments, e.g., GIS software applications. We have reviewed a list of technologies considering their potential ...

  14. The Computational Sensorimotor Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Computational Sensorimotor Systems Lab focuses on the exploration, analysis, modeling and implementation of biological sensorimotor systems for both scientific...

  15. 9 CFR 98.36 - Animal semen from Canada.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Animal semen from Canada. 98.36... CERTAIN ANIMAL EMBRYOS AND ANIMAL SEMEN Certain Animal Semen § 98.36 Animal semen from Canada. (a) General importation requirements for animal semen from Canada. If the product is . . . Then . . . (1) Equine semen...

  16. Classical and quantum computing with C++ and Java simulations

    CERN Document Server

    Hardy, Y

    2001-01-01

    Classical and Quantum computing provides a self-contained, systematic and comprehensive introduction to all the subjects and techniques important in scientific computing. The style and presentation are readily accessible to undergraduates and graduates. A large number of examples, accompanied by complete C++ and Java code wherever possible, cover every topic. Features and benefits: - Comprehensive coverage of the theory with many examples - Topics in classical computing include boolean algebra, gates, circuits, latches, error detection and correction, neural networks, Turing machines, cryptography, genetic algorithms - For the first time, genetic expression programming is presented in a textbook - Topics in quantum computing include mathematical foundations, quantum algorithms, quantum information theory, hardware used in quantum computing This book serves as a textbook for courses in scientific computing and is also very suitable for self-study. Students, professionals and practitioners in computer...

  17. Disciplines, models, and computers: the path to computational quantum chemistry.

    Science.gov (United States)

    Lenhard, Johannes

    2014-12-01

    Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computa- tional quantum chemistry became organized in a market-like fashion and this market is much bigger than the number of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990.

  18. Integrated computer control system CORBA-based simulator FY98 LDRD project final summary report

    International Nuclear Information System (INIS)

    Bryant, R M; Holloway, F W; Van Arsdall, P J.

    1999-01-01

    The CORBA-based Simulator was a Laboratory Directed Research and Development (LDRD) project that applied simulation techniques to explore critical questions about distributed control architecture. The simulator project used a three-prong approach comprised of a study of object-oriented distribution tools, computer network modeling, and simulation of key control system scenarios. This summary report highlights the findings of the team and provides the architectural context of the study. For the last several years LLNL has been developing the Integrated Computer Control System (ICCS), which is an abstract object-oriented software framework for constructing distributed systems. The framework is capable of implementing large event-driven control systems for mission-critical facilities such as the National Ignition Facility (NIF). Tools developed in this project were applied to the NIF example architecture in order to gain experience with a complex system and derive immediate benefits from this LDRD. The ICCS integrates data acquisition and control hardware with a supervisory system, and reduces the amount of new coding and testing necessary by providing prebuilt components that can be reused and extended to accommodate specific additional requirements. The framework integrates control point hardware with a supervisory system by providing the services needed for distributed control such as database persistence, system start-up and configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. The design is interoperable among computers of different kinds and provides plug-in software connections by leveraging a common object request brokering architecture (CORBA) to transparently distribute software objects across the network of computers. Because object broker distribution applied to control systems is relatively new and its inherent performance is roughly threefold less than traditional point

  19. Computer-aided software understanding systems to enhance confidence of scientific codes

    International Nuclear Information System (INIS)

    Sheng, G.; Oeren, T.I.

    1991-01-01

    A unique characteristic of nuclear waste disposal is the very long time span over which the combined engineered and natural containment system must remain effective: hundreds of thousands of years. Since there is no precedent in human history for such an endeavour, simulation with the use of computers is the only means we have of forecasting possible future outcomes quantitatively. The need for reliable models and software to make such forecasts so far into the future is obvious. One of the critical elements necessary to ensure reliability is the degree of reviewability of the computer program. Among others, there are two very important reasons for this. Firstly, if there is to be any chance at all of validating the conceptual models as implemented by the computer code, peer reviewers must be able to see and understand what the program is doing. It is all but impossible to achieve this understanding by just looking at the code due to possible unfamiliarity with the language and often due as well to the length and complexity of the code. Secondly, a thorough understanding of the code is also necessary to carry out code maintenance activities which include among others, error detection, error correction and code modification for purposes of enhancing its performance, functionality or to adapt it to a changed environment. The emerging concepts of computer-aided software understanding and reverse engineering can answer precisely these needs. This paper will discuss the role they can play in enhancing the confidence one has on computer codes and several examples will be provided. Finally a brief discussion of combining state-of-art forward engineering systems with reverse engineering systems will show how powerfully they can contribute to the overall quality assurance of a computer program. (13 refs., 7 figs.)

  20. Integrated computer network high-speed parallel interface

    International Nuclear Information System (INIS)

    Frank, R.B.

    1979-03-01

    As the number and variety of computers within Los Alamos Scientific Laboratory's Central Computer Facility grows, the need for a standard, high-speed intercomputer interface has become more apparent. This report details the development of a High-Speed Parallel Interface from conceptual through implementation stages to meet current and future needs for large-scle network computing within the Integrated Computer Network. 4 figures

  1. Preface (to: Advances in Computer Entertainment)

    NARCIS (Netherlands)

    Romão, Teresa; Nijholt, Antinus; Romão, Teresa; Reidsma, Dennis

    2012-01-01

    These are the proceedings of the 9th International Conference on Advances in Computer Entertainment ACE 2012). ACE has become the leading scientific forum for dissemination of cutting-edge research results in the area of entertainment computing. Interactive entertainment is one of the most vibrant

  2. Overview of the ANS [American Nuclear Society] mathematics and computation software standards

    International Nuclear Information System (INIS)

    Smetana, A.O.

    1991-01-01

    The Mathematics and Computations Division of the American Nuclear Society sponsors the ANS-10 Standards Subcommittee. This subcommittee, which is part of the ANS Standards Committee, currently maintains four ANSI/ANS software standards. These standards are: Recommended Programming Practices to Facilitate the Portability of Scientific Computer Programs, ANS-10.2; Guidelines for the Documentation of Computer Software, ANS-10.3; Guidelines for the Verification and Validation of Scientific and Engineering Computer Programs for the Nuclear Industry, ANS-10.4; and Guidelines for Accommodating User Needs in Computer Program Development, ANS-10.5. 5 refs

  3. Scientific activities 1977 and 1978

    International Nuclear Information System (INIS)

    1980-01-01

    The scientific activities and achievements of the Nuclear Research Center Democritus for the years 1977 and 1978 are presented in the form of a list of 79 projects giving title, objectives, commencement year, responsible of each project and the pertaining lists of publications. The 14 chapters of this work cover the activities of the main Divisions of the Democritus NRC: Exploration of Radioactive Minerals, Computer Center, Environmental Radioactivity, Chemistry, Physics, Biology, Soil Science, Electronics, Reactor, Health Physics, Radioisotopes, Technological Applications and Medical Service. (T.A.)

  4. The Erasmus Computing Grid – Building a Super-Computer for Free

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); A. Abuseiris (Anis); R.M. de Graaf (Rob); M. Lesnussa (Michael); F.G. Grosveld (Frank)

    2011-01-01

    textabstractToday advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting factor for

  5. The Julia programming language: the future of scientific computing

    Science.gov (United States)

    Gibson, John

    2017-11-01

    Julia is an innovative new open-source programming language for high-level, high-performance numerical computing. Julia combines the general-purpose breadth and extensibility of Python, the ease-of-use and numeric focus of Matlab, the speed of C and Fortran, and the metaprogramming power of Lisp. Julia uses type inference and just-in-time compilation to compile high-level user code to machine code on the fly. A rich set of numeric types and extensive numerical libraries are built-in. As a result, Julia is competitive with Matlab for interactive graphical exploration and with C and Fortran for high-performance computing. This talk interactively demonstrates Julia's numerical features and benchmarks Julia against C, C++, Fortran, Matlab, and Python on a spectral time-stepping algorithm for a 1d nonlinear partial differential equation. The Julia code is nearly as compact as Matlab and nearly as fast as Fortran. This material is based upon work supported by the National Science Foundation under Grant No. 1554149.

  6. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  7. Preliminary Study on Preservation of Scientific and Nuclear Equipment in Nuclear Malaysia

    International Nuclear Information System (INIS)

    Mohd Fauzi Haris; Siti Nurbahyah Hamdan; Saipo Bahari Abdul Ratan

    2016-01-01

    Scientific achievements and technological advances in today undeniably has become more sophisticated with the latest capabilities that have helped human life in either routine or scientific problem solving. But each time the power that created them, every tool, particularly equipment used for experimental and scientific tasks also face the risk of aging and damage either the damage is expected or unexpected. Circumstance or situation depends on the use and care of such a scientific tool. To ensure a scientific tool that can be used for a long shelf life, the appropriate care and maintenance should be concerned about. This paper discusses matters related to scientific equipment and Nuclear Malaysia Nuclear specializes in equipment that is connected to and controlled by a computer in terms of care, use and future planning to ensure that it can be used for an appropriate period of time. (author)

  8. Computational engineering

    CERN Document Server

    2014-01-01

    The book presents state-of-the-art works in computational engineering. Focus is on mathematical modeling, numerical simulation, experimental validation and visualization in engineering sciences. In particular, the following topics are presented: constitutive models and their implementation into finite element codes, numerical models in nonlinear elasto-dynamics including seismic excitations, multiphase models in structural engineering and multiscale models of materials systems, sensitivity and reliability analysis of engineering structures, the application of scientific computing in urban water management and hydraulic engineering, and the application of genetic algorithms for the registration of laser scanner point clouds.

  9. CERN scientific book fair 2010

    CERN Document Server

    CERN Library

    2010-01-01

    The CERN Bookshop and CERN Library invite you to attend the 2010 CERN Book Fair, a two-day scientific event offering you the opportunity to meet key publishers and to browse and purchase books at significant discounts.   Some twelve companies will be present and will bring with them a selection of titles in physics, technology, mathematics, engineering, computing and popular science. You are welcome to come along and meet the publishers’ representatives or simply have a look to the books on offer. The Fair will take place in the Main Building (bldg. 500) on the ground floor near the Restaurant 1 on Tuesday 7th and Wednesday 8th September. Participating or represented publishers include: Cambridge University Press, EPFL Press – PPUR, Oxford University Press, Imperial College Press, McGraw-Hill, Oxford University Press, Pearson Education, Princeton University Press, Springer, Taylor and Francis, Wiley, World Scientific. Fair opening times: Tuesday 7 September 9:00 &ndash...

  10. Computational Methods and Function Theory

    CERN Document Server

    Saff, Edward; Salinas, Luis; Varga, Richard

    1990-01-01

    The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and more applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey and a collection of problems - cover a broad range of such problems.

  11. Grid computing faces IT industry test

    CERN Multimedia

    Magno, L

    2003-01-01

    Software company Oracle Corp. unveiled it's Oracle 10g grid computing platform at the annual OracleWorld user convention in San Francisco. It gave concrete examples of how grid computing can be a viable option outside the scientific community where the concept was born (1 page).

  12. Using cloud-computing applications to support collaborative scientific inquiry: Examining pre-service teachers’ perceived barriers towards integration / Utilisation d'applications infonuagiques pour appuyer la recherche scientifique collaborative

    OpenAIRE

    Joel Donna; Brant G Miller

    2013-01-01

    Technology plays a crucial role in facilitating collaboration within the scientific community. Cloud-computing applications can be used to model such collaboration and support inquiry within the secondary science classroom. Little is known about pre-service teachers’ beliefs related to the envisioned use of this technology in their teaching. These beliefs may influence future integration. This study finds several first-order barriers, such as perceptions that these tools would take too much t...

  13. Scientific Challenges for Understanding the Quantum Universe

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-16

    A workshop titled "Scientific Challenges for Understanding the Quantum Universe" was held December 9-11, 2008, at the Kavli Institute for Particle Astrophysics and Cosmology at the Stanford Linear Accelerator Center-National Accelerator Laboratory. The primary purpose of the meeting was to examine how computing at the extreme scale can contribute to meeting forefront scientific challenges in particle physics, particle astrophysics and cosmology. The workshop was organized around five research areas with associated panels. Three of these, "High Energy Theoretical Physics," "Accelerator Simulation," and "Experimental Particle Physics," addressed research of the Office of High Energy Physics’ Energy and Intensity Frontiers, while the"Cosmology and Astrophysics Simulation" and "Astrophysics Data Handling, Archiving, and Mining" panels were associated with the Cosmic Frontier.

  14. Mobile Computing: The Emerging Technology, Sensing, Challenges and Applications

    International Nuclear Information System (INIS)

    Bezboruah, T.

    2010-12-01

    The mobile computing is a computing system in which a computer and all necessary accessories like files and software are taken out to the field. It is a system of computing through which it is being able to use a computing device even when someone being mobile and therefore changing location. The portability is one of the important aspects of mobile computing. The mobile phones are being used to gather scientific data from remote and isolated places that could not be possible to retrieve by other means. The scientists are initiating to use mobile devices and web-based applications to systematically explore interesting scientific aspects of their surroundings, ranging from climate change, environmental pollution to earthquake monitoring. This mobile revolution enables new ideas and innovations to spread out more quickly and efficiently. Here we will discuss in brief about the mobile computing technology, its sensing, challenges and the applications. (author)

  15. Scientific Computing: A New Way of Looking at Mathematics

    Indian Academy of Sciences (India)

    Amiya Kumar Pani

    repose faith on the numbers being crunched. To design and develop reliable and efficient algorithms for numerical solutions to PDEs. By reliability, we mean that for a given tolerance and measurement, the computed solution stays near to the exact unknown solution within the prescribed tolerance with respect to the given.

  16. Scientific instruments, scientific progress and the cyclotron

    International Nuclear Information System (INIS)

    Baird, David; Faust, Thomas

    1990-01-01

    Philosophers speak of science in terms of theory and experiment, yet when they speak of the progress of scientific knowledge they speak in terms of theory alone. In this article it is claimed that scientific knowledge consists of, among other things, scientific instruments and instrumental techniques and not simply of some kind of justified beliefs. It is argued that one aspect of scientific progress can be characterized relatively straightforwardly - the accumulation of new scientific instruments. The development of the cyclotron is taken to illustrate this point. Eight different activities which promoted the successful completion of the cyclotron are recognised. The importance is in the machine rather than the experiments which could be run on it and the focus is on how the cyclotron came into being, not how it was subsequently used. The completed instrument is seen as a useful unit of scientific progress in its own right. (UK)

  17. Vdebug: debugging tool for parallel scientific programs. Design report on vdebug

    International Nuclear Information System (INIS)

    Matsuda, Katsuyuki; Takemiya, Hiroshi

    2000-02-01

    We report on a debugging tool called vdebug which supports debugging work for parallel scientific simulation programs. It is difficult to debug scientific programs with an existing debugger, because the volume of data generated by the programs is too large for users to check data in characters. Usually, the existing debugger shows data values in characters. To alleviate it, we have developed vdebug which enables to check the validity of large amounts of data by showing these data values visually. Although targets of vdebug have been restricted to sequential programs, we have made it applicable to parallel programs by realizing the function of merging and visualizing data distributed on programs on each computer node. Now, vdebug works on seven kinds of parallel computers. In this report, we describe the design of vdebug. (author)

  18. Using Computer Simulations for Promoting Model-based Reasoning. Epistemological and Educational Dimensions

    Science.gov (United States)

    Develaki, Maria

    2017-11-01

    Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.

  19. The StratusLab cloud distribution: Use-cases and support for scientific applications

    Science.gov (United States)

    Floros, E.

    2012-04-01

    The StratusLab project is integrating an open cloud software distribution that enables organizations to setup and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, Claudia service manager and SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely Computing (life-cycle management of virtual machines), Storage, Appliance management and Networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project and for this reason the initial priority has been the support for the deployment and operation of fully virtualized production-level grid sites; a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-european grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. Towards this direction, we have developed and currently provide support for setting up general purpose computing solutions like Hadoop, MPI and Torque clusters. For what concerns scientific applications the project is collaborating closely with the Bioinformatics community in order to prepare VM appliances and deploy optimized services for bioinformatics applications. In a similar manner additional scientific disciplines like Earth Science can take

  20. Computer Hardware, Advanced Mathematics and Model Physics pilot project final report

    International Nuclear Information System (INIS)

    1992-05-01

    The Computer Hardware, Advanced Mathematics and Model Physics (CHAMMP) Program was launched in January, 1990. A principal objective of the program has been to utilize the emerging capabilities of massively parallel scientific computers in the challenge of regional scale predictions of decade-to-century climate change. CHAMMP has already demonstrated the feasibility of achieving a 10,000 fold increase in computational throughput for climate modeling in this decade. What we have also recognized, however, is the need for new algorithms and computer software to capitalize on the radically new computing architectures. This report describes the pilot CHAMMP projects at the DOE National Laboratories and the National Center for Atmospheric Research (NCAR). The pilot projects were selected to identify the principal challenges to CHAMMP and to entrain new scientific computing expertise. The success of some of these projects has aided in the definition of the CHAMMP scientific plan. Many of the papers in this report have been or will be submitted for publication in the open literature. Readers are urged to consult with the authors directly for questions or comments about their papers