WorldWideScience

Sample records for high computational requirements

  1. Biomedical Requirements for High Productivity Computing Systems

    Science.gov (United States)

    2005-04-01

    Virtually all high-level programming is now done in Python, with numerically intensive operations performed by embedded C++ libraries. While Python is not currently used directly for numerically intensive work, the availability of a higher performance Python solution, i.e. a Python compiler or a better JIT, would be highly desirable.
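
    A minimal sketch of the pattern this record describes, high-level Python driving a numerically intensive kernel compiled from C++, is shown below. The shared library libkernels.so and its exported dot routine are hypothetical stand-ins, not anything named in the record.

        # Hypothetical C++ kernel library loaded into Python via ctypes.
        import ctypes
        import numpy as np

        lib = ctypes.CDLL("./libkernels.so")          # assumed to exist alongside this script
        lib.dot.restype = ctypes.c_double
        lib.dot.argtypes = [ctypes.POINTER(ctypes.c_double),
                            ctypes.POINTER(ctypes.c_double),
                            ctypes.c_size_t]

        def fast_dot(a, b):
            """Dispatch the numerically intensive inner loop to the compiled kernel."""
            a = np.ascontiguousarray(a, dtype=np.float64)
            b = np.ascontiguousarray(b, dtype=np.float64)
            dbl_ptr = ctypes.POINTER(ctypes.c_double)
            return lib.dot(a.ctypes.data_as(dbl_ptr), b.ctypes.data_as(dbl_ptr), a.size)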

  2. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.

  3. Computer Science in High School Graduation Requirements. ECS Education Trends

    Science.gov (United States)

    Zinth, Jennifer Dounay

    2015-01-01

    Computer science and coding skills are widely recognized as a valuable asset in the current and projected job market. The Bureau of Labor Statistics projects 37.5 percent growth from 2012 to 2022 in the "computer systems design and related services" industry--from 1,620,300 jobs in 2012 to an estimated 2,229,000 jobs in 2022. Yet some…

  4. Computer Science in High School Graduation Requirements. ECS Education Trends (Updated)

    Science.gov (United States)

    Zinth, Jennifer

    2016-01-01

    Allowing high school students to fulfill a math or science high school graduation requirement via a computer science credit may encourage more students to pursue computer science coursework. This Education Trends report is an update to the original report released in April 2015 and explores state policies that allow or require districts to apply…

  5. High Performance Computing and Storage Requirements for Nuclear Physics: Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)]; Wasserman, Harvey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)]

    2014-04-30

    In April 2014, NERSC, ASCR, and the DOE Office of Nuclear Physics (NP) held a review to characterize high performance computing (HPC) and storage requirements for NP research through 2017. This review is the 12th in a series of reviews held by NERSC and Office of Science program offices that began in 2009. It is the second for NP, and the final in the second round of reviews that covered the six Office of Science program offices. This report is the result of that review.

  6. High Performance Computing and Storage Requirements for Nuclear Physics: Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Wasserman, Harvey

    2015-01-20

    In April 2014, NERSC, ASCR, and the DOE Office of Nuclear Physics (NP) held a review to characterize high performance computing (HPC) and storage requirements for NP research through 2017. This review is the 12th in a series of reviews held by NERSC and Office of Science program offices that began in 2009. It is the second for NP, and the final in the second round of reviews that covered the six Office of Science program offices. This report is the result of that review.

  7. Computing requirements for high energy physics experiments at the LHC collider

    CERN Document Server

    Witek, Mariusz

    2002-01-01

    In this article the requirements for the future experiments of elementary particle physics are discussed. The nature of physics phenomena expected at the LHC collider at CERN leads to an unprecedented scale of the computing infrastructure for the data storage and analysis. The possible solution is based on the distributed computing model, and is presented within the context of the global unification of the computer resources as proposed by the GRID projects. (7 refs).

  8. High Performance Computing and Storage Requirements for Biological and Environmental Research Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)]; Wasserman, Harvey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)]

    2013-05-01

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In addition to large-scale computing and storage resources, NERSC provides support and expertise that help scientists make efficient use of its systems. The latest review revealed several key requirements, in addition to achieving its goal of characterizing BER computing and storage needs.

  9. Compact Differential Evolution Light: High Performance Despite Limited Memory Requirement and Modest Computational Overhead

    Institute of Scientific and Technical Information of China (English)

    Giovanni Iacca; Fabio Caraffini; Ferrante Neri

    2012-01-01

    Compact algorithms are Estimation of Distribution Algorithms which mimic the behavior of population-based algorithms by means of a probabilistic representation of the population of candidate solutions. These algorithms behave similarly to population-based algorithms but require much less memory. This feature is crucially important in some engineering applications, especially in robotics. A high performance compact algorithm is the compact Differential Evolution (cDE) algorithm. This paper proposes a novel implementation of cDE, namely compact Differential Evolution light (cDElight), to address not only the memory saving necessities but also real-time requirements. cDElight employs two novel algorithmic modifications for achieving a smaller computational overhead without a performance loss, with respect to cDE. Numerical results, carried out on a broad set of test problems, show that cDElight, despite its minimal hardware requirements, does not deteriorate the performance of cDE and thus is competitive with other memory saving and population-based algorithms. An application in the field of mobile robotics highlights the usability and advantages of the proposed approach.
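
    A simplified sketch of the compact idea behind cDE follows; it is not the authors' cDElight, and the virtual population size, weight factor, and omission of crossover are assumptions of this sketch. The population is replaced by a per-dimension Gaussian model that is sampled to build a DE mutant, compared against a persistent elite, and nudged toward the winner.

        import numpy as np

        def compact_de(fitness, dim, bounds=(-5.0, 5.0), n_iter=5000,
                       virtual_pop=50, f_weight=0.5):
            rng = np.random.default_rng(0)
            lo, hi = bounds
            mu = np.zeros(dim)                 # per-dimension model mean
            sigma = np.full(dim, 10.0)         # wide initial spread

            def sample():
                return np.clip(rng.normal(mu, sigma), lo, hi)

            elite = sample()
            elite_fit = fitness(elite)
            for _ in range(n_iter):
                # DE/rand/1 mutant built from sampled, never stored, individuals
                xr1, xr2, xr3 = sample(), sample(), sample()
                trial = np.clip(xr1 + f_weight * (xr2 - xr3), lo, hi)
                trial_fit = fitness(trial)
                winner, loser = (trial, elite) if trial_fit < elite_fit else (elite, trial)
                if trial_fit < elite_fit:      # persistent elitism
                    elite, elite_fit = trial, trial_fit
                # Nudge the probabilistic model toward the winner
                old_mu = mu.copy()
                mu = mu + (winner - loser) / virtual_pop
                sigma = np.sqrt(np.abs(sigma**2 + old_mu**2 - mu**2
                                       + (winner**2 - loser**2) / virtual_pop))
            return elite, elite_fit

        # Example: minimize the 10-dimensional sphere function
        best, best_fit = compact_de(lambda x: float(np.sum(x * x)), dim=10)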

  10. Meeting the security requirements of electronic medical records in the era of high-speed computing.

    Science.gov (United States)

    Alanazi, H O; Zaidan, A A; Zaidan, B B; Kiah, M L Mat; Al-Bakri, S H

    2015-01-01

    This study has two objectives. First, it aims to develop a system with a highly secured approach to transmitting electronic medical records (EMRs), and second, it aims to identify entities that transmit private patient information without permission. The NTRU and the Advanced Encryption Standard (AES) cryptosystems are secured encryption methods. The AES is a tested technology that has already been utilized in several systems to secure sensitive data. The United States government has been using AES since June 2003 to protect sensitive and essential information. Meanwhile, NTRU protects sensitive data against attacks through the use of quantum computers, which can break the RSA cryptosystem and elliptic curve cryptography algorithms. A hybrid of AES and NTRU is developed in this work to improve EMR security. The proposed hybrid cryptography technique is implemented to secure the data transmission process of EMRs. The proposed security solution can provide protection for over 40 years and is resistant to quantum computers. Moreover, the technique provides the necessary evidence required by law to identify disclosure or misuse of patient records. The proposed solution can effectively secure EMR transmission and protect patient rights. It also identifies the source responsible for disclosing confidential patient records. The proposed hybrid technique for securing data managed by institutional websites must be improved in the future.
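
    A minimal sketch of the hybrid pattern the abstract describes is given below, using AES-256-GCM from the Python cryptography package for the record body; the step of wrapping the session key with an NTRU public key is only indicated in a comment, since NTRU bindings are library-specific, and none of this reflects the authors' actual implementation.

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        def encrypt_emr(record: bytes, session_key: bytes):
            # AES-256-GCM encrypts and authenticates the record body.
            # In the hybrid scheme described, session_key itself would then be
            # wrapped with an NTRU public key before transmission (omitted here).
            nonce = os.urandom(12)
            return nonce, AESGCM(session_key).encrypt(nonce, record, None)

        def decrypt_emr(nonce: bytes, ciphertext: bytes, session_key: bytes) -> bytes:
            return AESGCM(session_key).decrypt(nonce, ciphertext, None)

        key = AESGCM.generate_key(bit_length=256)
        nonce, blob = encrypt_emr(b"patient-id=123; visit-notes=...", key)
        assert decrypt_emr(nonce, blob, key) == b"patient-id=123; visit-notes=..."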

  11. Advanced Scientific Computing Research Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles; Bell, Greg; Canon, Shane; Dart, Eli; Dattoria, Vince; Goodwin, Dave; Lee, Jason; Hicks, Susan; Holohan, Ed; Klasky, Scott; Lauzon, Carolyn; Rogers, Jim; Shipman, Galen; Skinner, David; Tierney, Brian

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  12. Requirement emergence computation of networked software

    Institute of Scientific and Technical Information of China (English)

    HE Keqing; LIANG Peng; PENG Rong; LI Bing; LIU Jing

    2007-01-01

    Emergence computation has become a hot topic in the research of complex systems in recent years. With the substantial increase in scale and complexity of network-based information systems, uncertain user requirements from the Internet and personalized application requirements result in frequent changes to software requirements. Meanwhile, software systems built on resources they do not themselves own become more and more complex. Furthermore, the interaction and cooperation requirements between software units and the running environment in service computing increase the complexity of software systems. Software systems with complex-system characteristics are developing into "Networked Software" with characteristics of change-on-demand and change-with-cooperation. The concepts of "programming", "compiling" and "running" of software in the common sense are extended from the "desktop" to the "network". The core issue of software engineering is moving to requirements engineering, which becomes the research focus of complex-system software engineering. In this paper, we present the software network view based on complex system theory, and the concepts of networked software and networked requirements. We propose the challenge problem in the research of emergence computation of networked software requirements. A hierarchical and cooperative unified requirement modeling framework, URF (Unified Requirement Framework), and related RGPS (Role, Goal, Process and Service) meta-models are proposed. Five scales and the evolutionary growth mechanism in requirement emergence computation of networked software are given, with focus on user-dominant and domain-oriented requirements, and the rules and predictability in requirement emergence computation are analyzed. A case study in the application of networked e-Business with evolutionary growth based on the State design pattern is presented in the end.

  13. Cloud computing security requirements: a systematic review

    NARCIS (Netherlands)

    Iankoulova, Iliana; Daneva, Maya; Rolland, C.; Castro, J.; Pastor, O.

    2012-01-01

    Many publications have dealt with various types of security requirements in cloud computing but not all types have been explored in sufficient depth. It is also hard to understand which types of requirements have been under-researched and which are most investigated. This paper's goal is to provide

  14. White Paper on Institutional Capability Computing Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Kissel, L; McCoy, M G; Seager, M K

    2002-01-29

    This paper documents the need for a rapid, order-of-magnitude increase in the computing infrastructure provided to scientists working in the unclassified domains at Lawrence Livermore National Laboratory. This proposed increase could be viewed as a step in a broader strategy linking hardware evolution to applications development that would take LLNL unclassified computational science to a position of distinction, if not preeminence, by 2006. We believe that it is possible for LLNL institutional scientists to gain access late this year to a new system with a capacity roughly 80% to 200% that of the 12-TF/s (twelve trillion floating-point operations per second) ASCI White system for a cost that is an order of magnitude lower than the White system. This platform could be used for first-class science-of-scale computing and for the development of aggressive, strategically chosen applications that can challenge the near PF/s (petaflop/s, a thousand trillion floating-point operations per second) scale systems ASCI is working to bring to the LLNL unclassified environment in 2005. As the distilled scientific requirements data presented in this document indicate, great computational science is being done at LLNL--the breadth of accomplishment is amazing. The computational efforts make it clear what a unique national treasure this Laboratory has become. While the projects cover a wide and varied application space, they share three elements--they represent truly great science, they have broad impact on the Laboratory's major technical programs, and they depend critically on big computers.

  15. Introduction to High Performance Scientific Computing

    OpenAIRE

    2016-01-01

    The field of high performance scientific computing lies at the crossroads of a number of disciplines and skill sets, and correspondingly, for someone to be successful at using high performance computing in science requires at least elementary knowledge of and skills in all these areas. Computations stem from an application context, so some acquaintance with physics and engineering sciences is desirable. Then, problems in these application areas are typically translated into linear algebraic, ...

  16. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  17. Optimal neural computations require analog processors

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    This paper discusses some of the limitations of hardware implementations of neural networks. The authors start by presenting neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural networks. Further, the focus will be on hardware-imposed constraints. They will present recent results for three different alternatives of parallel implementations of neural networks: digital circuits, threshold gate circuits, and analog circuits. The area and the delay will be related to the neurons' fan-in and to the precision of their synaptic weights. The main conclusion is that hardware-efficient solutions require analog computations, and suggests the following two alternatives: (1) cope with the limitations imposed by silicon by speeding up the computation of the elementary silicon neurons; (2) investigate solutions which would allow the use of the third dimension (e.g., using optical interconnections).

  18. Baseline Requirements and Architecture for Cloud Computing Services

    Directory of Open Access Journals (Sweden)

    Abdur Rahim Choudhary

    2012-12-01

    Government initiatives such as the “Cloud First” policy are bringing the cloud computing services into Federal Agencies. Further, many of the sectors in the Critical Infrastructure of the nation already use cloud computing. Although cloud computing services are slowly coming of age, many issues remain. This paper therefore takes a closer look at the cloud computing services. First it establishes a baseline by specifying high level requirements for cloud computing services. Next it improves upon the current architecture for the cloud computing services by adding new modules to the current architecture. The new modules are gleaned from an analysis of the telecommunications cloud and security in distributed systems. The new modules include a management and control network, a set of trust domains, and a set of proxies. The improved architecture is more ready for primetime use and supports a richer operational model.

  19. High Performance Computing Today

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack; Meuer,Hans; Simon,Horst D.; Strohmaier,Erich

    2000-04-01

    In the last 50 years, the field of scientific computing has seen a rapid change of vendors, architectures, technologies and the usage of systems. Despite all these changes, the evolution of performance on a large scale seems to be a very steady and continuous process. Moore's Law is often cited in this context. If the authors plot the peak performance of the various computers of the last five decades that could have been called the supercomputers of their time (Figure 1 of the paper), they indeed see how well this law holds for almost the complete lifespan of modern computing. On average they see an increase in performance of two orders of magnitude every decade.
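
    The quoted trend, two orders of magnitude per decade, converts to roughly a 1.6x yearly factor, i.e. performance doubling about every year and a half:

        import math

        annual = 100 ** (1 / 10)                         # about 1.58x per year
        doubling_years = math.log(2) / math.log(annual)  # about 1.5 years
        print(f"{annual:.2f}x per year, doubling every {doubling_years:.1f} years")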

  20. PRCA:A highly efficient computing architecture

    Institute of Scientific and Technical Information of China (English)

    Luo Xingguo

    2014-01-01

    Applications can only reach 8%-15% of utilization on modern computer systems. There are many obstacles to improving system efficiency. The key root is the conflict between the fixed general computer architecture and the variable requirements of applications. Proactive reconfigurable computing architecture (PRCA) is proposed to improve computing efficiency. PRCA dynamically constructs an efficient computing architecture for a specific application via reconfigurable technology by perceiving requirements, workload and utilization of computing resources. Proactive decision support system (PDSS), hybrid reconfigurable computing array (HRCA) and reconfigurable interconnect (RIC) are intensively researched as the key technologies. The principles of PRCA have been verified with four applications on a test bed. It is shown that PRCA is feasible and highly efficient.

  1. Parallel Computational Fluid Dynamics: Current Status and Future Requirements

    Science.gov (United States)

    Simon, Horst D.; VanDalsem, William R.; Dagum, Leonardo; Kutler, Paul (Technical Monitor)

    1994-01-01

    One of the key objectives of the Applied Research Branch in the Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center is the accelerated introduction of highly parallel machines into a full operational environment. In this report we discuss the performance results obtained from the implementation of some computational fluid dynamics (CFD) applications on the Connection Machine CM-2 and the Intel iPSC/860. We summarize some of the experience gained so far with the parallel testbed machines at the NAS Applied Research Branch. Then we discuss the long term computational requirements for accomplishing some of the grand challenge problems in computational aerosciences. We argue that only massively parallel machines will be able to meet these grand challenge requirements, and we outline the computer science and algorithm research challenges ahead.

  2. High assurance services computing

    CERN Document Server

    2009-01-01

    Covers service-oriented technologies in different domains, including high assurance systems. Assists software engineers from industry and government laboratories who develop mission-critical software, and simultaneously provides academia with a practitioner's outlook on the problems of high-assurance software development.

  3. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  4. High Energy Physics Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and High Energy Physics, June 10-12, 2015, Bethesda, Maryland

    Energy Technology Data Exchange (ETDEWEB)

    Habib, Salman [Argonne National Lab. (ANL), Argonne, IL (United States); Roser, Robert [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Antypas, Katie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Riley, Katherine [Argonne National Lab. (ANL), Argonne, IL (United States); Williams, Tim [Argonne National Lab. (ANL), Argonne, IL (United States); Wells, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Straatsma, Tjerk [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-31

    The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude — and in some cases greater — than that available currently. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP’s research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be

  5. Architectural requirements for the Red Storm computing system.

    Energy Technology Data Exchange (ETDEWEB)

    Camp, William J.; Tomkins, James Lee

    2003-10-01

    This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development and production operation of similar machines at Sandia. Red Storm will have an unusually high bandwidth, low latency interconnect, specially designed hardware and software reliability features, a lightweight kernel compute node operating system and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and it is expected that the system will be operated for a minimum of five years following installation.

  6. 12 CFR 204.4 - Computation of required reserves.

    Science.gov (United States)

    2010-01-01

    ... RESERVE REQUIREMENTS OF DEPOSITORY INSTITUTIONS (REGULATION D) § 204.4 Computation of required reserves. (a) In determining the reserve requirement under this part, the amount of cash items in process of... reserves are computed by applying the reserve requirement ratios below to net transaction...
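
    A hedged arithmetic sketch of the computation this section describes, applying tiered reserve-requirement ratios to net transaction accounts, is shown below; the exemption amount, tranche cutoff, and the 3%/10% ratios are illustrative placeholders, not the figures in the regulation.

        def required_reserves(net_transaction_accounts: float,
                              exemption: float = 16_300_000,
                              low_tranche: float = 124_200_000) -> float:
            # Placeholder tiers: 0% up to the exemption, 3% up to the tranche
            # cutoff, 10% above it.
            if net_transaction_accounts <= exemption:
                return 0.0
            if net_transaction_accounts <= low_tranche:
                return 0.03 * (net_transaction_accounts - exemption)
            return (0.03 * (low_tranche - exemption)
                    + 0.10 * (net_transaction_accounts - low_tranche))

        print(required_reserves(200_000_000))  # reserves for a $200M institution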

  7. Computer Forensics, Search Strategies, and the Particularity Requirement

    Directory of Open Access Journals (Sweden)

    Wayne Jekot

    2007-04-01

    Assuming that a person subject to a search and seizure of his or her computer has a reasonable expectation of privacy in the contents of the computer, and thus a warrant is required, should the warrant outline a “search strategy”? Or should comprehensive computer searches be permitted? In other words, how should the particularity requirement be applied to computer searches? Correspondingly, what can a forensic examiner do under a warrant while collecting potential evidence from a computer? [...]

  8. Factors Affecting Computer Anxiety in High School Computer Science Students.

    Science.gov (United States)

    Hayek, Linda M.; Stephens, Larry

    1989-01-01

    Examines factors related to computer anxiety measured by the Computer Anxiety Index (CAIN). Achievement in two programing courses was inversely related to computer anxiety. Students who had a home computer and had computer experience before high school had lower computer anxiety than those who had not. Lists 14 references. (YP)

  9. Computer proficiency questionnaire: assessing low and high computer proficient seniors.

    Science.gov (United States)

    Boot, Walter R; Charness, Neil; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-06-01

    Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. The CPQ demonstrated excellent reliability (Cronbach's α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults.
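
    The reliability statistic reported for the CPQ, Cronbach's alpha, can be computed from a respondents-by-items score matrix as in the sketch below; the random scores are placeholders, not CPQ data.

        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """items: respondents x questions matrix of scores."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

        rng = np.random.default_rng(1)
        scores = rng.integers(1, 6, size=(100, 12))   # 100 respondents, 12 items
        print(round(cronbach_alpha(scores), 3))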

  10. Requirements on high resolution detectors

    Energy Technology Data Exchange (ETDEWEB)

    Koch, A. [European Synchrotron Radiation Facility, Grenoble (France)]

    1997-02-01

    For a number of microtomography applications X-ray detectors with a spatial resolution of 1 μm are required. This high spatial resolution will influence and degrade other parameters of secondary importance like detective quantum efficiency (DQE), dynamic range, linearity and frame rate. This note summarizes the most important arguments, for and against those detector systems which could be considered. This article discusses the mutual dependencies between the various figures which characterize a detector, and tries to give some ideas on how to proceed in order to improve present technology.

  11. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  12. Computationally driven, quantitative experiments discover genes required for mitochondrial biogenesis.

    Directory of Open Access Journals (Sweden)

    David C Hess

    2009-03-01

    Mitochondria are central to many cellular processes including respiration, ion homeostasis, and apoptosis. Using computational predictions combined with traditional quantitative experiments, we have identified 100 proteins whose deficiency alters mitochondrial biogenesis and inheritance in Saccharomyces cerevisiae. In addition, we used computational predictions to perform targeted double-mutant analysis detecting another nine genes with synthetic defects in mitochondrial biogenesis. This represents an increase of about 25% over previously known participants. Nearly half of these newly characterized proteins are conserved in mammals, including several orthologs known to be involved in human disease. Mutations in many of these genes demonstrate statistically significant mitochondrial transmission phenotypes more subtle than could be detected by traditional genetic screens or high-throughput techniques, and 47 have not been previously localized to mitochondria. We further characterized a subset of these genes using growth profiling and dual immunofluorescence, which identified genes specifically required for aerobic respiration and an uncharacterized cytoplasmic protein required for normal mitochondrial motility. Our results demonstrate that by leveraging computational analysis to direct quantitative experimental assays, we have characterized mutants with subtle mitochondrial defects whose phenotypes were undetected by high-throughput methods.

  13. Scientific Application Requirements for Leadership Computing at the Exascale

    Energy Technology Data Exchange (ETDEWEB)

    Ahern, Sean [ORNL; Alam, Sadaf R [ORNL; Fahey, Mark R [ORNL; Hartman-Baker, Rebecca J [ORNL; Barrett, Richard F [ORNL; Kendall, Ricky A [ORNL; Kothe, Douglas B [ORNL; Mills, Richard T [ORNL; Sankaran, Ramanan [ORNL; Tharrington, Arnold N [ORNL; White III, James B [ORNL

    2007-12-01

    , possess fault tolerance, exploit asynchronism, and are power-consumption aware. On the other hand, we must also provide application scientists with the ability to develop software without having to become experts in the computer science components. Numerical algorithms are scattered broadly across science domains, with no one particular algorithm being ubiquitous and no one algorithm going unused. Structured grids and dense linear algebra continue to dominate, but other algorithm categories will become more common. A significant increase is projected for Monte Carlo algorithms, unstructured grids, sparse linear algebra, and particle methods, and a relative decrease foreseen in fast Fourier transforms. These projections reflect the expectation of much higher architecture concurrency and the resulting need for very high scalability. The new algorithm categories that application scientists expect to be increasingly important in the next decade include adaptive mesh refinement, implicit nonlinear systems, data assimilation, agent-based methods, parameter continuation, and optimization. The attributes of leadership computing systems expected to increase most in priority over the next decade are (in order of importance) interconnect bandwidth, memory bandwidth, mean time to interrupt, memory latency, and interconnect latency. The attributes expected to decrease most in relative priority are disk latency, archival storage capacity, disk bandwidth, wide area network bandwidth, and local storage capacity. These choices by application developers reflect the expected needs of applications or the expected reality of available hardware. One interpretation is that the increasing priorities reflect the desire to increase computational efficiency to take advantage of increasing peak flops [floating point operations per second], while the decreasing priorities reflect the expectation that computational efficiency will not increase. Per-core requirements appear to be relatively static

  14. High-Precision Computation and Mathematical Physics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2008-11-03

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
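
    A small illustration of the kind of high-precision arithmetic the paper surveys, using the mpmath package (an assumption of this sketch, not a tool named in the abstract): the near-integer quantity exp(pi*sqrt(163)) evaluated at 50 digits versus IEEE double precision.

        import math
        from mpmath import mp, mpf, exp, pi, sqrt

        mp.dps = 50                               # 50 significant decimal digits
        high = exp(pi * sqrt(mpf(163)))           # resolves the tiny gap to the nearest integer
        low = math.exp(math.pi * math.sqrt(163))  # double precision cannot resolve that gap
        print(high)
        print(low)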

  15. Condor-COPASI: high-throughput computing for biochemical networks

    OpenAIRE

    Kent Edward; Hoops Stefan; Mendes Pedro

    2012-01-01

    Background: Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary experti...

  16. Resource requirements for digital computations on electrooptical systems.

    Science.gov (United States)

    Eshaghian, M M; Panda, D K; Kumar, V K

    1991-03-10

    In this paper we study the resource requirements of electrooptical organizations in performing digital computing tasks. We define a generic model of parallel computation using optical interconnects, called the optical model of computation (OMC). In this model, computation is performed in digital electronics and communication is performed using free space optics. Using this model we derive relationships between information transfer and computational resources in solving a given problem. To illustrate our results, we concentrate on a computationally intensive operation, 2-D digital image convolution. Irrespective of the input/output scheme and the order of computation, we show a lower bound of Ω(nw) on the optical volume required for convolving a w x w kernel with an n x n image, if the input bits are given to the system only once.
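
    For scale, the operation the bound refers to, convolving a w x w kernel with an n x n image, costs on the order of n^2 w^2 multiply-accumulates regardless of how the input/output is scheduled; a minimal sketch using scipy follows.

        import numpy as np
        from scipy.signal import convolve2d

        n, w = 256, 5
        image = np.random.rand(n, n)
        kernel = np.random.rand(w, w)
        result = convolve2d(image, kernel, mode="same")
        print(result.shape, n * n * w * w)   # output size and approximate multiply-add count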

  17. Resource requirements for digital computations on electrooptical systems

    Science.gov (United States)

    Eshaghian, Mary M.; Panda, Dhabaleswar K.; Kumar, V. K. Prasanna

    1991-03-01

    The resource requirements of electrooptical organizations in performing digital computing tasks are studied via a generic model of parallel computation using optical interconnects, called the 'optical model of computation' (OMC). In this model, computation is performed in digital electronics and communication is performed using free space optics. Relationships between information transfer and computational resources in solving a given problem are derived. A computationally intensive operation, two-dimensional digital image convolution is undertaken. Irrespective of the input/output scheme and the order of computation, a lower bound of Omega(nw) is obtained on the optical volume required for convolving a w x w kernel with an n x n image, if the input bits are given to the system only once.

  18. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles [Argonne National Lab. (ANL), Argonne, IL (United States); Bell, Greg [ESnet, Berkeley, CA (United States); Canon, Shane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [ESnet, Berkeley, CA (United States); Dattoria, Vince [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Goodwin, Dave [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Lee, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hicks, Susan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Holohan, Ed [Argonne National Lab. (ANL), Argonne, IL (United States); Klasky, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lauzon, Carolyn [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Rogers, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Skinner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tierney, Brian [ESnet, Berkeley, CA (United States)

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  19. High-performance computers for unmanned vehicles

    Science.gov (United States)

    Toms, David; Ettinger, Gil J.

    2005-10-01

    The present trend of increasing functionality onboard unmanned vehicles is made possible by rapid advances in high-performance computers (HPCs). An HPC is characterized by very high computational capability (100s of billions of operations per second) contained in lightweight, rugged, low-power packages. HPCs are critical to the processing of sensor data onboard these vehicles. Operations such as radar image formation, target tracking, target recognition, signal intelligence signature collection and analysis, electro-optic image compression, and onboard data exploitation are provided by these machines. The net effect of an HPC is to minimize communication bandwidth requirements and maximize mission flexibility. This paper focuses on new and emerging technologies in the HPC market. Emerging capabilities include new lightweight, low-power computing systems: multi-mission computing (using a common computer to support several sensors); onboard data exploitation; and large image data storage capacities. These new capabilities will enable an entirely new generation of deployed capabilities at reduced cost. New software tools and architectures available to unmanned vehicle developers will enable them to rapidly develop optimum solutions with maximum productivity and return on investment. These new technologies effectively open the trade space for unmanned vehicle designers.

  20. Computational requirements for on-orbit identification of space systems

    Science.gov (United States)

    Hadaegh, Fred Y.

    1988-01-01

    For future space systems, on-orbit identification (ID) capability will be required to complement on-orbit control, due to the fact that the dynamics of large space structures, spacecraft, and antennas will not be known sufficiently from ground modeling and testing. The computational requirements for ID of flexible structures such as the space station (SS) or the large deployable reflectors (LDR) are, however, extensive due to the large number of modes, sensors, and actuators. For these systems the ID algorithm operations need not be computed in real-time, only in near real-time, or an appropriate mission time. Consequently the space systems will need advanced processors and efficient parallel processing algorithm design and architectures to implement the identification algorithms in near real-time. The MAX computer currently being developed may handle such computational requirements. The purpose is to specify the on-board computational requirements for dynamic and static identification for large space structures. The computational requirements for six ID algorithms are presented in the context of three examples: the JPL/AFAL ground antenna facility, the space station (SS), and the large deployable reflector (LDR).

  1. Tripod of Requirements in Horizontal Heterogeneous Mobile Cloud Computing

    CERN Document Server

    Sanaei, Zohreh; Gani, Abdullah; Khokhar, Rashid Hafeez

    2012-01-01

    The recent trend of mobile computing is toward executing resource-intensive applications in mobile devices regardless of underlying resource restrictions (e.g. limited processor and energy), which necessitates imminent technologies. The prosperity of cloud computing in stationary computers breeds Mobile Cloud Computing (MCC) technology, which aims to augment the computing and storage capabilities of mobile devices besides conserving energy. However, MCC is more heterogeneous and unreliable (due to wireless connectivity) compared to cloud computing. Problems like variations in OS, data fragmentation, and security and privacy discourage and decelerate the implementation and pervasiveness of MCC. In this paper, we describe MCC as a horizontal heterogeneous ecosystem and identify thirteen critical metrics and approaches that influence mobile-cloud solutions and the success of MCC. We divide them into three major classes, namely ubiquity, trust, and energy efficiency, and devise a tripod of requirements in MCC. Our proposed trip...

  2. Computer Controlled High Precise, High Voltage Pulse Generator

    Institute of Scientific and Technical Information of China (English)

    但果; 邹积岩; 丛吉远; 董恩源

    2003-01-01

    A high-precision, high-voltage pulse generator made up of high-power IGBTs and pulse transformers and controlled by a computer is described. A simple main circuit topology employed in this pulse generator reduces the cost while still meeting the special requirements for pulsed electric fields (PEFs) in food processing. The pulse generator utilizes a complex programmable logic device (CPLD) to generate trigger signals. Pulse frequency, pulse width and pulse number are controlled via an RS232 bus by a computer. The high voltage pulse generator is well suited to applications of non-thermal pulsed-electric-field treatment of fluid food, as its output can be increased and decreased with a step length of 1.
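
    A hedged sketch of the control path described, sending pulse frequency, width, and count to the generator over RS232 with pyserial, is shown below; the port name, baud rate, and ASCII command format are assumptions, not the instrument's documented protocol.

        import serial  # pyserial

        # Hypothetical settings; real units and command names depend on the instrument.
        settings = {"FREQ_HZ": 100, "WIDTH_US": 20, "COUNT": 50}

        with serial.Serial("/dev/ttyS0", baudrate=9600, timeout=1) as port:
            for name, value in settings.items():
                port.write(f"{name} {value}\r\n".encode("ascii"))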

  3. 2005 White Paper on Institutional Capability Computing Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Carnes, B; McCoy, M; Seager, M

    2006-01-20

    This paper documents the need for a significant increase in the computing infrastructure provided to scientists working in the unclassified domains at Lawrence Livermore National Laboratory (LLNL). This need could be viewed as the next step in a broad strategy outlined in the January 2002 White Paper (UCRL-ID-147449) that bears essentially the same name as this document. Therein we wrote: 'This proposed increase could be viewed as a step in a broader strategy linking hardware evolution to applications development that would take LLNL unclassified computational science to a position of distinction if not preeminence by 2006.' This position of distinction has certainly been achieved. This paper provides a strategy for sustaining this success but will diverge from its 2002 predecessor in that it will: (1) Amplify the scientific and external success LLNL has enjoyed because of the investments made in 2002 (MCR, 11 TF) and 2004 (Thunder, 23 TF). (2) Describe in detail the nature of additional investments that are important to meet both the institutional objectives of advanced capability for breakthrough science and the scientists clearly stated request for adequate capacity and more rapid access to moderate-sized resources. (3) Put these requirements in the context of an overall strategy for simulation science and external collaboration. While our strategy for Multiprogrammatic and Institutional Computing (M&IC) has worked well, three challenges must be addressed to assure and enhance our position. The first is that while we now have over 50 important classified and unclassified simulation codes available for use by our computational scientists, we find ourselves coping with high demand for access and long queue wait times. This point was driven home in the 2005 Institutional Computing Executive Group (ICEG) 'Report Card' to the Deputy Director for Science and Technology (DDST) Office and Computation Directorate management. The second challenge is

  4. High Energy Computed Tomographic Inspection of Munitions

    Science.gov (United States)

    2016-11-01

    [Report documentation page; recoverable details only] Technical Report AREIS-TR-16006, November 2016 (final report): high energy computed tomographic inspection of munitions, covering inspection that could not otherwise be accomplished by other nondestructive testing methods. Subject terms: radiography, high energy, computed tomography (CT).

  5. PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Joubert, Wayne [ORNL; Kothe, Douglas B [ORNL; Nam, Hai Ah [ORNL

    2009-12-01

    In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will become necessarily heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption portends to be substantial, not unlike the change to the message passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to insure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be

  6. Computing support for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Avery, P.; Yelton, J. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-01

    This computing proposal (Task S) is submitted separately but in support of the High Energy Experiment (CLEO, Fermilab, CMS) and Theory tasks. The authors have built a very strong computing base at Florida over the past 8 years. In fact, computing has been one of the main contributions to their experimental collaborations, involving not just computing capacity for running Monte Carlos and data reduction, but participation in many computing initiatives, industrial partnerships, computing committees and collaborations. These facts justify the submission of a separate computing proposal.

  7. Establishing performance requirements of computer based systems subject to uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, D.

    1997-02-01

    An organized systems design approach is dictated by the increasing complexity of computer based systems. Computer based systems are unique in many respects but share many of the same problems that have plagued design engineers for decades. The design of complex systems is difficult at best, but as a design becomes intensively dependent on the computer processing of external and internal information, the design process quickly borders on chaos. This situation is exacerbated by the requirement that these systems operate with a minimal quantity of information, generally corrupted by noise, regarding the current state of the system. Establishing performance requirements for such systems is particularly difficult. This paper briefly sketches a general systems design approach with emphasis on the design of computer based decision processing systems subject to parameter and environmental variation. The approach will be demonstrated with an application to an on-board diagnostic (OBD) system for automotive emissions systems now mandated by the state of California and the Federal Clean Air Act. The emphasis is on an approach for establishing probabilistically based performance requirements for computer based systems.
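    As a hedged illustration of what a probabilistically based performance requirement can look like (this is not the paper's actual OBD analysis), the Python sketch below uses Monte Carlo sampling over assumed parameter and noise distributions to estimate the probability that a simple threshold-based monitor detects a degraded component; all distributions, thresholds and names are invented for the example.

```python
import numpy as np

# Hypothetical detection model for a threshold-based monitor (illustration only):
# the measured signal is the true degradation level plus sensor noise, and the
# monitor flags a fault when the signal exceeds a calibrated threshold.
rng = np.random.default_rng(0)
n_samples = 100_000

true_level = rng.normal(1.0, 0.15, n_samples)   # component-to-component variation (assumed)
noise = rng.normal(0.0, 0.10, n_samples)        # measurement noise (assumed)
threshold = 0.8                                 # calibrated detection threshold (assumed)

detection_probability = np.mean(true_level + noise > threshold)
print(f"P(monitor detects degraded component) = {detection_probability:.3f}")
# A probabilistic requirement could then be stated as, for example,
# "the monitor shall detect the fault with probability >= 0.95".
```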

  8. High performance computing for beam physics applications

    Science.gov (United States)

    Ryne, R. D.; Habib, S.

    Several countries are now involved in efforts aimed at utilizing accelerator-driven technologies to solve problems of national and international importance. These technologies have both economic and environmental implications. The technologies include waste transmutation, plutonium conversion, neutron production for materials science and biological science research, neutron production for fusion materials testing, fission energy production systems, and tritium production. All of these projects require a high-intensity linear accelerator that operates with extremely low beam loss. This presents a formidable computational challenge: One must design and optimize over a kilometer of complex accelerating structures while taking into account beam loss to an accuracy of 10 parts per billion per meter. Such modeling is essential if one is to have confidence that the accelerator will meet its beam loss requirement, which ultimately affects system reliability, safety and cost. At Los Alamos, the authors are developing a capability to model ultra-low loss accelerators using the CM-5 at the Advanced Computing Laboratory. They are developing PIC, Vlasov/Poisson, and Langevin/Fokker-Planck codes for this purpose. With slight modification, they have also applied their codes to modeling mesoscopic systems and astrophysical systems. In this paper, they will first describe HPC activities in the accelerator community. Then they will discuss the tools they have developed to model classical and quantum evolution equations. Lastly they will describe how these tools have been used to study beam halo in high current, mismatched charged particle beams.

  9. High-Performance Cloud Computing: A View of Scientific Applications

    CERN Document Server

    Vecchiola, Christian; Buyya, Rajkumar

    2009-01-01

    Scientific computing often requires the availability of a massive number of computers for performing large scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis. These resources can be released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service-based infrastructure...

  10. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  11. High-Speed Computer-Controlled Switch-Matrix System

    Science.gov (United States)

    Spisz, E.; Cory, B.; Ho, P.; Hoffman, M.

    1985-01-01

    High-speed computer-controlled switch-matrix system developed for communication satellites. Satellite system controlled by onboard computer and all message-routing functions between uplink and downlink beams handled by newly developed switch-matrix system. Message requires only 2-microsecond interconnect period, repeated every millisecond.

  12. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described.Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  13. High-Productivity Computing in Computational Physics Education

    Science.gov (United States)

    Tel-Zur, Guy

    2011-03-01

    We describe the development of a new course in Computational Physics at the Ben-Gurion University. This elective course for 3rd year undergraduates and MSc. students is taught during one semester. Computational Physics is by now well accepted as the Third Pillar of Science. This paper's claim is that modern Computational Physics education should also deal with High-Productivity Computing. The traditional approach to teaching Computational Physics emphasizes "Correctness" and then "Accuracy"; we add "Performance." Along with topics in Mathematical Methods and case studies in Physics, the course devotes a significant amount of time to "Mini-Courses" on topics such as: High-Throughput Computing (Condor), Parallel Programming (MPI and OpenMP), How to build a Beowulf, Visualization, and Grid and Cloud Computing. The course intends to teach neither new physics nor new mathematics; instead it focuses on an integrated approach to solving problems, starting from the physics problem, the corresponding mathematical solution, the numerical scheme, writing an efficient computer code, and finally analysis and visualization.

  14. Nuclear Forces and High-Performance Computing: The Perfect Match

    Energy Technology Data Exchange (ETDEWEB)

    Luu, T; Walker-Loud, A

    2009-06-12

    High-performance computing is now enabling the calculation of certain nuclear interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. We briefly describe the state of the field and describe how progress in this field will impact the greater nuclear physics community. We give estimates of computational requirements needed to obtain certain milestones and describe the scientific and computational challenges of this field.

  15. High-performance scientific computing

    CERN Document Server

    Berry, Michael W; Gallopoulos, Efstratios

    2012-01-01

    This book presents the state of the art in parallel numerical algorithms, applications, architectures, and system software. The book examines various solutions for issues of concurrency, scale, energy efficiency, and programmability, which are discussed in the context of a diverse range of applications. Features: includes contributions from an international selection of world-class authorities; examines parallel algorithm-architecture interaction through issues of computational capacity-based codesign and automatic restructuring of programs using compilation techniques; reviews emerging applic

  16. SERVICE ORIENTED QUALITY REQUIREMENT FRAMEWORK FOR CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    Madhushi Rathnaayke

    2015-12-01

    This research paper introduces a framework to identify the quality requirements of cloud computing services. It considers two dominant sub-layers, the functional layer and the runtime layer, against cloud characteristics. SERVQUAL model attributes and the opinions of industry experts were used to derive the quality constructs in the cloud computing environment. The framework gives a proper identification of users' quality expectations for cloud computing services. The validity of the framework was evaluated using a questionnaire-based survey, and the partial least squares-structural equation modelling (PLS-SEM) technique was used to evaluate the outcome. The research findings show that the significance of the functional layer is higher than that of the runtime layer, and the prioritized quality factors of the two layers are service time, information and data security, recoverability, service transparency, and accessibility.

  17. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  18. Purple Computational Environment With Mappings to ACE Requirements for the General Availability User Environment Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Barney, B; Shuler, J

    2006-08-21

    Purple is an Advanced Simulation and Computing (ASC) funded massively parallel supercomputer located at Lawrence Livermore National Laboratory (LLNL). The Purple Computational Environment documents the capabilities and the environment provided for the FY06 LLNL Level 1 General Availability Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Sandia National Laboratories, but also documents needs of the LLNL and Alliance users working in the unclassified environment. Additionally, the Purple Computational Environment maps the provided capabilities to the Trilab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the General Availability user environment capabilities of the ASC community. Appendix A lists these requirements and includes a description of ACE requirements met and those requirements that are not met for each section of this document. The Purple Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the Tri-lab community.

  19. Human Computer Interface Design Criteria. Volume 1. User Interface Requirements

    Science.gov (United States)

    2010-03-19

    Only fragments of this report's documentation page survive extraction. The document is entitled Human Computer Interface (HCI) Design Criteria, Volume 1: User Interface Requirements; it originates from the Space and Missile Systems Center, Air Force Space Command, 483 N. Aviation Blvd., El Segundo, CA 90245, and "has been approved for use on all Space and..." (text truncated). A surviving fragment cites Mayhew's Principles and Guidelines, noting that users form an efficient model of how the system works and can generalize this knowledge to other systems.

  20. China's High Performance Computer Standard Commission Established

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    China's High Performance Computer Standard Commission was established on March 28, 2007, under the guidance of the Science and Technology Bureau of the Ministry of Information Industry. It will prepare relevant professional standards for high performance computers to break the monopoly that foreign manufacturers and vendors hold in the field.

  1. High-throughput computing in the sciences.

    Science.gov (United States)

    Morgan, Mark; Grimshaw, Andrew

    2009-01-01

    While it is true that the modern computer is many orders of magnitude faster than that of yesteryear, this tremendous growth in CPU clock rates is now over. Unfortunately, however, the growth in demand for computational power has not abated; whereas researchers a decade ago could simply wait for computers to get faster, today the only solution to the growing need for more powerful computational resources lies in the exploitation of parallelism. Software parallelization falls generally into two broad categories--"true parallel" and high-throughput computing. This chapter focuses on the latter of these two types of parallelism. With high-throughput computing, users can run many copies of their software at the same time across many different computers. This technique for achieving parallelism is powerful in its ability to provide high degrees of parallelism, yet simple in its conceptual implementation. This chapter covers various patterns of high-throughput computing usage and the skills and techniques necessary to take full advantage of them. By utilizing numerous examples and sample codes and scripts, we hope to provide the reader not only with a deeper understanding of the principles behind high-throughput computing, but also with a set of tools and references that will prove invaluable as she explores software parallelism with her own software applications and research.
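    As a minimal sketch of the high-throughput pattern described in this abstract (many independent copies of the same code running concurrently), the following Python example distributes a hypothetical simulation over local processes using only the standard library; on a real cluster the same pattern would be driven by a batch or high-throughput scheduler rather than a process pool.

```python
from concurrent.futures import ProcessPoolExecutor
import random

def run_trial(seed: int) -> float:
    """Hypothetical stand-in for one independent copy of a user's simulation."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(100_000))

if __name__ == "__main__":
    # Each trial is independent, so the copies can run anywhere, in any order.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_trial, range(100)))
    print(f"ran {len(results)} trials, min={min(results):.1f}, max={max(results):.1f}")
```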

  2. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard

    2014-05-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In March 2013, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Fusion Energy Sciences (FES) held a review to characterize High Performance Computing (HPC) and storage requirements for FES research through 2017. This report is the result.

  3. Resource estimation in high performance medical image computing.

    Science.gov (United States)

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources.
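    The abstract does not describe the authors' estimation method in detail; as a hedged sketch of the general idea, the following Python snippet fits hypothetical records of past runs with a simple linear model and pads the prediction with a safety margin before a job request is submitted. The numbers, units and helper names are assumptions for illustration only.

```python
import numpy as np

# Hypothetical history of past runs:
# (input size in megavoxels, runtime in minutes, peak memory in GB)
history = np.array([
    [ 50.0, 12.0,  3.1],
    [ 80.0, 19.0,  4.7],
    [120.0, 31.0,  6.9],
    [200.0, 52.0, 11.2],
])

def estimate_resources(input_size_mv: float, margin: float = 1.25):
    """Predict runtime and memory from a linear fit of past runs, padded by a
    safety margin so the scheduler request is unlikely to be exceeded."""
    sizes, runtimes, memories = history.T
    runtime = np.polyval(np.polyfit(sizes, runtimes, 1), input_size_mv) * margin
    memory = np.polyval(np.polyfit(sizes, memories, 1), input_size_mv) * margin
    return runtime, memory

runtime_min, memory_gb = estimate_resources(150.0)
print(f"request about {runtime_min:.0f} min and {memory_gb:.1f} GB for this job")
```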

  4. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a

  5. Large Scale Computing and Storage Requirements for Biological and Environmental Research

    Energy Technology Data Exchange (ETDEWEB)

    DOE Office of Science, Biological and Environmental Research Program Office (BER),

    2009-09-30

    In May 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of Biological and Environmental Research (BER) held a workshop to characterize HPC requirements for BER-funded research over the subsequent three to five years. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. Chief among them: scientific progress in BER-funded research is limited by current allocations of computational resources. Additionally, growth in mission-critical computing -- combined with new requirements for collaborative data manipulation and analysis -- will demand ever increasing computing, storage, network, visualization, reliability and service richness from NERSC. This report expands upon these key points and adds others. It also presents a number of "case studies" as significant representative samples of the needs of science teams within BER. Workshop participants were asked to codify their requirements in this "case study" format, summarizing their science goals, methods of solution, current and 3-5 year computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, "multi-core" environment that is expected to dominate HPC architectures over the next few years.

  6. Identifying Nursing Computer Training Requirements using Web-based Assessment

    Directory of Open Access Journals (Sweden)

    Naser Ghazi

    2011-12-01

    Our work addresses issues of inefficiency and ineffectiveness in the training of nurses in computer literacy by developing an adaptive questionnaire system. This system works to identify the most effective training modules by evaluating applicants pre-training and post-training. Our system, the Systems Knowledge Assessment Tool (SKAT), aims to increase training proficiency, decrease training time and reduce costs associated with training by identifying the areas of training required, and those which are not required, for each individual. Based on the project's requirements, a number of HTML documents were designed to be used as templates in the implementation stage. During this stage, the milestone principle was used, in which a series of coding and testing was performed to generate an error-free product. The decision-making process and its components, along with the priority of each attribute in the application, are responsible for determining the required training for each applicant. Thus, the decision-making process is an essential aspect of system design and greatly affects the training results of the applicant. The SKAT system has been evaluated to ensure that it meets the project's requirements. The evaluation stage was an important part of the project and required a number of nurses with different roles to evaluate the system. Based on their feedback, changes were made.

  7. NASA High-End Computing Program Website

    Science.gov (United States)

    Cohen, Jarrett S.

    2008-01-01

    If you are a NASA-sponsored scientist or engineer, computing time is available to you at the High-End Computing (HEC) Program's NASA Advanced Supercomputing (NAS) Facility and NASA Center for Computational Sciences (NCCS). The Science Mission Directorate will select from requests submitted to the e-Books online system for awards beginning on May 1. Current projects set to expire on April 30 must have a request in e-Books to be considered for renewal.

  8. NASA High-End Computing Program Website

    Science.gov (United States)

    Cohen, Jarrett S.

    2008-01-01

    If you are a NASA-sponsored scientist or engineer, computing time is available to you at the High-End Computing (HEC) Program's NASA Advanced Supercomputing (NAS) Facility and NASA Center for Computational Sciences (NCCS). The Science Mission Directorate will select from requests submitted to the e-Books online system for awards beginning on May 1. Current projects set to expire on April 30 must have a request in e-Books to be considered for renewal.

  9. Computer-aided hepatic tumour ablation requirements and preliminary results

    CERN Document Server

    Voirin, D; Amavizca, M; Letoublon, C; Troccaz, J; Voirin, David; Payan, Yohan; Amavizca, Miriam; Letoublon, Christian; Troccaz, Jocelyne

    2002-01-01

    Surgical resection of hepatic tumours is not always possible, since it depends on several factors, among which is their location within the liver's functional segments. Alternative techniques consist in the local use of chemical or physical agents to destroy the tumour. Radio frequency and cryosurgical ablations are examples of such alternative techniques that may be performed percutaneously. This requires precise localisation of the tumour during ablation. Computer-assisted surgery tools may be used in conjunction with these new ablation techniques to improve their therapeutic efficiency, whilst benefiting from minimal invasiveness. This paper introduces the principles of a system for computer-assisted hepatic tumour ablation and describes preliminary experiments focusing on data registration evaluation. To keep close to conventional protocols, we consider registration of pre-operative CT or MRI data to intra-operative echographic data.

  10. Indications for quantum computation requirements from comparative brain analysis

    Science.gov (United States)

    Bernroider, Gustav; Baer, Wolfgang

    2010-04-01

    Whether or not neuronal signal properties can engage 'non-trivial', i.e. functionally significant, quantum properties is the subject of an ongoing debate. Here we provide evidence that quantum coherence dynamics can play a functional role in the ion conduction mechanism, with consequences for the shape and associative character of classical membrane signals. In particular, these new perspectives predict that a specific neuronal topology (e.g. the connectivity pattern of cortical columns in the primate brain) is less important and not really required to explain abilities in perception and sensory-motor integration. Instead, this evidence is suggestive of a decisive role for the number and functional segregation of ion channel proteins that can be engaged in a particular neuronal constellation. We provide evidence from comparative brain studies and estimates of the computational capacity behind visual flight functions suggestive of a possible role for quantum computation in biological systems.

  11. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
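    As a toy illustration of the grouping idea in this patent abstract (not the patented implementation itself), the Python sketch below assigns threads to groups by the address of their current calling instruction; a small group parked at an unusual address is a natural candidate for a defective thread. The addresses and thread IDs are invented for the example.

```python
from collections import defaultdict

# Hypothetical per-thread calling-instruction addresses gathered by a debugger.
thread_call_addresses = {
    0: 0x400A10, 1: 0x400A10, 2: 0x400B24,
    3: 0x400A10, 4: 0x400B24, 5: 0x400C58,
}

# Assign threads to groups in dependence upon the addresses.
groups = defaultdict(list)
for thread_id, address in thread_call_addresses.items():
    groups[address].append(thread_id)

# Display the groups; outliers (small groups) are likely places to look for defects.
for address, thread_ids in sorted(groups.items(), key=lambda item: len(item[1])):
    print(f"0x{address:06x}: {len(thread_ids)} thread(s) -> {thread_ids}")
```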

  12. Requirements for supercomputing in energy research: The transition to massively parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    1993-02-01

    This report discusses: The emergence of a practical path to TeraFlop computing and beyond; requirements of energy research programs at DOE; implementation: supercomputer production computing environment on massively parallel computers; and implementation: user transition to massively parallel computing.

  13. Requirements for the evaluation of computational speech segregation systems.

    Science.gov (United States)

    May, Tobias; Dau, Torsten

    2014-12-01

    Recent studies on computational speech segregation reported improved speech intelligibility in noise when estimating and applying an ideal binary mask with supervised learning algorithms. However, an important requirement for such systems in technical applications is their robustness to acoustic conditions not considered during training. This study demonstrates that the spectro-temporal noise variations that occur during training and testing determine the achievable segregation performance. In particular, such variations strongly affect the identification of acoustical features in the system associated with perceptual attributes in speech segregation. The results could help establish a framework for a systematic evaluation of future segregation systems.
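    For readers unfamiliar with the ideal binary mask mentioned in this abstract, a minimal numpy sketch is given below; it assumes the speech and noise magnitude spectrograms are known separately (as they are for training material) and keeps the time-frequency units whose local SNR exceeds a criterion. This is a generic textbook construction, not the authors' segregation system, and the array shapes are assumptions.

```python
import numpy as np

def ideal_binary_mask(speech_mag: np.ndarray, noise_mag: np.ndarray, lc_db: float = 0.0) -> np.ndarray:
    """Return a binary mask that keeps time-frequency units whose local SNR
    exceeds the local criterion lc_db (inputs are magnitude spectrograms of
    the same shape, frequency x time)."""
    eps = 1e-12
    local_snr_db = 20.0 * np.log10((speech_mag + eps) / (noise_mag + eps))
    return (local_snr_db > lc_db).astype(np.float32)

# Example with random stand-in spectrograms (assumed shapes only).
rng = np.random.default_rng(1)
speech = rng.random((257, 100))
noise = rng.random((257, 100))
mask = ideal_binary_mask(speech, noise)
print(f"mask keeps {mask.mean():.1%} of the time-frequency units")
```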

  14. Requirements for the evaluation of computational speech segregation systems

    DEFF Research Database (Denmark)

    May, Tobias; Dau, Torsten

    2014-01-01

    Recent studies on computational speech segregation reported improved speech intelligibility in noise when estimating and applying an ideal binary mask with supervised learning algorithms. However, an important requirement for such systems in technical applications is their robustness to acoustic conditions not considered during training. This study demonstrates that the spectro-temporal noise variations that occur during training and testing determine the achievable segregation performance. In particular, such variations strongly affect the identification of acoustical features in the system associated with perceptual attributes in speech segregation. The results could help establish a framework for a systematic evaluation of future segregation systems.

  15. Federal High End Computing (HEC) Information Portal

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This portal provides information about opportunities to engage in U.S. Federal government high performance computing activities, including supercomputer use,...

  16. High School Physics and the Affordable Computer.

    Science.gov (United States)

    Harvey, Norman L.

    1978-01-01

    Explains how the computer was used in a high school physics course (the Project Physics program and an individualized-study PSSC physics program). Evaluates the capabilities and limitations of a $600 microcomputer system. (GA)

  17. The Impact of High Speed Machining on Computing and Automation

    Institute of Scientific and Technical Information of China (English)

    KKB Hon; BT Hang Tuah Baharudin

    2006-01-01

    Machine tool technologies, especially Computer Numerical Control (CNC) High Speed Machining (HSM), have emerged as effective mechanisms for Rapid Tooling and Manufacturing applications. These new technologies are attractive for competitive manufacturing because of their technical advantages, i.e. a significant reduction in lead-time, high product accuracy, and good surface finish. However, HSM not only stimulates advancements in cutting tools and materials, it also demands increasingly sophisticated CAD/CAM software and powerful CNC controllers that require more support technologies. This paper explores the computational requirements and impact of HSM on CNC controllers, wear detection, look-ahead programming, simulation, and tool management.

  18. High Available COTS Based Computer for Space

    Science.gov (United States)

    Hartmann, J.; Magistrati, Giorgio

    2015-09-01

    The availability and reliability factors of a system are central requirements of a target application. From a simple fuel injection system used in cars up to the flight control system of an autonomously navigating spacecraft, each application defines its specific availability factor under the target application's boundary conditions. Increasing quality requirements on data processing systems used in space flight applications call for new architectures to fulfill the availability and reliability requirements as well as the increase in required data processing power. In contrast to the increased quality requirements, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of the customer requirements, and reuse of available computer systems has not always been possible because of the obsolescence of EEE parts, insufficient IO capabilities, or the fact that available data processing systems did not provide the required scalability and performance.

  19. Dawning4000A high performance computer

    Institute of Scientific and Technical Information of China (English)

    SUN Ninghui; MENG Dan

    2007-01-01

    Dawning4000A is an AMD Opteron-based Linux cluster with 11.2 Tflops peak performance and 8.06 Tflops Linpack performance. It was developed for the Shanghai Supercomputer Center (SSC) as one of the computing power stations of the China National Grid (CNGrid) project. The Massively Cluster Computer (MCC) architecture is proposed to add value to the industry-standard system. Several grid-enabling components were developed to support the running environment of the CNGrid. It is an achievement in building a high performance computer with a low-cost approach.

  20. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  1. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, David E; Allan, Benjamin A; Armstrong, Robert C; Bertrand, Felipe; Chiu, Kenneth; Dahlgren, Tamara L; Damevski, Kostadin; Elwasif, Wael R; Epperly, Thomas G; Govindaraju, Madhusudhan; Katz, Daniel S; Kohl, James A; Krishnan, Manoj Kumar; Kumfert, Gary K; Larson, J Walter; Lefantzi, Sophia; Lewis, Michael J; Malony, Allen D; McInnes, Lois C; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G; Ray, Jaideep; Shende, Sameer; Windus, Theresa L; Zhou, Shujia

    2006-07-03

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  2. High-order hydrodynamic algorithms for exascale computing

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, Nathaniel Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-05

    Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.

  3. High performance computing: Clusters, constellations, MPPs, and future directions

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack; Sterling, Thomas; Simon, Horst; Strohmaier, Erich

    2003-06-10

    Last year's paper by Bell and Gray [1] examined past trends in high performance computing and asserted likely future directions based on market forces. While many of the insights drawn from this perspective have merit and suggest elements governing likely future directions for HPC, there are a number of points put forth that we feel require further discussion and, in certain cases, suggest alternative, more likely views. One area of concern relates to the nature and use of key terms to describe and distinguish among classes of high end computing systems, in particular the authors' use of "cluster" to refer to essentially all parallel computers derived through the integration of replicated components. The taxonomy implicit in their previous paper, while arguable and supported by some elements of our community, fails to provide the essential semantic discrimination critical to the effectiveness of descriptive terms as tools in managing the conceptual space of consideration. In this paper, we present a perspective that retains the descriptive richness while providing a unifying framework. A second area of discourse that calls for additional commentary is the likely future path of system evolution that will lead to effective and affordable Petaflops-scale computing, including the future role of computer centers as facilities for supporting high performance computing environments. This paper addresses the key issues of taxonomy, future directions towards Petaflops computing, and the important role of computer centers in the 21st century.

  4. Computational aspects of steel fracturing pertinent to naval requirements.

    Science.gov (United States)

    Matic, Peter; Geltmacher, Andrew; Rath, Bhakta

    2015-03-28

    Modern high strength and ductile steels are a key element of US Navy ship structural technology. The development of these alloys spurred the development of modern structural integrity analysis methods over the past 70 years. Strength and ductility provided the designers and builders of navy surface ships and submarines with the opportunity to reduce ship structural weight, increase hull stiffness, increase damage resistance, improve construction practices and reduce maintenance costs. This paper reviews how analytical and computational tools, driving simulation methods and experimental techniques, were developed to provide ongoing insights into the material, damage and fracture characteristics of these alloys. The need to understand alloy fracture mechanics provided unique motivations to measure and model performance from structural to microstructural scales. This was done while accounting for the highly nonlinear behaviours of both materials and underlying fracture processes. Theoretical methods, data acquisition strategies, computational simulation and scientific imaging were applied to increasingly smaller scales and complex materials phenomena under deformation. Knowledge gained about fracture resistance was used to meet minimum fracture initiation, crack growth and crack arrest characteristics as part of overall structural integrity considerations. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  5. High-performance computing for airborne applications

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Manuzzato, Andrea [Los Alamos National Laboratory; Fairbanks, Tom [Los Alamos National Laboratory; Dallmann, Nicholas [Los Alamos National Laboratory; Desgeorges, Rose [Los Alamos National Laboratory

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  6. Software Synthesis for High Productivity Exascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bodik, Rastislav [Univ. of Washington, Seattle, WA (United States)

    2010-09-01

    Over the three years of our project, we accomplished three key milestones: We demonstrated how ideas from generative programming and software synthesis can help support the development of bulk-synchronous distributed memory kernels. These ideas are realized in a new language called MSL, a C-like language that combines synthesis features with high level notations for array manipulation and bulk-synchronous parallelism to simplify the semantic analysis required for synthesis. We also demonstrated that these high level notations map easily to low level C code and show that the performance of this generated code matches that of handwritten Fortran. Second, we introduced the idea of solver-aided domain-specific languages (SDSLs), which are an emerging class of computer-aided programming systems. SDSLs ease the construction of programs by automating tasks such as verification, debugging, synthesis, and non-deterministic execution. SDSLs are implemented by translating the DSL program into logical constraints. Next, we developed a symbolic virtual machine called Rosette, which simplifies the construction of such SDSLs and their compilers. We have used Rosette to build SynthCL, a subset of OpenCL that supports synthesis. Third, we developed novel numeric algorithms that move as little data as possible, either between levels of a memory hierarchy or between parallel processors over a network. We achieved progress in three aspects of this problem. First we determined lower bounds on communication. Second, we compared these lower bounds to widely used versions of these algorithms, and noted that these widely used algorithms usually communicate asymptotically more than is necessary. Third, we identified or invented new algorithms for most linear algebra problems that do attain these lower bounds, and demonstrated large speed-ups in theory and practice.

  7. Condor-COPASI: high-throughput computing for biochemical networks

    Directory of Open Access Journals (Sweden)

    Kent Edward

    2012-07-01

    Background: Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results: We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions: Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage.
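    The abstract notes that tasks are transparently split into smaller parts before submission to a Condor pool. As a hedged, generic sketch of that splitting step (not Condor-COPASI's actual code), the following Python snippet divides a hypothetical parameter scan into chunks, one chunk per cluster job.

```python
def split_scan(values, n_jobs):
    """Split a list of parameter values into n_jobs roughly equal chunks,
    one chunk per job submitted to the pool."""
    chunks = [[] for _ in range(n_jobs)]
    for index, value in enumerate(values):
        chunks[index % n_jobs].append(value)
    return chunks

# Hypothetical scan of 1000 parameter values spread over 20 jobs.
parameter_values = [0.1 * k for k in range(1000)]
for job_id, chunk in enumerate(split_scan(parameter_values, 20)):
    print(f"job {job_id}: {len(chunk)} parameter values")
```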

  8. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  9. Linear algebra on high-performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J.; Sorensen, D.C.

    1986-01-01

    This paper surveys work recently done at Argonne National Laboratory in an attempt to discover ways to construct numerical software for high-performance computers. The numerical algorithms are taken from several areas of numerical linear algebra. We discuss certain architectural features of advanced-computer architectures that will affect the design of algorithms. The technique of restructuring algorithms in terms of certain modules is reviewed. This technique has proved successful in obtaining a high level of transportability without severe loss of performance on a wide variety of both vector and parallel computers. The module technique is demonstrably effective for dense linear algebra problems. However, in the case of sparse and structured problems it may be difficult to identify general modules that will be as effective. New algorithms have been devised for certain problems in this category. We present examples in three important areas: banded systems, sparse QR factorization, and symmetric eigenvalue problems. 32 refs., 10 figs., 6 tabs.
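    As a small illustration of the module-based restructuring described above, the Python/numpy sketch below computes a matrix product block by block, so that most of the arithmetic happens inside cache-sized sub-blocks; it is a generic example of the technique, not the Argonne software, and the block size is an arbitrary assumption.

```python
import numpy as np

def blocked_matmul(A: np.ndarray, B: np.ndarray, block: int = 64) -> np.ndarray:
    """Compute A @ B by looping over sub-blocks, the kind of restructuring that
    lets dense linear algebra spend most of its time in high-performance kernels."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m), dtype=A.dtype)
    for i in range(0, n, block):
        for j in range(0, m, block):
            for p in range(0, k, block):
                C[i:i + block, j:j + block] += A[i:i + block, p:p + block] @ B[p:p + block, j:j + block]
    return C

A = np.random.rand(256, 300)
B = np.random.rand(300, 128)
print(np.allclose(blocked_matmul(A, B), A @ B))  # True
```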

  10. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  11. High-Degree Neurons Feed Cortical Computations.

    Directory of Open Access Journals (Sweden)

    Nicholas M Timme

    2016-05-01

    Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to
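    As a self-contained illustration of one of the measures named in this abstract, the sketch below estimates transfer entropy between two binary spike trains with a simple plug-in (histogram) estimator and one bin of history; the study's actual analysis pipeline is not reproduced here, so this is only a conceptual example with invented data.

```python
import numpy as np

def transfer_entropy(source: np.ndarray, target: np.ndarray) -> float:
    """Plug-in estimate of transfer entropy (in bits) from a binary source
    spike train to a binary target spike train, using one bin of history."""
    x_past, y_past, y_next = source[:-1], target[:-1], target[1:]
    counts = np.zeros((2, 2, 2))
    for a, b, c in zip(y_next, y_past, x_past):
        counts[a, b, c] += 1
    p = counts / counts.sum()
    te = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                if p[a, b, c] == 0:
                    continue
                p_cond_full = p[a, b, c] / p[:, b, c].sum()        # p(y_next | y_past, x_past)
                p_cond_self = p[a, b, :].sum() / p[:, b, :].sum()  # p(y_next | y_past)
                te += p[a, b, c] * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1)  # the target copies the source with a one-bin delay
print(f"TE(x -> y) = {transfer_entropy(x, y):.2f} bits")  # close to 1 bit
```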

  12. High Resolution Muon Computed Tomography at Neutrino Beam Facilities

    CERN Document Server

    Suerfu, Burkhant

    2015-01-01

    X-ray computed tomography (CT) has an indispensable role in constructing 3D images of objects made from light materials. However, limited by absorption coefficients, X-rays cannot deeply penetrate materials such as copper and lead. Here we show via simulation that muon beams can provide high resolution tomographic images of dense objects and of structures within the interior of dense objects. The effects of resolution broadening from multiple scattering diminish with increasing muon momentum. As the momentum of the muon increases, the contrast of the image goes down and therefore requires higher resolution in the muon spectrometer to resolve the image. The variance of the measured muon momentum reaches a minimum and then increases with increasing muon momentum. The impact of the increase in variance is to require a higher integrated muon flux to reduce fluctuations. The flux requirements and level of contrast needed for high resolution muon computed tomography are well matched to the muons produced in the pio...

  13. High performance computing and communications panel report

    Energy Technology Data Exchange (ETDEWEB)

    1992-12-01

    In FY92, a presidential initiative entitled High Performance Computing and Communications (HPCC) was launched, aimed at securing U.S. preeminence in high performance computing and related communication technologies. The stated goal of the initiative is threefold: extend U.S. technological leadership in high performance computing and computer communications; provide wide dissemination and application of the technologies; and spur gains in U.S. productivity and industrial competitiveness, all within the context of the mission needs of federal agencies. Because of the importance of the HPCC program to the national well-being, especially its potential implication for industrial competitiveness, the Assistant to the President for Science and Technology has asked that the President's Council of Advisors in Science and Technology (PCAST) establish a panel to advise PCAST on the strengths and weaknesses of the HPCC program. The report presents a program analysis based on strategy, balance, management, and vision. Both constructive recommendations for program improvement and positive reinforcement of successful program elements are contained within the report.

  14. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. The independence comes from the nature of such algorithms, since images, stereopairs or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human intervention. Photogrammetric workstations can perform tie point measurements, DTM calculations, orthophoto construction, mosaicing and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing computations that would take several days to several hours. Modern trends in computer technology show an increasing number of CPU cores in workstations, increasing speeds in local networks and, as a result, dropping prices for supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in a DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since the processing of huge amounts of large raster images is needed.

  15. High-power LED package requirements

    Science.gov (United States)

    Wall, Frank; Martin, Paul S.; Harbers, Gerard

    2004-01-01

    Power LEDs have evolved from simple indicators into illumination devices. For general lighting applications, where the objective is to light up an area, white LED arrays have been utilized to serve that function. Cost constraints will soon drive the industry to provide a discrete lighting solution. Early on, that will mean increasing the power densities while quantum efficiencies are addressed. For applications such as automotive headlamps & projection, where light needs to be tightly collimated, or controlled, arrays of die or LEDs will not be able to satisfy the requirements & limitations defined by etendue. Ultimately, whether a luminaire requires a small source with high luminance, or light spread over a general area, economics will force the evolution of the illumination LED into a compact discrete high power package. How the customer interfaces with this new package should be an important element considered early on in the design cycle. If an LED footprint of adequate size is not provided, it may prove impossible for the customer, or end user, to get rid of the heat in a manner sufficient to prevent premature LED light output degradation. Therefore it is critical, for maintaining expected LED lifetime & light output, that thermal performance parameters be defined, by design, at the system level, which includes heat sinking methods & interface materials or methodology.

  16. Federal Plan for High-End Computing. Report of the High-End Computing Revitalization Task Force (HECRTF)

    Science.gov (United States)

    2004-07-01

    and other energy feedstock more efficiently. Signal Transduction Pathways Develop atomic-level computational models and simulations of complex...biomolecules to explain and predict cell signal pathways and their disrupters. Yield understanding of initiation of cancer and other diseases and their...calculations also introduces a requirement for a high degree of internodal connectivity (high bisection bandwidth). These needs cannot be met simply by

  17. PREFACE: High Performance Computing Symposium 2011

    Science.gov (United States)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  18. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  19. Numerics of High Performance Computers and Benchmark Evaluation of Distributed Memory Computers

    Directory of Open Access Journals (Sweden)

    H. S. Krishna

    2004-07-01

    Full Text Available The internal representation of numerical data and the speed with which they can be manipulated to generate the desired result, through efficient utilisation of the central processing unit, memory, and communication links, are essential aspects of all high performance scientific computation. Machine parameters, in particular, reveal the accuracy and error bounds of computation required for performance tuning of codes. This paper reports the diagnosis of machine parameters, the measurement of the computing power of several workstations, serial and parallel computers, and a component-wise test procedure for distributed memory computers. The hierarchical memory structure is illustrated by block copying and unrolling techniques. Locality of reference for cache reuse of data is amply demonstrated by fast Fourier transform codes. The cache and register-blocking technique results in their optimum utilisation, with a consequent gain in throughput during vector-matrix operations. Implementation of these memory management techniques reduces cache inefficiency loss, which is known to be proportional to the number of processors. From the measurement of intrinsic parameters and from an application benchmark of a multi-block Euler code test run on the Linux clusters ANUP16, HPC22 and HPC64, it has been found that ANUP16 is suitable for problems that exhibit fine-grained parallelism. The delivered performance of ANUP16 is of immense utility for developing high-end PC clusters like HPC64 and customised parallel computers, with the added advantage of speed and a high degree of parallelism.
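
    As a rough, generic sketch of the blocking (tiling) idea measured in the paper, the snippet below multiplies matrices block by block so that each working set stays small enough for cache reuse; the block and matrix sizes are arbitrary choices, not values from the study.

```python
import numpy as np


def blocked_matmul(a, b, block=64):
    """Compute a @ b by iterating over cache-sized blocks to improve locality of reference."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    c = np.zeros((n, m))
    for i in range(0, n, block):
        for j in range(0, m, block):
            for p in range(0, k, block):
                # each partial product touches only three small blocks,
                # which keeps the data resident in cache between uses
                c[i:i + block, j:j + block] += (
                    a[i:i + block, p:p + block] @ b[p:p + block, j:j + block]
                )
    return c


if __name__ == "__main__":
    a, b = np.random.rand(512, 512), np.random.rand(512, 512)
    assert np.allclose(blocked_matmul(a, b), a @ b)
```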

  20. Monitoring SLAC High Performance UNIX Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. Monitoring such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
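
    The report does not reproduce its schema or scripts; purely as a hypothetical sketch of the approach, the snippet below reads the monitoring XML that a Ganglia gmond daemon typically serves over TCP (port 8649 is the usual default) and stores the metrics in MySQL via mysql-connector-python. The host, credentials, table layout, and assumed XML structure are all illustrative.

```python
import socket
import xml.etree.ElementTree as ET

import mysql.connector  # pip install mysql-connector-python


def read_gmond_xml(host="localhost", port=8649):
    """Read the monitoring XML that gmond serves on a plain TCP socket (assumed default port)."""
    chunks = []
    with socket.create_connection((host, port)) as sock:
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks)


def store_metrics(xml_data):
    """Parse HOST/METRIC elements (assumed Ganglia XML layout) and insert them into MySQL."""
    conn = mysql.connector.connect(user="ganglia", password="secret",
                                   database="monitoring")      # hypothetical credentials
    cur = conn.cursor()
    cur.execute("""CREATE TABLE IF NOT EXISTS metrics (
                       host VARCHAR(255), name VARCHAR(255), value VARCHAR(64),
                       ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP)""")
    root = ET.fromstring(xml_data)
    for host in root.iter("HOST"):
        for metric in host.iter("METRIC"):
            cur.execute("INSERT INTO metrics (host, name, value) VALUES (%s, %s, %s)",
                        (host.get("NAME"), metric.get("NAME"), metric.get("VAL")))
    conn.commit()
    conn.close()


if __name__ == "__main__":
    store_metrics(read_gmond_xml())
```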

  1. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2014-01-01

    Full Text Available Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and lower absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of making simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
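
    As a generic illustration of the Monte Carlo style of computation surveyed here (not any specific HEP code), the sketch below estimates a one-dimensional integral by random sampling and reports the usual 1-sigma statistical error; the integrand is an arbitrary stand-in.

```python
import numpy as np


def mc_integrate(f, a, b, n_samples=1_000_000, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [a, b], with its statistical error."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(a, b, n_samples)
    y = f(x)
    estimate = (b - a) * y.mean()
    error = (b - a) * y.std(ddof=1) / np.sqrt(n_samples)   # error shrinks as 1/sqrt(N)
    return estimate, error


if __name__ == "__main__":
    value, err = mc_integrate(np.sin, 0.0, np.pi)   # exact value is 2
    print(f"estimate = {value:.5f} +/- {err:.5f}")
```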

  2. A High-Performance Communication Service for Parallel Servo Computing

    Directory of Open Access Journals (Sweden)

    Cheng Xin

    2010-11-01

    Full Text Available The complexity of servo-control algorithms in multi-dimensional, ultra-precise stage applications has made multi-processor parallel computing technology necessary. Considering the specific communication requirements of parallel servo computing, we propose a communication service scheme based on the VME bus, which provides high-performance data transmission and precise synchronization-trigger support for the processors involved. The communication service is implemented on both the standard VME bus and a user-defined Internal Bus (IB), and can be redefined online. This paper introduces the parallel servo computing architecture and communication service, describes the structure and implementation details of each module in the service, and finally provides a data transmission model and analysis. Experimental results show that the communication service can provide high-speed data transmission with sub-nanosecond-level error of transmission latency, and synchronous triggering with nanosecond-level synchronization error. Moreover, the performance of the communication service is not affected by an increasing number of processors.

  3. Optical interconnection networks for high-performance computing systems.

    Science.gov (United States)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  4. High performance computing for classic gravitational N-body systems

    CERN Document Server

    Capuzzo-Dolcetta, Roberto

    2009-01-01

    The role of gravity is crucial in astrophysics. It determines the evolution of any system, over an enormous range of time and space scales. Astronomical stellar systems, composed of N interacting bodies, are examples of self-gravitating systems, usually treatable with the aid of Newtonian gravity except in particular cases. In this note I briefly discuss some of the open problems in the dynamical study of classic self-gravitating N-body systems over the astronomical range of N. I also point out how modern research in this field necessarily requires heavy use of large scale computations, due to the simultaneous requirements of high precision and high computational speed.
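
    To make concrete why the cost grows steeply with N, here is a hypothetical direct-summation sketch (not code from the note) that evaluates the Newtonian accelerations; every body interacts with every other body, which is the O(N^2) work per time step that drives the need for large scale computation.

```python
import numpy as np


def accelerations(positions, masses, softening=1e-3, G=1.0):
    """Direct-summation Newtonian accelerations for N bodies; the pair loop costs O(N**2)."""
    dr = positions[np.newaxis, :, :] - positions[:, np.newaxis, :]   # r_j - r_i for all pairs
    dist2 = (dr ** 2).sum(axis=-1) + softening ** 2                  # softened squared distances
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                                    # no self-interaction
    weights = masses[np.newaxis, :, np.newaxis] * inv_d3[:, :, np.newaxis]
    return G * (dr * weights).sum(axis=1)      # a_i = G * sum_j m_j (r_j - r_i) / |r_ij|^3


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 1000
    acc = accelerations(rng.normal(size=(n, 3)), np.ones(n))
    print(acc.shape)   # (1000, 3); doubling N quadruples the pairwise work
```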

  5. Software Requirements for a System to Compute Mean Failure Cost

    Energy Technology Data Exchange (ETDEWEB)

    Aissa, Anis Ben [University of Tunis, Belvedere, Tunisia; Abercrombie, Robert K [ORNL; Sheldon, Frederick T [ORNL; Mili, Ali [New Jersey Insitute of Technology

    2010-01-01

    In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to incur as a result of security breakdowns. We also demonstrated this infrastructure through the results of security breakdowns for an e-commerce case. In this paper, we illustrate this infrastructure with an application that supports the computation of the Mean Failure Cost (MFC) for each stakeholder.
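
    The record does not spell out the calculation, but in the authors' related publications the Mean Failure Cost is typically obtained by chaining a stakes matrix, a dependency matrix, an impact matrix, and a threat-probability vector. The sketch below illustrates that style of computation with made-up dimensions and values; it is not the application described in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical dimensions: 3 stakeholders, 4 security requirements, 5 components, 2 threats.
stakes = rng.random((3, 4))       # ST: cost to each stakeholder if a requirement fails
dependency = rng.random((4, 5))   # DP: P(requirement fails | component is compromised)
impact = rng.random((5, 2))       # IM: P(component is compromised | threat materialises)
threats = np.array([0.05, 0.01])  # PT: probability that each threat materialises per unit time

# Mean failure cost per stakeholder: expected loss per unit of operation time.
mfc = stakes @ dependency @ impact @ threats
print(mfc)   # one expected-loss figure for each of the 3 stakeholders
```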

  6. System Requirements Analysis for a Computer-based Procedure in a Research Reactor Facility

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jaek Wan; Jang, Gwi Sook; Seo, Sang Moon; Shin, Sung Ki [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    This can address many of the routine problems related to human error in the use of conventional, hard-copy operating procedures. An operation support system is also required in a research reactor. A well-made CBP can address the staffing issues of a research reactor and reduce human errors by minimizing the operator's routine tasks. A CBP for a research reactor has not yet been proposed. CBPs developed for nuclear power plants have powerful and varied technical functions to cover complicated plant operation situations, but many of these functions may not be required for a research reactor. Thus, it is not reasonable to apply such a CBP to a research reactor directly, and customizing it is not cost-effective. Therefore, a compact CBP should be developed for a research reactor. This paper introduces the high-level requirements derived from the system requirements analysis activity as the first stage of system implementation. Operation support tools are under consideration for application to research reactors. In particular, as part of the full digitalization of the main control room, application of a computer-based procedure system has been required as part of the man-machine interface system because it affects the operator staffing and human errors of a research reactor. To establish computer-based procedure system requirements for a research reactor, this paper addresses international standards and previous practices at nuclear power plants.

  7. 78 FR 47015 - Software Requirement Specifications for Digital Computer Software Used in Safety Systems of...

    Science.gov (United States)

    2013-08-02

    ... COMMISSION Software Requirement Specifications for Digital Computer Software Used in Safety Systems of... 1 of RG 1.172, ``Software Requirement Specifications for Digital Computer Software used in Safety... (IEEE) Standard (Std.) 830-1998, ``IEEE Recommended Practice for Software Requirements Specifications...

  8. National Ignition Facility sub-system design requirements computer system SSDR 1.5.1

    Energy Technology Data Exchange (ETDEWEB)

    Spann, J.; VanArsdall, P.; Bliss, E.

    1996-09-05

    This System Design Requirement document establishes the performance, design, development, and test requirements for the Computer System (WBS 1.5.1), which is part of the NIF Integrated Computer Control System (ICCS). This document responds directly to the requirements detailed in the ICCS document (WBS 1.5), which is the document directly above it.

  9. Computer vision for high content screening.

    Science.gov (United States)

    Kraus, Oren Z; Frey, Brendan J

    2016-01-01

    High Content Screening (HCS) technologies that combine automated fluorescence microscopy with high throughput biotechnology have become powerful systems for studying cell biology and drug screening. These systems can produce more than 100 000 images per day, making their success dependent on automated image analysis. In this review, we describe the steps involved in quantifying microscopy images and different approaches for each step. Typically, individual cells are segmented from the background using a segmentation algorithm. Each cell is then quantified by extracting numerical features, such as area and intensity measurements. As these feature representations are typically high dimensional (>500), modern machine learning algorithms are used to classify, cluster and visualize cells in HCS experiments. Machine learning algorithms that learn feature representations, in addition to the classification or clustering task, have recently advanced the state of the art on several benchmarking tasks in the computer vision community. These techniques have also recently been applied to HCS image analysis.
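
    To make the described steps concrete, here is a minimal hypothetical pipeline in the spirit of the review: cells are segmented with an Otsu threshold, simple per-cell features are extracted, and a standard classifier is trained on them. scikit-image and scikit-learn stand in generically for the many tools the review covers, and the images are synthetic placeholders.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from sklearn.ensemble import RandomForestClassifier


def cell_features(image):
    """Segment bright objects from background and extract simple per-object features."""
    mask = image > threshold_otsu(image)
    labels = label(mask)
    return np.array([[r.area, r.mean_intensity, r.eccentricity]
                     for r in regionprops(labels, intensity_image=image)])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img_a = rng.random((256, 256))          # toy stand-ins for two imaging conditions
    img_b = rng.random((256, 256)) ** 2
    feats_a, feats_b = cell_features(img_a), cell_features(img_b)
    X = np.vstack([feats_a, feats_b])
    y = np.concatenate([np.zeros(len(feats_a)), np.ones(len(feats_b))])
    clf = RandomForestClassifier(n_estimators=50).fit(X, y)   # classify cells by condition
    print("training accuracy:", clf.score(X, y))
```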

  10. The path toward HEP High Performance Computing

    Science.gov (United States)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from
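
    As a loose, hypothetical illustration of the "vectors of particles" scheduling idea (not the actual Geant-V framework), the sketch below groups particle records into fixed-size baskets and dispatches them to a pool of workers; the per-basket physics is a placeholder.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np


def transport_basket(energies):
    """Placeholder for propagating one basket of particles through a geometry step."""
    return float(np.sum(np.log1p(energies)))


def make_baskets(energies, basket_size=256):
    """Group individual particle records into fixed-size baskets (vectors)."""
    return [energies[i:i + basket_size] for i in range(0, len(energies), basket_size)]


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    particles = rng.exponential(scale=10.0, size=100_000)   # toy energy spectrum
    baskets = make_baskets(particles)
    with ProcessPoolExecutor() as pool:       # an arbitrary number of computing resources
        results = list(pool.map(transport_basket, baskets))
    print(f"{len(baskets)} baskets processed, checksum = {sum(results):.2f}")
```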

  11. Computing High Accuracy Power Spectra with Pico

    CERN Document Server

    Fendt, William A

    2007-01-01

    This paper presents the second release of Pico (Parameters for the Impatient COsmologist). Pico is a general-purpose machine learning code which we have applied to computing the CMB power spectra and the WMAP likelihood. For this release, we have made improvements to the algorithm as well as to the data sets used to train Pico, leading to a significant improvement in accuracy. For the 9-parameter nonflat case presented here, Pico can on average compute the TT, TE and EE spectra to better than 1% of cosmic standard deviation for nearly all $\ell$ values over a large region of parameter space. Performing a cosmological parameter analysis of current CMB and large scale structure data, we show that these power spectra give very accurate 1- and 2-dimensional parameter posteriors. We have extended Pico to allow computation of the tensor power spectrum and the matter transfer function. Pico runs about 1500 times faster than CAMB at the default accuracy and about 250,000 times faster at high accuracy. Training Pico can be...
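
    Pico's own training scheme is not reproduced here. As a hedged illustration of the general emulation idea (train a fast regressor on precomputed outputs, then evaluate it instead of the slow code), the sketch below fits a polynomial regression to a toy mapping from two parameters to a small "spectrum".

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures


def slow_model(theta):
    """Stand-in for an expensive Boltzmann-code call returning a short 'spectrum'."""
    ells = np.arange(2, 52)
    return theta[0] * np.sin(ells / 10.0) + theta[1] * np.exp(-ells / 25.0)


rng = np.random.default_rng(0)
train_params = rng.uniform(-1, 1, size=(500, 2))              # training set in parameter space
train_spectra = np.array([slow_model(t) for t in train_params])

# Fit the emulator once, then use it in place of the slow model.
emulator = make_pipeline(PolynomialFeatures(degree=3), Ridge(alpha=1e-6))
emulator.fit(train_params, train_spectra)

test = np.array([0.3, -0.7])
err = np.max(np.abs(emulator.predict(test[None, :])[0] - slow_model(test)))
print("worst-case emulation error on this test point:", err)
```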

  12. Minimizing makespan in flowshops with pallet requirements: computational complexity

    NARCIS (Netherlands)

    M. Wang (Michael); S. Sethi (Suresh); C. Sriskandarajah (Chelliah); S.L. van de Velde (Steef)

    1997-01-01

    Studies makespan minimization in flowshops with pallet requirements. Importance of pallets in automated or flexible manufacturing environments; Mounting and dismounting of work pieces; Planning problems involved.

  13. Moving the mountain: analysis of the effort required to transform comparative anatomy into computable anatomy.

    Science.gov (United States)

    Dahdul, Wasila; Dececchi, T Alexander; Ibrahim, Nizar; Lapp, Hilmar; Mabee, Paula

    2015-01-01

    The diverse phenotypes of living organisms have been described for centuries, and though they may be digitized, they are not readily available in a computable form. Using over 100 morphological studies, the Phenoscape project has demonstrated that by annotating characters with community ontology terms, links between novel species anatomy and the genes that may underlie them can be made. But given the enormity of the legacy literature, how can this largely unexploited wealth of descriptive data be rendered amenable to large-scale computation? To identify the bottlenecks, we quantified the time involved in the major aspects of phenotype curation as we annotated characters from the vertebrate phylogenetic systematics literature. This involves attaching fully computable logical expressions consisting of ontology terms to the descriptions in character-by-taxon matrices. The workflow consists of: (i) data preparation, (ii) phenotype annotation, (iii) ontology development and (iv) curation team discussions and software development feedback. Our results showed that the completion of this work required two person-years by a team of two post-docs, a lead data curator, and students. Manual data preparation required close to 13% of the effort. This part in particular could be reduced substantially with better community data practices, such as depositing fully populated matrices in public repositories. Phenotype annotation required ∼40% of the effort. We are working to make this more efficient with Natural Language Processing tools. Ontology development (40%), however, remains a highly manual task requiring domain (anatomical) expertise and use of specialized software. The large overhead required for data preparation and ontology development contributed to a low annotation rate of approximately two characters per hour, compared with 14 characters per hour when activity was restricted to character annotation. Unlocking the potential of the vast stores of morphological

  14. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and to HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  16. Ultra-high resolution computed tomography imaging

    Energy Technology Data Exchange (ETDEWEB)

    Paulus, Michael J. (Knoxville, TN); Sari-Sarraf, Hamed (Knoxville, TN); Tobin, Jr., Kenneth William (Harriman, TN); Gleason, Shaun S. (Knoxville, TN); Thomas, Jr., Clarence E. (Knoxville, TN)

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180°, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.

  17. High-Precision Computation: Mathematical Physics and Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, D. H.; Barrio, R.; Borwein, J. M.

    2010-04-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
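
    As a small, hedged illustration of why precision beyond IEEE double matters in some of these applications (our example, not one from the paper), the snippet below evaluates a classically cancellation-prone quantity, exp(-30) via its Taylor series, in double precision and then in 50-digit arithmetic with the mpmath package.

```python
import math

from mpmath import mp, mpf, exp as mp_exp


def exp_series(x, one, terms=200):
    """Truncated Taylor series for exp(x); `one` fixes the working precision of the arithmetic."""
    total = term = one
    for n in range(1, terms):
        term = term * x / n
        total = total + term
    return total


x = -30
print("double-precision series:", exp_series(float(x), 1.0))   # ruined by catastrophic cancellation
print("math.exp reference     :", math.exp(x))

mp.dps = 50                                                     # 50 significant digits
print("50-digit series        :", exp_series(mpf(x), mpf(1)))
print("mpmath reference       :", mp_exp(mpf(x)))
```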

  18. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  20. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64-processor/32-node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  1. Large-scale computation at PSI scientific achievements and future requirements

    Energy Technology Data Exchange (ETDEWEB)

    Adelmann, A.; Markushin, V

    2008-11-15

    and Networking' (SNSP-HPCN) is discussing this complex. Scientific results made possible by PSI's engagement at CSCS (named Horizon) are summarised, and PSI's future high-performance computing requirements are evaluated. The data collected show the current situation, and a five-year extrapolation of the users' needs with respect to HPC resources is made. In consequence, this report can serve as a basis for future strategic decisions on an HPC road-map for PSI, which does not yet exist. PSI's institutional HPC area started, hardware-wise, approximately in 1999 with the assembly of a 32-processor Linux cluster called Merlin. Merlin was upgraded several times, most recently in 2007. The Merlin cluster at PSI is used for small-scale parallel jobs and is the only general-purpose computing system at PSI. Several dedicated small-scale clusters followed the Merlin scheme. Many of the clusters are used to analyse data from experiments at PSI or CERN, because dedicated clusters are most efficient. The intellectual and financial involvement in the procurement (including a machine update in 2007) results in a PSI share of 25% of the available computing resources at CSCS. The (over)usage of available computing resources by PSI scientists is demonstrated: we actually get more computing cycles than we have paid for. The reason is the fair-share policy implemented on the Horizon machine, which allows us to get cycles, with a low priority, even when our bi-monthly share is used up. Five important observations can be drawn from the analysis of the scientific output and the survey of future requirements of the main PSI HPC users: (1) High Performance Computing is a main pillar in many important PSI research areas; (2) there is a shortfall on the order of 10 times the current computing resources (measured in available core-hours per year); (3) there is a trend to use on the order of 600 processors per average production run; (4) the disk and tape storage growth

  2. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Universita` di Firenze (Italy); Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Universita` di Firenze (Italy); De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Universita` di Firenze (Italy); Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Universita` di Firenze (Italy); Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Universita` di Firenze (Italy)

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported to the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols, without particular problems for routine use. The achievable speed-up factors allow assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden to compensate for effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.
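
    As a toy stand-in for the kind of iterative reconstruction described (not the authors' SPET system model), the hypothetical sketch below solves a small linear projection model with a conjugate-gradient routine; a real reconstruction adds resolution, attenuation and scatter modelling and is vastly larger, which is what motivates the parallel platform.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)

n_pixels, n_rays = 400, 600                    # toy 20x20 image, toy projection count
A = rng.random((n_rays, n_pixels)) * (rng.random((n_rays, n_pixels)) < 0.05)  # sparse-ish system matrix
x_true = rng.random(n_pixels)
projections = A @ x_true                       # simulated acquisition

# Solve the normal equations A^T A x = A^T p with conjugate gradients.
normal_op = LinearOperator((n_pixels, n_pixels), matvec=lambda v: A.T @ (A @ v))
x_rec, info = cg(normal_op, A.T @ projections, maxiter=200)    # info == 0 means converged

rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print("convergence flag:", info, " relative error:", rel_err)
```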

  3. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows a comparison of the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  4. Computer-Aided Identification and Validation of Privacy Requirements

    Directory of Open Access Journals (Sweden)

    Rene Meis

    2016-05-01

    Full Text Available Privacy is a software quality that is closely related to security. The main difference is that security properties aim at the protection of assets that are crucial for the considered system, and privacy aims at the protection of personal data that are processed by the system. The identification of privacy protection needs in complex systems is a hard and error prone task. Stakeholders whose personal data are processed might be overlooked, or the sensitivity and the need of protection of the personal data might be underestimated. The later personal data and the needs to protect them are identified during the development process, the more expensive it is to fix these issues, because the needed changes of the system-to-be often affect many functionalities. In this paper, we present a systematic method to identify the privacy needs of a software system based on a set of functional requirements by extending the problem-based privacy analysis (ProPAn method. Our method is tool-supported and automated where possible to reduce the effort that has to be spent for the privacy analysis, which is especially important when considering complex systems. The contribution of this paper is a semi-automatic method to identify the relevant privacy requirements for a software-to-be based on its functional requirements. The considered privacy requirements address all dimensions of privacy that are relevant for software development. As our method is solely based on the functional requirements of the system to be, we enable users of our method to identify the privacy protection needs that have to be addressed by the software-to-be at an early stage of the development. As initial evaluation of our method, we show its applicability on a small electronic health system scenario.

  5. 32 CFR 310.52 - Computer matching publication and review requirements.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 2 2010-07-01 2010-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  6. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  7. Slovak High School Students' Attitudes toward Computers

    Science.gov (United States)

    Kubiatko, Milan; Halakova, Zuzana; Nagyova, Sona; Nagy, Tibor

    2011-01-01

    The pervasive involvement of information and communication technologies and computers in our daily lives influences changes of attitude toward computers. We focused on finding these ecological effects in the differences in computer attitudes as a function of gender and age. A questionnaire with 34 Likert-type items was used in our research. The…

  8. High speed and large scale scientific computing

    CERN Document Server

    Gentzsch, W; Joubert, GR

    2010-01-01

    Over the years parallel technologies have completely transformed main stream computing. This book deals with the issues related to the area of cloud computing and discusses developments in grids, applications and information processing, as well as e-science. It is suitable for computer scientists, IT engineers and IT managers.

  9. Scout: high-performance heterogeneous computing made simple

    Energy Technology Data Exchange (ETDEWEB)

    Jablin, James [Los Alamos National Laboratory; Mc Cormick, Patrick [Los Alamos National Laboratory; Herlihy, Maurice [BROWN UNIV.

    2011-01-26

    Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.

  10. Requirements for very high energy accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Richter, B.

    1985-04-01

    In this introductory paper at the second Workshop on Laser Acceleration, my main goal is to set out what I believe to be the energy and luminosity requirements of the machines of the future. These specifications are independent of the technique of acceleration. But, before getting to these technical questions, I will briefly review where we are in particle physics, for it is the large number of unanswered questions in physics that motivates the search for effective accelerators.

  11. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  12. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  13. Benchmarking: More Aspects of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Ravindrudu, Rahul [Iowa State Univ., Ames, IA (United States)

    2004-01-01

    The original HPL algorithm makes the assumption that all data can fit entirely in main memory. This assumption will obviously give good performance due to the absence of disk I/O. However, not all applications can fit their entire data in memory. Applications which require a fair amount of I/O to move data between main memory and secondary storage are more indicative of the usage of a Massively Parallel Processor (MPP) system. Given this scenario, a well designed I/O architecture will play a significant part in the performance of the MPP system on regular jobs, and this is not represented in the current benchmark. The modified HPL algorithm is hoped to be a step toward filling this void. The most important factor in the performance of out-of-core algorithms is the actual I/O operations performed and their efficiency in transferring data between main memory and disk. Various methods for performing I/O operations were introduced in the report. The I/O method to use depends on the design of the out-of-core algorithm; conversely, the performance of the out-of-core algorithm is affected by the choice of I/O operations. This implies that good performance is achieved when I/O efficiency is closely tied to the out-of-core algorithm, so out-of-core algorithms must be designed with I/O in mind from the start. It is easily observed in the timings for the various plots that I/O plays a significant part in the overall execution time. This leads to an important conclusion: retrofitting an existing code may not be the best choice. The right-looking algorithm selected for the LU factorization is a recursive algorithm and performs well when the entire dataset is in memory. At each stage of the loop the entire trailing submatrix is read into memory panel by panel. This gives a polynomial number of I/O reads and writes. If the left-looking algorithm were selected for the main loop, the number of I/O operations involved would be linear in the number of columns. This is due to the data access
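
    As a rough sketch of the out-of-core, panel-by-panel access pattern discussed here (not the modified HPL code itself), the snippet below streams a matrix that lives on disk through memory one block of columns at a time using numpy's memmap; the per-panel work is a placeholder.

```python
import os
import tempfile

import numpy as np

n, panel_width = 2048, 256                     # matrix order and panel (column-block) width

# Create a matrix on disk that we treat as too large to hold in memory at once.
path = os.path.join(tempfile.mkdtemp(), "matrix.dat")
disk_a = np.memmap(path, dtype=np.float64, mode="w+", shape=(n, n))
disk_a[:] = np.random.rand(n, n)
disk_a.flush()

# Out-of-core pass: read one panel at a time, do the in-memory work, write it back.
col_norms = np.empty(n)
for j in range(0, n, panel_width):
    panel = np.array(disk_a[:, j:j + panel_width])          # explicit read of one panel into RAM
    col_norms[j:j + panel_width] = np.linalg.norm(panel, axis=0)
    disk_a[:, j:j + panel_width] = panel / col_norms[j:j + panel_width]   # write the scaled panel back

print("processed", n // panel_width, "panels; largest column norm:", col_norms.max())
```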

  14. Invariance in visual object recognition requires training: a computational argument

    Directory of Open Access Journals (Sweden)

    Robbe L. T Goris

    2010-05-01

    Full Text Available Visual object recognition is remarkably accurate and robust, yet its neurophysiological underpinnings are poorly understood. Single cells in brain regions thought to underlie object recognition code for many stimulus aspects, which poses a limit on their invariance. Combining the responses of multiple non-invariant neurons via weighted linear summation, i.e. population coding, has been suggested to offer an optimal decoding strategy able to achieve invariant object recognition. However, because object identification is essentially parameter optimization in this model, the characteristics of the identification task it is trained to perform are critically important. If this task does not require invariance, a neural population code is inherently more selective but less tolerant than the single neurons constituting the population. Nevertheless, tolerance can be learned, provided that it is trained for, at the cost of selectivity. We argue that this model is the appropriate null hypothesis to compare behavioural results with, and conclude that it may explain several experimental findings.

  15. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java can now be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  16. Efficient High Performance Computing on Heterogeneous Platforms

    NARCIS (Netherlands)

    Shen, J.

    2015-01-01

    Heterogeneous platforms are mixes of different processing units in a compute node (e.g., CPUs+GPUs, CPU+MICs) or a chip package (e.g., APUs). This type of platforms keeps gaining popularity in various computer systems ranging from supercomputers to mobile devices. In this context, improving their

  18. Research on Computer Aided Innovation Model of Weapon Equipment Requirement Demonstration

    Science.gov (United States)

    Li, Yong; Guo, Qisheng; Wang, Rui; Li, Liang

    Firstly, in order to overcome the shortcomings of using AD or TRIZ alone and to solve the problems currently existing in weapon equipment requirement demonstration, the paper constructs a method system for weapon equipment requirement demonstration combining QFD, AD, TRIZ, and FA. Then, we construct a CAI model framework for weapon equipment requirement demonstration, which includes a requirement decomposition model, a requirement mapping model, and a requirement plan optimization model. Finally, we construct the computer-aided innovation model of weapon equipment requirement demonstration and develop CAI software for equipment requirement demonstration.

  19. Opportunities and challenges of high-performance computing in chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Guest, M.F.; Kendall, R.A.; Nichols, J.A. [eds.] [and others

    1995-06-01

    The field of high-performance computing is developing at an extremely rapid pace. Massively parallel computers offering orders of magnitude increase in performance are under development by all the major computer vendors. Many sites now have production facilities that include massively parallel hardware. Molecular modeling methodologies (both quantum and classical) are also advancing at a brisk pace. The transition of molecular modeling software to a massively parallel computing environment offers many exciting opportunities, such as the accurate treatment of larger, more complex molecular systems in routine fashion, and a viable, cost-effective route to study physical, biological, and chemical 'grand challenge' problems that are impractical on traditional vector supercomputers. This will have a broad effect on all areas of basic chemical science at academic research institutions and chemical, petroleum, and pharmaceutical industries in the United States, as well as chemical waste and environmental remediation processes. But, this transition also poses significant challenges: architectural issues (SIMD, MIMD, local memory, global memory, etc.) remain poorly understood and software development tools (compilers, debuggers, performance monitors, etc.) are not well developed. In addition, researchers that understand and wish to pursue the benefits offered by massively parallel computing are often hindered by lack of expertise, hardware, and/or information at their site. A conference and workshop organized to focus on these issues was held at the National Institute of Health, Bethesda, Maryland (February 1993). This report is the culmination of the organized workshop. The main conclusion: a drastic acceleration in the present rate of progress is required for the chemistry community to be positioned to exploit fully the emerging class of Teraflop computers, even allowing for the significant work to date by the community in developing software for parallel architectures.

  20. High-performance Scientific Computing using Parallel Computing to Improve Performance Optimization Problems

    Directory of Open Access Journals (Sweden)

    Florica Novăcescu

    2011-10-01

    Full Text Available HPC (High Performance Computing) has become essential for accelerating innovation and for assisting companies in creating new inventions, better models and more reliable products, as well as in obtaining processes and services at low cost. This paper focuses in particular on describing the field of high performance scientific computing, parallel computing, scientific computing, parallel computers, and trends in the HPC field, which together point toward the realization of a high performance computational society. The practical part of the work is an example of using an HPC tool to accelerate the solution of an electrostatic optimization problem with the Parallel Computing Toolbox, which allows computational and data-intensive problems to be solved using MATLAB and Simulink on multicore and multiprocessor computers.
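
    The paper's example relies on MATLAB's Parallel Computing Toolbox; purely as an analogous sketch in Python (not the authors' code), the snippet below parallelizes the evaluation of a toy objective over a parameter grid, the usual pattern for accelerating such optimization problems when the evaluations are independent.

```python
from multiprocessing import Pool

import numpy as np


def objective(params):
    """Toy stand-in for an expensive electrostatic objective function."""
    x, y = params
    return (x - 1.2) ** 2 + (y + 0.5) ** 2 + 0.1 * np.sin(5 * x * y)


if __name__ == "__main__":
    grid = [(x, y) for x in np.linspace(-2, 2, 200) for y in np.linspace(-2, 2, 200)]
    with Pool() as pool:                  # independent evaluations spread across CPU cores
        values = pool.map(objective, grid)
    best = grid[int(np.argmin(values))]
    print("best parameters on the grid:", best)
```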

  1. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness on the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thi...

  2. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  3. High Fidelity Adiabatic Quantum Computation via Dynamical Decoupling

    CERN Document Server

    Quiroz, Gregory

    2012-01-01

    We introduce high-order dynamical decoupling strategies for open system adiabatic quantum computation. Our numerical results demonstrate that a judicious choice of high-order dynamical decoupling method, in conjunction with an encoding which allows computation to proceed alongside decoupling, can dramatically enhance the fidelity of adiabatic quantum computation in spite of decoherence.

  4. Large-scale computation at PSI scientific achievements and future requirements

    Energy Technology Data Exchange (ETDEWEB)

    Adelmann, A.; Markushin, V

    2008-11-15

    and Networking' (SNSP-HPCN) is discussing this complex. Scientific results made possible by PSI's engagement at CSCS (on the machine named Horizon) are summarised, and PSI's future high-performance computing requirements are evaluated. The data collected show the current situation, and a 5-year extrapolation of the users' needs with respect to HPC resources is made. This report can therefore serve as a basis for future strategic decisions on an HPC road-map for PSI, which does not yet exist. PSI's institutional HPC activity started, hardware-wise, around 1999 with the assembly of a 32-processor LINUX cluster called Merlin. Merlin was upgraded several times, most recently in 2007. The Merlin cluster at PSI is used for small-scale parallel jobs and is the only general-purpose computing system at PSI. Several dedicated small-scale clusters followed the Merlin scheme. Many of these clusters are used to analyse data from experiments at PSI or CERN, because dedicated clusters are most efficient for this purpose. PSI's intellectual and financial involvement in the procurement (including a machine update in 2007) corresponds to a PSI share of 25% of the available computing resources at CSCS. The (over)usage of the available computing resources by PSI scientists is demonstrated: we actually obtain more computing cycles than we have paid for. The reason is the fair-share policy implemented on the Horizon machine, which allows us to obtain cycles, at low priority, even when our bi-monthly share has been used. Five important observations can be drawn from the analysis of the scientific output and the survey of future requirements of the main PSI HPC users: (1) high-performance computing is a main pillar in many important PSI research areas; (2) there is a shortfall of roughly 10 times the current computing resources (measured in available core-hours per year); (3) there is a trend toward using on the order of 600 processors per average production run; (4) the disk and tape storage growth

  5. High-Throughput Neuroimaging-Genetics Computational Infrastructure

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2014-04-01

    Full Text Available Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate and disseminate novel scientific methods, computational resources and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval and aggregation. Computational processing involves the necessary software, hardware and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical and phenotypic data and meta-data. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. A unique feature of this architecture is the Pipeline environment, which integrates data management, processing, transfer and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer’s and Parkinson’s data, we provide several examples of translational applications using this infrastructure.
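
    To make the idea of a portable XML workflow object concrete, here is a minimal sketch that serializes a single processing step in Python. The tag and attribute names are hypothetical illustrations, not the actual LONI Pipeline schema.

```python
# Hedged illustration of a portable XML workflow object: a module's executable,
# inputs, parameters, and outputs serialized so a client could ship the
# description to a remote pipeline server. Tag names below are hypothetical.
import xml.etree.ElementTree as ET

workflow = ET.Element("pipeline", name="brain-volumetrics")
step = ET.SubElement(workflow, "module", executable="/usr/local/bin/segment")
ET.SubElement(step, "input", name="t1_image", value="subject01_T1.nii.gz")
ET.SubElement(step, "parameter", name="atlas", value="MNI152")
ET.SubElement(step, "output", name="labels", value="subject01_labels.nii.gz")

# the client would transmit this string to a remote pipeline server for execution
print(ET.tostring(workflow, encoding="unicode"))
```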

  6. High-throughput neuroimaging-genetics computational infrastructure.

    Science.gov (United States)

    Dinov, Ivo D; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D; Franco, Joseph; Toga, Arthur W

    2014-01-01

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics and associations, which are not readily visible by human exploration of the raw dataset. Result interpretation includes scientific visualization, community validation of findings and reproducible findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize

  7. Next-generation sequencing: big data meets high performance computing.

    Science.gov (United States)

    Schmidt, Bertil; Hildebrandt, Andreas

    2017-02-02

    The progress of next-generation sequencing has a major impact on medical and genomic research. This high-throughput technology can now produce billions of short DNA or RNA fragments in excess of a few terabytes of data in a single run. This leads to massive datasets used by a wide range of applications including personalized cancer treatment and precision medicine. In addition to the hugely increased throughput, the cost of using high-throughput technologies has been dramatically decreasing. A low sequencing cost of around US$1000 per genome has now rendered large population-scale projects feasible. However, to make effective use of the produced data, the design of big data algorithms and their efficient implementation on modern high performance computing systems is required.

  8. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)]

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduce HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  9. II - Detector simulation for the LHC and beyond : how to match computing resources and physics requirements

    CERN Document Server

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this implies modern modelisation tools for geometry and response. Events are busy and characterised by an unprecedented energy scale with hundreds of particles to be traced and high energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings have also to be taken into account. Solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be tried by taking as example the calorimeter simulation.

  10. I - Detector Simulation for the LHC and beyond: how to match computing resources and physics requirements

    CERN Document Server

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this implies modern modelisation tools for geometry and response. Events are busy and characterised by an unprecedented energy scale with hundreds of particles to be traced and high energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings have also to be taken into account. Solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be tried by taking as example the calorimeter simulation.

  11. Analyzing high energy physics data using database computing: Preliminary report

    Science.gov (United States)

    Baden, Andrew; Day, Chris; Grossman, Robert; Lifka, Dave; Lusk, Ewing; May, Edward; Price, Larry

    1991-01-01

    A proof of concept system is described for analyzing high energy physics (HEP) data using database computing. The system is designed to scale up to the size required for HEP experiments at the Superconducting Super Collider (SSC) lab. These experiments will require collecting and analyzing approximately 10 to 100 million 'events' per year from colliding proton beams. Each 'event' consists of a set of vectors with a total length of approximately one megabyte. This represents an increase of approximately 2 to 3 orders of magnitude over the amount of data accumulated by present HEP experiments. The system is called the HEPDBC System (High Energy Physics Database Computing System). At present, the Mark 0 HEPDBC System is complete and can produce analyses of HEP experimental data approximately an order of magnitude faster than current production software on data sets of approximately 1 GB. The Mark 1 HEPDBC System is currently undergoing testing and is designed to analyze data sets 10 to 100 times larger.
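
    A quick back-of-the-envelope check of the data volumes implied by the abstract (10 to 100 million events per year at roughly one megabyte per event):

```python
# Redo the arithmetic from the abstract: 10-100 million ~1 MB events per year.
events_per_year = (10e6, 100e6)
event_size_bytes = 1e6                       # ~1 megabyte per event
for n in events_per_year:
    terabytes = n * event_size_bytes / 1e12
    print(f"{n:.0e} events/year -> ~{terabytes:.0f} TB/year")
# -> roughly 10-100 TB of raw event data per year
```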

  12. Business Process Quality Computation: Computing Non-Functional Requirements to Improve Business Processes

    NARCIS (Netherlands)

    Heidari, F.

    2015-01-01

    Business process modelling is an important part of system design. When designing or redesigning a business process, stakeholders specify, negotiate, and agree on business requirements to be satisfied, including non-functional requirements that concern the quality of the business process. This thesis

  14. Compact high performance spectrometers using computational imaging

    Science.gov (United States)

    Morton, Kenneth; Weisberg, Arel

    2016-05-01

    Compressive sensing technology can theoretically be used to develop low cost compact spectrometers with the performance of larger and more expensive systems. Indeed, compressive sensing for spectroscopic systems has been previously demonstrated using coded aperture techniques, wherein a mask is placed between the grating and a charge coupled device (CCD) and multiple measurements are collected with different masks. Although proven effective for some spectroscopic sensing paradigms (e.g. Raman), this approach requires that the signal being measured is static between shots (low noise and minimal signal fluctuation). Many spectroscopic techniques applicable to remote sensing are inherently noisy and thus coded aperture compressed sensing will likely not be effective. This work explores an alternative approach to compressed sensing that allows for reconstruction of a high resolution spectrum in sensing paradigms featuring significant signal fluctuations between measurements. This is accomplished through relatively minor changes to the spectrometer hardware together with custom super-resolution algorithms. Current results indicate that a potential overall reduction in CCD size of up to a factor of 4 can be attained without a loss of resolution. This reduction can result in significant improvements in cost, size, and weight of spectrometers incorporating the technology.
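
    The following sketch illustrates the generic compressed-sensing recovery step that underlies such designs: a sparse high-resolution spectrum x is recovered from fewer measurements y = A x by iterative soft thresholding (ISTA). The measurement matrix, sparsity level, and noise model are hypothetical, and this is not the authors' proprietary super-resolution algorithm.

```python
# Generic ISTA sketch for compressive spectral recovery under the linear model
# y = A @ x, with x a sparse high-resolution spectrum and y the CCD readout.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, m = 512, 128                               # high-res bins vs. CCD pixels
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.uniform(1, 3, 8)   # sparse spectral lines
A = rng.normal(size=(m, n)) / np.sqrt(m)      # hypothetical measurement matrix
y = A @ x_true + 0.01 * rng.normal(size=m)    # noisy compressed measurement
x_hat = ista(A, y)
print("recovered line positions:", np.flatnonzero(x_hat > 0.5))
```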

  15. 77 FR 50726 - Software Requirement Specifications for Digital Computer Software and Complex Electronics Used in...

    Science.gov (United States)

    2012-08-22

    ... COMMISSION Software Requirement Specifications for Digital Computer Software and Complex Electronics Used in... Digital Computer Software and Complex Electronics used in Safety Systems of Nuclear Power Plants.'' The DG... National Standards Institute and Institute of Electrical and Electronics Engineers (ANSI/IEEE) Standard...

  16. Regional research exploitation of the LHC a case-study of the required computing resources

    CERN Document Server

    Almehed, S; Eerola, Paule Anna Mari; Mjörnmark, U; Smirnova, O G; Zacharatou-Jarlskog, C; Åkesson, T

    2002-01-01

    A simulation study to evaluate the required computing resources for a research exploitation of the Large Hadron Collider (LHC) has been performed. The evaluation was done as a case study, assuming the existence of a Nordic regional centre and using the requirements for performing a specific physics analysis as a yard-stick. Other input parameters were: assumptions about the distribution of researchers at the institutions involved, an analysis model, and two different functional structures of the computing resources.

  17. Achieving High Performance Distributed System: Using Grid, Cluster and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Sunil Kr Singh

    2015-02-01

    Full Text Available To increase the efficiency of any task, we require a system that provides high performance along with flexibility and cost efficiency for the user. Distributed computing, as we are all aware, has become very popular over the past decade. Distributed computing has three major types, namely cluster, grid and cloud. In order to develop a high performance distributed system, we need to utilize all three of the above-mentioned types of computing. In this paper, we first give an introduction to all three types of distributed computing. We then examine them and explore trends in computing and green sustainable computing that can enhance the performance of a distributed system. Finally, presenting the future scope, we conclude the paper by suggesting a path to achieve a green high performance distributed system using cluster, grid and cloud computing.

  18. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  19. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  20. Proceedings CSR 2010 Workshop on High Productivity Computations

    CERN Document Server

    Ablayev, Farid; Vasiliev, Alexander; 10.4204/EPTCS.52

    2011-01-01

    This volume contains the proceedings of the Workshop on High Productivity Computations (HPC 2010) which took place on June 21-22 in Kazan, Russia. This workshop was held as a satellite workshop of the 5th International Computer Science Symposium in Russia (CSR 2010). HPC 2010 was intended to organize the discussions about high productivity computing means and models, including but not limited to high performance and quantum information processing.

  1. Verifying cell loss requirements in high-speed communication networks

    Directory of Open Access Journals (Sweden)

    Kerry W. Fendick

    1998-01-01

    Full Text Available In high-speed communication networks it is common to have requirements of very small cell loss probabilities due to buffer overflow. Losses are measured to verify that the cell loss requirements are being met, but it is not clear how to interpret such measurements. We propose methods for determining whether or not cell loss requirements are being met. A key idea is to look at the stream of losses as successive clusters of losses. Often clusters of losses, rather than individual losses, should be regarded as the important “loss events”. Thus we propose modeling the cell loss process by a batch Poisson stochastic process. Successive clusters of losses are assumed to arrive according to a Poisson process. Within each cluster, cell losses do not occur at a single time, but the distance between losses within a cluster should be negligible compared to the distance between clusters. Thus, for the purpose of estimating the cell loss probability, we ignore the spaces between successive cell losses in a cluster of losses. Asymptotic theory suggests that the counting process of losses initiating clusters often should be approximately a Poisson process even though the cell arrival process is not nearly Poisson. The batch Poisson model is relatively easy to test statistically and fit; e.g., the batch-size distribution and the batch arrival rate can readily be estimated from cell loss data. Since batch (cluster) sizes may be highly variable, it may be useful to focus on the number of batches instead of the number of cells in a measurement interval. We also propose a method for approximately determining the parameters of a special batch Poisson cell loss model with geometric batch-size distribution from a queueing model of the buffer content. For this step, we use a reflected Brownian motion (RBM) approximation of a G/D/1/C queueing model. We also use the RBM model to estimate the input burstiness given the cell loss rate. In addition, we use the RBM model to
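
    A minimal sketch of the fitting step described above, assuming loss timestamps are available: losses separated by less than a gap threshold are treated as one cluster, the cluster arrival rate and geometric batch-size parameter are estimated, and a long-run cell loss probability follows. The timestamps, gap threshold, and cell arrival rate below are hypothetical, and the RBM-based refinement from the paper is not reproduced.

```python
# Hedged sketch: fit a batch Poisson model (geometric batch sizes) to cell-loss times.
import numpy as np

def fit_batch_poisson(loss_times, horizon, cell_rate, gap=1e-3):
    """Split losses into clusters separated by more than `gap` seconds and fit the model."""
    loss_times = np.sort(np.asarray(loss_times))
    # indices where a new cluster starts (first loss, or gap to previous loss exceeds threshold)
    starts = np.r_[0, np.flatnonzero(np.diff(loss_times) > gap) + 1]
    sizes = np.diff(np.r_[starts, loss_times.size])      # losses per cluster
    lam = starts.size / horizon                           # cluster (batch) arrival rate
    p = 1.0 / sizes.mean()                                # geometric batch-size parameter
    loss_prob = lam * sizes.mean() / cell_rate            # long-run cell loss probability
    return lam, p, loss_prob

# toy data: three bursts of losses over a 100 s measurement window
losses = [10.0, 10.0001, 10.0002, 42.5, 42.5001, 77.3]
print(fit_batch_poisson(losses, horizon=100.0, cell_rate=1e6))
```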

  2. Benchmark Numerical Toolkits for High Performance Computing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Computational codes in physics and engineering often use implicit solution algorithms that require linear algebra tools such as Ax=b solvers, eigenvalue,...

  3. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  4. Intro - High Performance Computing for 2015 HPC Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Klitsner, Tom [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2015-10-01

    The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia –the NNSA ASC program and Sandia’s Institutional HPC Program– are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

  5. A high-throughput bioinformatics distributed computing platform

    OpenAIRE

    Keane, Thomas M; Page, Andrew J.; McInerney, James O; Naughton, Thomas J.

    2005-01-01

    In the past number of years the demand for high performance computing has greatly increased in the area of bioinformatics. The huge increase in size of many genomic databases has meant that many common tasks in bioinformatics are not possible to complete in a reasonable amount of time on a single processor. Recently distributed computing has emerged as an inexpensive alternative to dedicated parallel computing. We have developed a general-purpose distributed computing platform ...

  6. High performance computation on beam dynamics problems in high intensity compact cyclotrons

    Institute of Scientific and Technical Information of China (English)

    ADELMANN, Andreas

    2011-01-01

    This paper presents the research progress in the beam dynamics problems for future high intensity compact cyclotrons by utilizing the state-of-the-art high performance computation technology. A "Start-to-Stop" model, which includes both the interaction of the internal particles of a single bunch and the mutual interaction of neighboring multiple bunches in the radial direction, is established for compact cyclotrons with multi-turn extraction. This model is then implemented in OPAL-CYCL, which is a 3D object-oriented parallel code for large scale particle simulations in cyclotrons. In addition, to meet the running requirement of parallel computation, we have constructed a small scale HPC cluster system and tested its performance. Finally, the high intensity beam dynamics problems in the 100 MeV compact cyclotron, which is being constructed at CIAE, are studied using this code and some conclusions are drawn.

  7. Short-term effects of implemented high intensity shoulder elevation during computer work

    Directory of Open Access Journals (Sweden)

    Madeleine Pascal

    2009-08-01

    pause with preceding high intensity contraction requires further investigation before high intensity shoulder elevations can be recommended as an integrated part of computer work.

  8. Progress and Challenges in High Performance Computer Technology

    Institute of Scientific and Technical Information of China (English)

    Xue-Jun Yang; Yong Dou; Qing-Feng Hu

    2006-01-01

    High performance computers provide strategic computing power in the construction of the national economy and defense, and have become one of the symbols of a country's overall strength. Over the past 30 years, with government support, high performance computer technology has developed rapidly: computing performance has increased nearly 3 million times and the number of processors has grown more than a million times. To solve the critical issues related to parallel efficiency and scalability, scientific researchers have pursued extensive theoretical studies and technical innovations. This paper briefly looks back at the course of building high performance computer systems both in China and abroad, and summarizes the significant breakthroughs in international high performance computer technology. We also review China's technological progress in the areas of parallel computer architecture, parallel operating systems and resource management, parallel compilers and performance optimization, and environments for parallel programming and network computing. Finally, we examine the challenging issues of the "memory wall", system scalability and the "power wall", and discuss high productivity computers, which are the trend in building next-generation high performance computers.

  9. ASC Computational Environment (ACE) requirements version 8.0 final report.

    Energy Technology Data Exchange (ETDEWEB)

    Larzelere, Alex R. (Exagrid Engineering, Alexandria, VA); Sturtevant, Judith E.

    2006-11-01

    A decision was made early in the Tri-Lab Usage Model process that the collection of user requirements be separated from the document describing the capabilities of the user environment. The purpose of developing the requirements as a separate document was to allow them to take a higher-level view of user requirements for ASC platforms in general. In other words, a separate ASC user requirements document could capture requirements in a way that was not focused on "how" the requirements would be fulfilled. The intent was to create a set of user requirements that were not linked to any particular computational platform, so that the requirements would endure from one ASC platform user environment to another. The hope was that capturing the requirements in this way would assist in creating stable user environments even though the particular platforms would be evolving and changing. In order to clearly make the separation, the Tri-Lab S&CS program decided to create a new title for the requirements. The user requirements became known as the ASC Computational Environment (ACE) Requirements.

  10. Towards robust dynamical decoupling and high fidelity adiabatic quantum computation

    Science.gov (United States)

    Quiroz, Gregory

    Quantum computation (QC) relies on the ability to implement high-fidelity quantum gate operations and successfully preserve quantum state coherence. One of the most challenging obstacles for reliable QC is overcoming the inevitable interaction between a quantum system and its environment. Unwanted interactions result in decoherence processes that cause quantum states to deviate from a desired evolution, consequently leading to computational errors and loss of coherence. Dynamical decoupling (DD) is one such method, which seeks to attenuate the effects of decoherence by applying strong and expeditious control pulses solely to the system. Provided the pulses are applied over a time duration sufficiently shorter than the correlation time associated with the environment dynamics, DD effectively averages out undesirable interactions and preserves quantum states with a low probability of error, or fidelity loss. In this study various aspects of this approach are studied from sequence construction to applications of DD to protecting QC. First, a comprehensive examination of the error suppression properties of a near-optimal DD approach is given to understand the relationship between error suppression capabilities and the number of required DD control pulses in the case of ideal, instantaneous pulses. While such considerations are instructive for examining DD efficiency, i.e., performance vs the number of control pulses, high-fidelity DD in realizable systems is difficult to achieve due to intrinsic pulse imperfections which further contribute to decoherence. As a second consideration, it is shown how one can overcome this hurdle and achieve robustness and recover high-fidelity DD in the presence of faulty control pulses using Genetic Algorithm optimization and sequence symmetrization. Thirdly, to illustrate the implementation of DD in conjunction with QC, the utilization of DD and quantum error correction codes (QECCs) as a protection method for adiabatic quantum

  11. Proceedings of the workshop on high resolution computed microtomography (CMT)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-02-01

    The purpose of the workshop was to determine the status of the field, to define instrumental and computational requirements, and to establish minimum specifications required by possible users. The most important message sent by implementers was the reminder that CMT is a tool. It solves a wide spectrum of scientific problems and is complementary to other microscopy techniques, with certain important advantages that the other methods do not have. High-resolution CMT can be used non-invasively and non-destructively to study a variety of hierarchical three-dimensional microstructures, which in turn control body function. X-ray computed microtomography can also be used at the frontiers of physics, in the study of granular systems, for example. With high-resolution CMT, for example, three-dimensional pore geometries and topologies of soils and rocks can be obtained readily and implemented directly in transport models. In turn, these geometries can be used to calculate fundamental physical properties, such as permeability and electrical conductivity, from first principles. Clearly, use of the high-resolution CMT technique will contribute tremendously to the advancement of current R and D technologies in the production, transport, storage, and utilization of oil and natural gas. It can also be applied to problems related to environmental pollution, particularly to spilling and seepage of hazardous chemicals into the Earth's subsurface. Applications to energy and environmental problems will be far-ranging and may soon extend to disciplines such as materials science--where the method can be used in the manufacture of porous ceramics, filament-resin composites, and microelectronics components--and to biomedicine, where it could be used to design biocompatible materials such as artificial bones, contact lenses, or medication-releasing implants. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  12. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed using bibliometric approaches. This study aims to provide computational physicists utilizing high-performance computing and policy planners with useful bibliometric results for an assessment of research activities. To achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from Elsevier's Scopus database covering the period 2004-2013. We ranked authors in the field of physics utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
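
    A small sketch of the co-authorship network construction, using networkx on a hypothetical list of per-paper author lists standing in for the parsed Scopus records:

```python
# Build a weighted co-authorship graph and rank authors by weighted degree.
# The `papers` list is hypothetical; the study used ten years of Scopus records.
import itertools
import networkx as nx

papers = [
    ["Kim, J", "Lee, S", "Park, H"],
    ["Kim, J", "Park, H"],
    ["Lee, S", "Cho, M"],
]

G = nx.Graph()
for authors in papers:
    for a, b in itertools.combinations(sorted(set(authors)), 2):
        # edge weight counts the number of co-authored papers for each author pair
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

# weighted degree is one simple proxy for an author's collaborative activity
rank = sorted(G.degree(weight="weight"), key=lambda kv: kv[1], reverse=True)
print(rank)
```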

  13. HIGH-PERFORMANCE COMPUTING FOR THE STUDY OF EARTH AND ENVIRONMENTAL SCIENCE MATERIALS USING SYNCHROTRON X-RAY COMPUTED MICROTOMOGRAPHY.

    Energy Technology Data Exchange (ETDEWEB)

    FENG, H.; JONES, K.W.; MCGUIGAN, M.; SMITH, G.J.; SPILETIC, J.

    2001-10-12

    Synchrotron x-ray computed microtomography (CMT) is a non-destructive method for examination of rock, soil, and other types of samples studied in the earth and environmental sciences. The high x-ray intensities of the synchrotron source make possible the acquisition of tomographic volumes at a high rate, which requires the application of high-performance computing techniques for data reconstruction to produce the three-dimensional volumes, for their visualization, and for data analysis. These problems are exacerbated by the need to share information between collaborators at widely separated locations over both local and wide-area networks. A summary of the CMT technique and examples of applications are given here together with a discussion of the applications of high-performance computing methods to improve the experimental techniques and analysis of the data.
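
    As a rough sketch of the data-parallel reconstruction pattern involved, the snippet below filtered-back-projects a stack of independent slices in parallel using scikit-image's reference radon/iradon transforms on a synthetic phantom; production beamline codes and data formats differ.

```python
# Each tomographic slice is independent, so slices can be reconstructed in
# parallel across cores. The "measured" sinograms here are synthetic stand-ins.
import numpy as np
from joblib import Parallel, delayed
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
slice_img = rescale(shepp_logan_phantom(), 0.25)                 # one small test slice
sinograms = [radon(slice_img, theta=theta) for _ in range(8)]    # stand-in for 8 measured slices

# reconstruct all slices in parallel, one worker per core
volume = Parallel(n_jobs=-1)(delayed(iradon)(s, theta=theta) for s in sinograms)
print(len(volume), volume[0].shape)
```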

  14. An Analysis of Illinois High School Graduation Requirements.

    Science.gov (United States)

    Ferratier, Louis; Helmich, Edith

    On account of concern about declining achievement levels of high school graduates and proposed state legislation increasing graduation requirements to address this concern, this report analyzes current and proposed high school graduation requirements in Illinois, based on data compiled from local school documents, and compares the data to…

  15. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness of the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thinking. We present two main theses on which the subject is based, and we present the included knowledge areas and didactical design principles. Finally we summarize the status and future plans for the subject and related development projects.

  16. Matrix element method for high performance computing platforms

    Science.gov (United States)

    Grasseau, G.; Chamont, D.; Beaudette, F.; Bianchini, L.; Davignon, O.; Mastrolorenzo, L.; Ochando, C.; Paganini, P.; Strebler, T.

    2015-12-01

    A lot of effort has been devoted by the ATLAS and CMS teams to improving the quality of LHC event analysis with the Matrix Element Method (MEM). Up to now, very few implementations have tried to confront the huge computing resources required by this method. We propose here a highly parallel version, combining MPI and OpenCL, which makes MEM exploitation reachable for the full CMS datasets at a moderate cost. In this article, we describe the status of two software projects under development, one focused on physics and one focused on computing. We also showcase their preliminary performance obtained with classical multi-core processors, CUDA accelerators and MIC co-processors. This lets us extrapolate that, with the help of 6 high-end accelerators, we should be able to reprocess the whole LHC Run 1 within 10 days, and that we have a satisfactory metric for the upcoming Run 2. Future work will consist of finalizing a single merged system including all the physics and all the parallelism infrastructure, thus optimizing the implementation for the best hardware platforms.
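
    The event-level parallelism that makes the MEM tractable can be sketched with mpi4py: each rank integrates the weights for its share of events and the results are gathered at rank 0. The weight function below is a placeholder for the matrix-element integration, and the OpenCL/CUDA kernels of the real projects are not shown.

```python
# Hedged sketch of event-level MPI parallelism for an MEM-style computation.
# Run with e.g.: mpiexec -n 4 python mem_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def mem_weight(event):
    # placeholder numerical integration standing in for the matrix-element weight
    x = np.linspace(0.0, 1.0, 10_000)
    y = np.exp(-((x - event) ** 2) / 0.01)
    return float(y.sum() * (x[1] - x[0]))

events = np.linspace(0.1, 0.9, 1024) if rank == 0 else None
chunk = comm.scatter(np.array_split(events, size) if rank == 0 else None, root=0)
local = [mem_weight(e) for e in chunk]          # each rank handles its own events
weights = comm.gather(local, root=0)
if rank == 0:
    print("processed", sum(len(w) for w in weights), "events")
```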

  17. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1,200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1,500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
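
    For reference, the quoted figure of merit can be checked directly; the sketch below just redoes the arithmetic from the abstract.

```python
# Operations per second per watt per cm^3, using the numbers quoted above.
brain  = 1e16 / (20.0 * 1200.0)          # ~4e11 ops/s/W/cm^3
super_ = 1e15 / (3e6 * 1.5e9)            # 1500 m^3 = 1.5e9 cm^3 -> ~0.2 ops/s/W/cm^3
print(f"brain/supercomputer advantage ~ {brain / super_:.1e}")   # ~2e12, i.e. ~10^12
```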

  18. Resource Centered Computing delivering high parallel performance

    OpenAIRE

    2014-01-01

    International audience; Modern parallel programming requires a combination of different paradigms, expertise and tuning, that correspond to the different levels in today's hierarchical architectures. To cope with the inherent difficulty, ORWL (ordered read-write locks) presents a new paradigm and toolbox centered around local or remote resources, such as data, processors or accelerators. ORWL programmers describe their computation in terms of access to these resources during critical sections. Exclu...

  19. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euros – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...

  20. Power/energy use cases for high performance computing.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M; Hammond, Steven; Elmore, Ryan; Munch, Kristin

    2013-12-01

    Power and Energy have been identified as a first-order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors, but to make the best use of their solutions in an HPC environment, periodic tuning by facility operators and software components will likely be required. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could use to steer power consumption.

  1. High Energy High Power Battery Exceeding PHEV40 Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Rempel, Jane [TIAX LLC, Lexington, MA (United States)]

    2016-03-31

    TIAX has developed long-life lithium-ion cells that can meet and exceed the energy and power targets (200 Wh/kg and 800 W/kg pulse power) set out by DOE for PHEV40 batteries. To achieve these targets, we selected and scaled up a high capacity version of our proprietary high energy and high power CAM-7® cathode material. We paired the cathode with a blended anode containing a Si-based anode material capable of delivering high capacity and long life. Furthermore, we optimized the anode blend composition, cathode and anode electrode design, and selected binder and electrolyte compositions to achieve not only the best performance, but also long life. By implementing CAM-7 with a Si-based blended anode, we built and tested prototype 18650 cells that delivered a measured specific energy of 198 Wh/kg (total energy) and 845 W/kg at 10% SOC (projected to 220 Wh/kg in state-of-the-art 18650 cell hardware and 250 Wh/kg in 15 Ah pouch cells). These program demonstration cells achieved 90% capacity retention after 500 cycles in ongoing cycle life testing. Moreover, we also tested the baseline CAM-7/graphite system in 18650 cells, showing that 70% capacity retention can be achieved after ~4000 cycles (20 months of ongoing testing). Ultimately, by simultaneously meeting the PHEV40 power and energy targets and providing long life, we have developed a Li-ion battery system that is smaller, lighter, and less expensive than current state-of-the-art Li-ion batteries.

  2. High Performance Spaceflight Computing (HPSC) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In 2012, the NASA Game Changing Development Program (GCDP), residing in the NASA Space Technology Mission Directorate (STMD), commissioned a High Performance...

  3. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    Energy Technology Data Exchange (ETDEWEB)

    Potok, Thomas E [ORNL]; Schuman, Catherine D [ORNL]; Young, Steven R [ORNL]; Patton, Robert M [ORNL]; Spedalieri, Federico [University of Southern California, Information Sciences Institute]; Liu, Jeremy [University of Southern California, Information Sciences Institute]; Yao, Ke-Thia [University of Southern California, Information Sciences Institute]; Rose, Garrett [University of Tennessee (UT)]; Chakma, Gangotree [University of Tennessee (UT)]

    2016-01-01

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphics processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  4. Requirements analysis and design for implementation of a satellite link for a local area computer network

    OpenAIRE

    Lorentzen, Richard B.

    1991-01-01

    Approved for public release; distribution is unlimited The purpose of this thesis is to provide naval computer students with a basic knowledge on Very Small Aperture Terminal (VSAT) satellite technology and to define the hardware and software requirements at the interface between a VSAT and a Local Area Network (LAN). By restricting a computer network to terrestrial links, a vast amount of knowledge is not accessed because either the terrestrial links can't access the information or the...

  5. A Framework for Evaluating Computer Architectures to Support Systems with Security Requirements, with Applications.

    Science.gov (United States)

    1987-11-05

    develops a set of criteria for evaluating computer architectures that are to support systems with security requirements. Central to these criteria is the ... Appendix B: DEC VAX-11/780 OVERVIEW. The VAX-11/780 is a 32-bit computer with a virtual memory space of up to 4 Gbytes [B1].

  6. CRPC research into linear algebra software for high performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.; Walker, D.W. [Oak Ridge National Lab., TN (United States). Mathematical Sciences Section]; Dongarra, J.J. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Computer Science; Oak Ridge National Lab., TN (United States). Mathematical Sciences Section]; Pozo, R. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Computer Science]; Sorensen, D.C. [Rice Univ., Houston, TX (United States). Dept. of Computational and Applied Mathematics]

    1994-12-31

    In this paper the authors look at a number of approaches being investigated in the Center for Research on Parallel Computation (CRPC) to develop linear algebra software for high-performance computers. These approaches are exemplified by the LAPACK, templates, and ARPACK projects. LAPACK is a software library for performing dense and banded linear algebra computations, and was designed to run efficiently on high-performance computers. The authors focus on the design of the distributed-memory version of LAPACK, and on an object-oriented interface to LAPACK.
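
    For orientation, the kinds of dense kernels LAPACK standardizes can be exercised through SciPy's LAPACK-backed routines; this is only an illustration of the interface style, not the distributed-memory version of LAPACK discussed in the paper.

```python
# Dense LU solve and symmetric eigenvalues via SciPy's LAPACK-backed routines.
import numpy as np
from scipy.linalg import lu_factor, lu_solve, eigh

rng = np.random.default_rng(1)
A = rng.normal(size=(500, 500))
b = rng.normal(size=500)

lu, piv = lu_factor(A)          # LU factorization (LAPACK getrf)
x = lu_solve((lu, piv), b)      # triangular solves (LAPACK getrs)
print("residual:", np.linalg.norm(A @ x - b))

S = A + A.T                     # symmetric matrix for the eigensolver
w = eigh(S, eigvals_only=True)  # symmetric eigenvalues (LAPACK syev family)
print("largest eigenvalue:", w[-1])
```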

  7. Shafting Alignment Computing Method of New Multibearing Rotor System under Specific Installation Requirement

    Directory of Open Access Journals (Sweden)

    Qian Chen

    2016-01-01

    Full Text Available The shafting of a large steam turbine generator set is composed of several rotors connected by couplings. This paper studies computing methods for shafting of different structures under specific installation requirements. Based on the three-moment equation, a shafting alignment mathematical model is established. Computing methods for bearing elevations and loads under the corresponding installation requirements, where the bending moment of each coupling is zero and where preset sag and gap exist in some couplings, are proposed, respectively. Bearing elevations and loads of shafting with different structures under specific installation requirements are calculated, and the calculation results are compared with installation data measured on site, which verifies the validity and accuracy of the proposed shafting alignment computing method. The above work provides a reliable approach to analyzing shafting alignment and can guide installation on site.
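
    For readers unfamiliar with the starting point, one common statement of the three-moment (Clapeyron) equation for two adjacent spans on supports i-1, i, i+1, including support settlement terms relevant to bearing elevations, is sketched below. The notation is ours, constant EI is assumed, and sign conventions vary across textbooks.

```latex
% Three-moment equation for adjacent spans l_i, l_{i+1} with support bending
% moments M and support settlements (bearing elevations) \delta; constant EI.
M_{i-1} l_i + 2 M_i \left( l_i + l_{i+1} \right) + M_{i+1} l_{i+1}
  = -\,6 \left( \frac{A_i a_i}{l_i} + \frac{A_{i+1} b_{i+1}}{l_{i+1}} \right)
    + 6 E I \left( \frac{\delta_{i-1} - \delta_i}{l_i}
                 + \frac{\delta_{i+1} - \delta_i}{l_{i+1}} \right)
```

    Here A_i denotes the area of the simply supported bending-moment diagram on span i, and a_i, b_{i+1} are the distances of the corresponding centroids from the outer supports.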

  8. Profiles of Motivated Self-Regulation in College Computer Science Courses: Differences in Major versus Required Non-Major Courses

    Science.gov (United States)

    Shell, Duane F.; Soh, Leen-Kiat

    2013-12-01

    The goal of the present study was to utilize a profiling approach to understand differences in motivation and strategic self-regulation among post-secondary STEM students in major versus required non-major computer science courses. Participants were 233 students from required introductory computer science courses (194 men; 35 women; 4 unknown) at a large Midwestern state university. Cluster analysis identified five profiles: (1) a strategic profile of a highly motivated by-any-means good strategy user; (2) a knowledge-building profile of an intrinsically motivated autonomous, mastery-oriented student; (3) a surface learning profile of a utility motivated minimally engaged student; (4) an apathetic profile of an amotivational disengaged student; and (5) a learned helpless profile of a motivated but unable to effectively self-regulate student. Among CS majors and students in courses in their major field, the strategic and knowledge-building profiles were the most prevalent. Among non-CS majors and students in required non-major courses, the learned helpless, surface learning, and apathetic profiles were the most prevalent. Students in the strategic and knowledge-building profiles had significantly higher retention of computational thinking knowledge than students in other profiles. Students in the apathetic and surface learning profiles saw little instrumentality of the course for their future academic and career objectives. Findings show that students in STEM fields taking required computer science courses exhibit the same constellation of motivated strategic self-regulation profiles found in other post-secondary and K-12 settings.

  9. E-Learning Based on Cloud Computing: Requirements, Challenges and Solutions

    Directory of Open Access Journals (Sweden)

    Alireza Mohammadrezaei

    2014-07-01

    Full Text Available Cloud computing technology has changed how applications are accessed and developed. By providing the necessary infrastructure, this technology runs applications as services on the net via web browsers. E-learning can utilize cloud computing to provide the required infrastructure and to deliver improved performance, scalability, and availability. This study, using a descriptive-analytical approach, presents concepts such as e-learning and cloud computing and investigates the use of cloud computing in e-learning. By presenting the advantages, it also indicates the significance and necessity of e-learning based on cloud computing. Finally, the challenges of this model and their solutions are presented as well.

  10. A high performance scientific cloud computing environment for materials simulations

    CERN Document Server

    Jorissen, Kevin; Rehr, John J

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditi...

  11. The Principals and Practice of Distributed High Throughput Computing

    CERN Document Server

    CERN. Geneva

    2016-01-01

    The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970's. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that range today from a desktop computer to the tens of thousands of cores offered by commercial clouds. About the speaker: Miron Livny received a B.Sc. degree in Physics and Mat...

  12. Computer-Aided Design of Drugs on Emerging Hybrid High Performance Computers

    Science.gov (United States)

    2013-09-01

    (Extraction fragments from the report's reference list and funding acknowledgements; recoverable content:) An invited talk, "High-throughput Molecular Datasets for Scalable Clustering using MapReduce," was presented at the Workshop on Trends in High-Performance Distributed Computing, Vrije Universiteit, Amsterdam, NL. The project also developed middleware packages for polarizable force fields on multi-core and GPU systems, supported by the MapReduce paradigm (NSF MRI #0922657, $451,051).

  13. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Vigil,Benny Manuel [Los Alamos National Laboratory; Ballance, Robert [SNL; Haskell, Karen [SNL

    2012-08-09

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  14. Comparing computer experiments for fitting high-order polynomial metamodels

    OpenAIRE

    Johnson, Rachel T.; Montgomery, Douglas C.; Jones, Bradley; Parker, Peter T.

    2010-01-01

    The use of simulation as a modeling and analysis tool is widespread. Simulation is an enabling tool for experimenting virtually in a validated computer environment. Often the underlying function for a computer experiment result has too much curvature to be adequately modeled by a low-order polynomial. In such cases, finding an appropriate experimental design is not easy. We evaluate several computer experiments assuming the modeler is interested in fitting a high-order polynomial to th...

  15. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  16. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-11-02

    Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
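
    The control flow of the claimed method can be paraphrased in a short sketch: eager memory-FIFO transfers of fixed-size portions continue until the RTS acknowledgement arrives, after which the remainder goes out in a single direct put. The Python below only simulates that decision logic; it does not model real DMA hardware, and all names and sizes are hypothetical.

      # Illustrative simulation of the origin DMA engine's decision logic described
      # above: stream fixed-size portions via memory FIFO until the request-to-send
      # (RTS) message is acknowledged, then move the remainder with a direct put.
      # All names and sizes are hypothetical; no real DMA hardware is modelled.
      CHUNK = 4096  # size of each eager memory-FIFO portion (placeholder)

      def transfer(data, ack_received):
          """ack_received() returns True once the target DMA engine has
          acknowledged the RTS message."""
          sent = 0
          print("origin DMA: sending RTS message")
          while sent < len(data) and not ack_received():
              print(f"origin DMA: memory FIFO put of bytes {sent}..{sent + CHUNK}")
              sent += CHUNK
          if sent < len(data):
              print(f"origin DMA: direct put of remaining {len(data) - sent} bytes")

      # Toy usage: the acknowledgement arrives after two polls.
      polls = iter([False, False, True])
      transfer(bytearray(20000), lambda: next(polls))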

  17. Transforming High School Physics with Modeling and Computation

    CERN Document Server

    Aiken, John M

    2013-01-01

    The Engage to Excel (PCAST) report, the National Research Council's Framework for K-12 Science Education, and the Next Generation Science Standards all call for transforming the physics classroom into an environment that teaches students real scientific practices. This work describes the early stages of one such attempt to transform a high school physics classroom. Specifically, a series of model-building and computational modeling exercises was piloted in a ninth grade Physics First classroom. Student use of computation was assessed using a proctored programming assignment, where the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Student views on computation and its link to mechanics were assessed with a written essay and a series of think-aloud interviews. This pilot study shows computation's ability to connect scientific practice to the high school science classroom.
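
    For readers unfamiliar with the kind of exercise described, the following short program is a hedged sketch of a minimal VPython model of a ball in projectile motion. It is not the proctored assignment from the study, and the initial conditions are arbitrary.

      # Minimal sketch of a VPython model of a ball in projectile motion.
      # Initial conditions are arbitrary; this is not the assignment from the study.
      from vpython import sphere, vector, rate, color

      ball = sphere(pos=vector(0, 1, 0), radius=0.1, color=color.red, make_trail=True)
      velocity = vector(20, 15, 0)         # m/s
      g = vector(0, -9.8, 0)               # m/s^2
      dt = 0.01                            # s

      while ball.pos.y > 0:
          rate(100)                            # limit to 100 iterations per second
          velocity = velocity + g * dt         # update velocity from acceleration
          ball.pos = ball.pos + velocity * dt  # update position from velocity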

  18. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  19. Computer software requirements specification for the world model light duty utility arm system

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, J.E.

    1996-02-01

    This Computer Software Requirements Specification defines the software requirements for the world model of the Light Duty Utility Arm (LDUA) System. It is intended to be used to guide the design of the application software, to be a basis for assessing the application software design, and to establish what is to be tested in the finished application software product. (The LDUA deploys end effectors into underground storage tanks by means of a robotic arm on the end of a telescoping mast.)

  20. Domain Decomposition Based High Performance Parallel Computing

    CERN Document Server

    Raju, Mandhapati P

    2009-01-01

    The study deals with the parallelization of finite element based Navier-Stokes codes using domain decomposition and state-of-the-art sparse direct solvers. There has been significant improvement in the performance of sparse direct solvers, but parallel sparse direct solvers are not found to exhibit good scalability. Hence, the parallelization of the sparse direct solver is done using domain decomposition techniques. A highly efficient sparse direct solver, PARDISO, is used in this study. The scalability of both Newton and modified Newton algorithms is tested.
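
    As a rough illustration of the Newton-with-sparse-direct-solver pattern that the study parallelizes, the sketch below uses SciPy's SuperLU factorization in place of PARDISO (which needs separate bindings). The toy residual and Jacobian are invented stand-ins, not a finite element Navier-Stokes discretization.

      # Rough sketch of a Newton iteration driven by a sparse direct solver.
      # SciPy's SuperLU is used here in place of PARDISO; the residual and
      # Jacobian below are toy stand-ins, not a Navier-Stokes discretization.
      import numpy as np
      from scipy.sparse import diags, csc_matrix
      from scipy.sparse.linalg import splu

      def residual(u):
          # Toy nonlinear system: tridiagonal "diffusion" plus a cubic term.
          A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(u.size, u.size))
          return A @ u + 0.1 * u**3 - 1.0

      def jacobian(u):
          A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(u.size, u.size))
          return csc_matrix(A + diags(0.3 * u**2))

      u = np.zeros(1000)
      for it in range(20):
          r = residual(u)
          if np.linalg.norm(r) < 1e-10:
              break
          lu = splu(jacobian(u))   # sparse direct factorization (Newton step)
          u -= lu.solve(r)
      print("iterations:", it, " ||r|| =", np.linalg.norm(residual(u)))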

  1. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations and restricting greenhouse gas emissions is one of the most daunting ones that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We will need to develop the capability to handle large volumes of data generated by power system components such as PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, interaction between hybrid systems (electric, transport, gas, oil, coal, etc.) and so on, in order to obtain meaningful information in real time and ensure a secure, reliable and stable power grid. Advanced research on the development and implementation of market-ready, leading-edge, high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments, as well as thoughts on future research directions, for high performance computing applications in electric power system planning, operations, security, markets, and grid integration of alternative energy sources.

  2. High throughput computing: a solution for scientific analysis

    Science.gov (United States)

    O'Donnell, M.

    2011-01-01

    Public land management agencies continually face resource management problems that are exacerbated by climate warming, land-use change, and other human activities. As the U.S. Geological Survey (USGS) Fort Collins Science Center (FORT) works with managers in U.S. Department of the Interior (DOI) agencies and other federal, state, and private entities, researchers are finding that the science needed to address these complex ecological questions across time and space produces substantial amounts of data. The additional data and the volume of computations needed to analyze it require expanded computing resources well beyond single- or even multiple-computer workstations. To meet this need for greater computational capacity, FORT investigated how to resolve the many computational shortfalls previously encountered when analyzing data for such projects. Our objectives included finding a solution that would:

  3. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    Science.gov (United States)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provides a myriad of challenges when running in a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large-scale simulation and analysis work is commonplace, providing operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth System Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task-parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.
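
    A stripped-down illustration of the task-parallel MPI pattern mentioned above is given below using mpi4py; the analysis routine and file names are hypothetical placeholders rather than CASCADE code.

      # Minimal task-parallel sketch with mpi4py: each rank processes its own
      # slice of a list of input files. The analysis function and file names
      # are placeholders, not part of the CASCADE tool stack.
      from mpi4py import MPI

      def analyze(filename):
          # Placeholder for a per-file analysis routine (e.g., an extremes statistic).
          return f"{filename}: done"

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      files = [f"simulation_output_{i:04d}.nc" for i in range(1000)]
      my_files = files[rank::size]                 # round-robin task assignment
      my_results = [analyze(f) for f in my_files]

      all_results = comm.gather(my_results, root=0)  # collect results on rank 0
      if rank == 0:
          flat = [r for chunk in all_results for r in chunk]
          print(f"processed {len(flat)} files on {size} ranks")

    Run, for instance, with "mpiexec -n 4 python analyze_files.py" (script name hypothetical).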

  4. Scientific and high-performance computing at FAIR

    Directory of Open Access Journals (Sweden)

    Kisel Ivan

    2015-01-01

    Full Text Available Future FAIR experiments have to deal with very high input rates and large track multiplicities, and must perform full event reconstruction and selection online on a large dedicated computer farm equipped with heterogeneous many-core CPU/GPU compute nodes. Developing efficient and fast algorithms that are optimized for parallel computation is a challenge for the groups of experts dealing with HPC computing. Here we present and discuss the status and perspectives of the data reconstruction and physics analysis software of one of the future FAIR experiments, namely the CBM experiment.

  5. 47 CFR 54.709 - Computations of required contributions to universal service support mechanisms.

    Science.gov (United States)

    2010-10-01

    ... universal service support mechanisms. 54.709 Section 54.709 Telecommunication FEDERAL COMMUNICATIONS... Computations of required contributions to universal service support mechanisms. (a) Prior to April 1, 2003, contributions to the universal service support mechanisms shall be based on contributors'...

  6. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.

    2013-01-01

    As our understanding of the world around us increases it becomes more challenging to make use of what we already know, and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers and even small clusters to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for a range of high performance applications which can utilize parallel compute systems effectively, which have efficient data handling strategies and which have the capacity to utilise current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements. © 2013 Springer-Verlag.

  7. High Energy Physics Experiments In Grid Computing Networks

    Directory of Open Access Journals (Sweden)

    Andrzej Olszewski

    2008-01-01

    Full Text Available The demand for computing resources used for detector simulations and data analysis in High Energy Physics (HEP) experiments is constantly increasing due to the development of studies of rare physics processes in particle interactions. The latest generation of experiments at the newly built LHC accelerator at CERN in Geneva is planning to use computing networks for their data processing needs. A Worldwide LHC Computing Grid (WLCG) organization has been created to develop a Grid with properties matching the needs of these experiments. In this paper we present the use of Grid computing by HEP experiments and describe activities at the participating computing centers with the case of the Academic Computing Center, ACK Cyfronet AGH, Kraków, Poland.

  8. GPU-based high-performance computing for radiation therapy.

    Science.gov (United States)

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B

    2014-02-21

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented.
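
    As a small taste of the GPU programming model from a high-level language, the sketch below uses the CuPy library to move an array computation onto a GPU and back. It is a generic illustration unrelated to the radiotherapy codes reviewed in the paper, which are typically written directly in CUDA C/C++.

      # Generic illustration of offloading an array computation to a GPU with CuPy.
      # Unrelated to any specific radiotherapy code; a CUDA-capable GPU and the
      # cupy package are required.
      import numpy as np
      import cupy as cp

      n = 2048
      a_cpu = np.random.rand(n, n).astype(np.float32)
      b_cpu = np.random.rand(n, n).astype(np.float32)

      a_gpu = cp.asarray(a_cpu)          # host -> device transfer
      b_gpu = cp.asarray(b_cpu)
      c_gpu = a_gpu @ b_gpu              # matrix product executed on the GPU
      cp.cuda.Stream.null.synchronize()  # wait for the kernel to finish

      c_cpu = cp.asnumpy(c_gpu)          # device -> host transfer
      print("max |difference| vs NumPy:", np.max(np.abs(c_cpu - a_cpu @ b_cpu)))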

  9. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  10. High performance computing network for cloud environment using simulators

    CERN Document Server

    Singh, N Ajith

    2012-01-01

    Cloud computing is the next generation of computing. Adopting cloud computing is like signing up for a new form of website: the GUI that controls the cloud directly controls the hardware resources and your application. The difficult part of cloud computing is deploying it in a real environment. It is difficult to know the exact cost and resource requirements until the service is actually bought, or whether it will support an existing application from a traditional data center or whether a new application must be designed for the cloud computing environment. Security, latency and fault tolerance are some of the parameters that need careful attention before deployment, yet they are only fully known after deployment; by using simulation, however, we can run the experiment before deploying to the real environment. Through simulation we can understand the real behaviour of a cloud computing environment, and after successful results we can start deploying the application in a cloud computing environment. By using the simulator it...

  11. High Performance Computing Assets for Ocean Acoustics Research

    Science.gov (United States)

    2016-11-18

    (Extraction fragments; recoverable content:) This is the final report for ONR DURIP Grant No. N00014-15-1-2840, "High Performance Computing Assets for Ocean Acoustics Research," Principal Investigator Timothy F. Duda, Applied Ocean... Distribution of the report is unlimited. A surviving fragment of the technical discussion compares the parallelizability of the codes concerned with that of, for example, atmospheric or ocean general circulation models (GCMs).

  12. Dynamic Resource Management and Job Scheduling for High Performance Computing

    OpenAIRE

    2016-01-01

    Job scheduling and resource management play an essential role in high-performance computing. Supercomputing resources are usually managed by a batch system, which is responsible for the effective mapping of jobs onto resources (i.e., compute nodes). From the system perspective, a batch system must ensure high system utilization and throughput, while from the user perspective it must ensure fast response times and fairness when allocating resources across jobs. Parallel jobs can be divide...

  13. After Installation: Ubiquitous Computing and High School Science in Three Experienced, High-Technology Schools

    Science.gov (United States)

    Drayton, Brian; Falk, Joni K.; Stroud, Rena; Hobbs, Kathryn; Hammerman, James

    2010-01-01

    There are few studies of the impact of ubiquitous computing on high school science, and the majority of studies of ubiquitous computing report only on the early stages of implementation. The present study presents data on 3 high schools with carefully elaborated ubiquitous computing systems that have gone through at least one "obsolescence cycle"…

  14. Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992

    Science.gov (United States)

    Botts, Michael E.; Phillips, Ron J.; Parker, John V.; Wright, Patrick D.

    1992-01-01

    Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. A SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.

  15. Lightweight Provenance Service for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Dong; Chen, Yong; Carns, Philip; Jenkins, John; Ross, Robert

    2017-09-09

    Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.

  16. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration; Velikhov, Vasily; Konoplich, Rostislav

    2017-01-01

    The physics of the Higgs boson is one of the most important and promising fields of study in modern high energy physics. It is important to note that GRID computing resources are becoming strictly limited due to the increasing amount of statistics required for physics analyses and the unprecedented LHC performance. One possibility for addressing the shortfall of computing resources is the usage of computing institutes' clusters, commercial computing resources and supercomputers. To perform precision measurements of the Higgs boson properties in these realities, effective instruments to simulate kinematic distributions of signal events are also highly required. In this talk we give a brief description of the modern distribution reconstruction method called Morphing and perform a few efficiency tests to demonstrate its potential. These studies have been performed on the WLCG and the Kurchatov Institute's Data Processing Center, including a Tier-1 GRID site and a supercomputer as well. We also analyze the CPU efficienc...

  17. A Primer on High-Throughput Computing for Genomic Selection

    Directory of Open Access Journals (Sweden)

    Xiao-Lin eWu

    2011-02-01

    Full Text Available High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit (GPU) provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin – Madison, which can be leveraged for genomic selection in terms of central processing unit (CPU) capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of
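
    To make the pipelining idea concrete, the hedged sketch below evaluates several traits in parallel with Python's multiprocessing module instead of sequentially; the trait list and evaluation routine are invented, and a real genomic-selection pipeline would dispatch such tasks to an HTC cluster rather than to local processes.

      # Toy illustration of raising throughput by evaluating traits in parallel
      # instead of sequentially. Trait names and the evaluation routine are
      # placeholders; an HTC cluster would normally run these as separate jobs.
      import time
      from multiprocessing import Pool

      TRAITS = ["milk_yield", "fertility", "longevity", "feed_efficiency"]

      def evaluate_trait(trait):
          # Placeholder for fitting a genomic prediction model for one trait.
          time.sleep(1.0)          # pretend this is an expensive model fit
          return trait, "model fitted"

      if __name__ == "__main__":
          start = time.time()
          with Pool(processes=len(TRAITS)) as pool:
              results = pool.map(evaluate_trait, TRAITS)
          print(results)
          print(f"wall time: {time.time() - start:.1f} s (vs ~{len(TRAITS)} s sequentially)")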

  18. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Full Text Available Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
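
    A bare-bones sketch of the hybrid message-passing/multi-threading idea follows: MPI ranks each own a block of rows of A, while the per-rank product relies on NumPy's multithreaded BLAS. It is an illustrative toy under those assumptions, not the algorithm from the article.

      # Bare-bones hybrid distributed-memory matrix multiply: each MPI rank owns a
      # block of rows of A and computes its block of C = A @ B, while the local
      # product uses NumPy's multithreaded BLAS. Illustrative only; the global
      # size n must be divisible by the number of ranks.
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n = 2048                                   # global problem size
      rows = n // size

      np.random.seed(rank)
      A_local = np.random.rand(rows, n)          # this rank's block of rows of A
      B = np.empty((n, n))
      if rank == 0:
          B = np.random.rand(n, n)
      comm.Bcast(B, root=0)                      # every rank needs all of B

      C_local = A_local @ B                      # threaded BLAS does the local work

      C = comm.gather(C_local, root=0)           # assemble the result on rank 0
      if rank == 0:
          C = np.vstack(C)
          print("global C shape:", C.shape)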

  19. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

    Energy Technology Data Exchange (ETDEWEB)

    Corones, James [Krell Institute

    2013-09-23

    High-End computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington DC so that mission agency staff, scientists, and industry can come together with White House, Congressional and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data, address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

  20. Amenorrhea, ptosis and high insulin requirement in a young girl.

    Science.gov (United States)

    Hari Kumar, K V S; Kumar, Sandeep

    2016-01-01

    Lipodystrophy is an uncommon condition leading to excessive insulin requirement and menstrual abnormalities in young girls with diabetes. Neurological symptoms are uncommon in patients of generalized or partial lipodystrophy. We recently encountered a young girl, who presented with high insulin requirement, amenorrhea and neurological symptoms. Detailed evaluation led to the diagnosis of congenital lipodystrophy and we describe the same in this report. We also highlight the atypical features of the congenital lipodystrophy and the reasons for the excessive insulin requirement in patients with diabetes mellitus.

  1. NCI Workshop Report: Clinical and Computational Requirements for Correlating Imaging Phenotypes with Genomics Signatures

    Directory of Open Access Journals (Sweden)

    Rivka Colen

    2014-10-01

    Full Text Available The National Cancer Institute (NCI) Cancer Imaging Program organized two related workshops on June 26–27, 2013, entitled “Correlating Imaging Phenotypes with Genomics Signatures Research” and “Scalable Computational Resources as Required for Imaging-Genomics Decision Support Systems.” The first workshop focused on clinical and scientific requirements, exploring our knowledge of phenotypic characteristics of cancer biological properties to determine whether the field is sufficiently advanced to correlate with imaging phenotypes that underpin genomics and clinical outcomes, and exploring new scientific methods to extract phenotypic features from medical images and relate them to genomics analyses. The second workshop focused on computational methods that explore informatics and computational requirements to extract phenotypic features from medical images and relate them to genomics analyses and improve the accessibility and speed of dissemination of existing NIH resources. These workshops linked clinical and scientific requirements of currently known phenotypic and genotypic cancer biology characteristics with imaging phenotypes that underpin genomics and clinical outcomes. The group generated a set of recommendations to NCI leadership and the research community that encourage and support development of the emerging radiogenomics research field to address short- and longer-term goals in cancer research.

  2. Computer-aided design of control systems to meet many requirements

    Science.gov (United States)

    Schy, A. A.; Adams, W. M., Jr.; Johnson, K. G.

    1974-01-01

    A method is described for using nonlinear programming in the computer-aided design of airplane control systems. It is assumed that the quality of such systems depends on many criteria. These criteria are included in the constraints vector (instead of attempting to combine them into a single scalar criterion, as is usually done), and the design proceeds through a sequence of nonlinear programming solutions in which the designer varies the specification of sets of requirements levels. The method is applied to design of a lateral stability augmentation system (SAS) for a fighter airplane, in which the requirements vector is chosen from the official handling qualities specifications. Results are shown for several simple SAS configurations designed to obtain desirable handling qualities over all design flight conditions with minimum feedback gains. The choice of the final design for each case is not unique but depends on the designer's decision as to which achievable set of requirements levels represents the best for that system. Results indicate that it may be possible to design constant parameter SAS which can satisfy the most stringent handling qualities requirements for fighter airplanes in all flight conditions. The role of the designer as a decision maker, interacting with the computer program, is discussed. Advantages of this type of designer-computer interaction are emphasized. Desirable extensions of the method are indicated.
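
    The flavour of the approach, treating each requirement as a separate constraint while minimizing the feedback gains, can be sketched with a modern solver as below. The two-gain system and constraint levels are invented for illustration and are unrelated to the original lateral SAS design.

      # Sketch of the "requirements as constraints" idea with SciPy's SLSQP:
      # minimize the size of the feedback gains subject to each requirement
      # meeting its specified level. The toy "requirement" functions below are
      # invented for illustration; they are not the handling-qualities criteria
      # or aircraft model used in the original study.
      import numpy as np
      from scipy.optimize import minimize

      def damping_ratio(gains):
          k1, k2 = gains
          return 0.2 + 0.15 * k1 + 0.05 * k2      # toy surrogate model

      def roll_time_constant(gains):
          k1, k2 = gains
          return 1.5 - 0.3 * k1 - 0.1 * k2        # toy surrogate model (seconds)

      objective = lambda g: np.sum(np.square(g))  # keep feedback gains small

      constraints = [
          {"type": "ineq", "fun": lambda g: damping_ratio(g) - 0.4},      # zeta >= 0.4
          {"type": "ineq", "fun": lambda g: 1.0 - roll_time_constant(g)}, # tau <= 1.0 s
      ]

      result = minimize(objective, x0=[0.0, 0.0], method="SLSQP", constraints=constraints)
      print("gains:", result.x, "requirements met:", result.success)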

  3. A first attempt to bring computational biology into advanced high school biology classrooms.

    Directory of Open Access Journals (Sweden)

    Suzanne Renick Gallagher

    2011-10-01

    Full Text Available Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element to teach genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.

  4. A first attempt to bring computational biology into advanced high school biology classrooms.

    Science.gov (United States)

    Gallagher, Suzanne Renick; Coon, William; Donley, Kristin; Scott, Abby; Goldberg, Debra S

    2011-10-01

    Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element to teach genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.

  5. A review of High Performance Computing foundations for scientists

    CERN Document Server

    Ibáñez, Pablo García-Risueño Pablo E

    2012-01-01

    The increase of existing computational capabilities has made simulation emerge as a third discipline of Science, lying midway between experimental and purely theoretical branches [1, 2]. Simulation enables the evaluation of quantities which otherwise would not be accessible, helps to improve experiments and provides new insights on systems which are analysed [3-6]. Knowing the fundamentals of computation can be very useful for scientists, for it can help them to improve the performance of their theoretical models and simulations. This review includes some technical essentials that can be useful to this end, and it is devised as a complement for researchers whose education is focused on scientific issues and not on technological aspects. In this document we attempt to discuss the fundamentals of High Performance Computing (HPC) [7] in a way which is easy to understand without much previous background. We sketch the way standard computers and supercomputers work, as well as discuss distributed computing and di...

  6. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  7. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  8. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    Science.gov (United States)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

    The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model's computational domain covers Europe and some neighbouring parts of the Atlantic Ocean, Asia and Africa. If the DEM is to be applied on fine grids, its discretization leads to a huge computational problem, which implies that such a model must be run only on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, comparison results from running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.) and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.) will be presented. The main idea in the parallel version of DEM is a domain-partitioning approach. The effective use of the cache and hierarchical memories of modern computers, as well as the performance, speed-ups and efficiency achieved, will be discussed. The parallel code of DEM, created using the MPI standard library, appears to be highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the computer model output are briefly presented.
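
    A minimal sketch of the domain-partitioning idea is shown below with mpi4py: a one-dimensional grid is split among ranks and each rank exchanges one ghost cell with its neighbours every step. The update is a trivial three-point average standing in for the real DEM transport and chemistry, and all sizes are arbitrary.

      # Minimal 1-D domain-partitioning sketch with mpi4py: each rank owns a slab
      # of the grid plus one ghost cell on each side, and exchanges ghost cells
      # with its neighbours every step. The "update" is a placeholder three-point
      # average, not the actual DEM advection/chemistry. Boundary ranks keep
      # their outer ghost cells fixed (sends/receives to MPI.PROC_NULL are no-ops).
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_local = 100                                # interior cells owned by this rank
      u = np.zeros(n_local + 2)                    # +2 ghost cells
      u[1:-1] = rank                               # arbitrary initial data

      left = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      for step in range(10):
          # Exchange ghost cells with neighbouring subdomains.
          comm.Sendrecv(sendbuf=u[1:2],   dest=left,  recvbuf=u[-1:], source=right)
          comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
          # Local update on interior cells (placeholder three-point average).
          u[1:-1] = (u[:-2] + u[1:-1] + u[2:]) / 3.0

      print(f"rank {rank}: mean interior value {u[1:-1].mean():.3f}")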

  9. Requirements for fault-tolerant factoring on an atom-optics quantum computer.

    Science.gov (United States)

    Devitt, Simon J; Stephens, Ashley M; Munro, William J; Nemoto, Kae

    2013-01-01

    Quantum information processing and its associated technologies have reached a pivotal stage in their development, with many experiments having established the basic building blocks. Moving forward, the challenge is to scale up to larger machines capable of performing computational tasks not possible today. This raises questions that need to be urgently addressed, such as what resources these machines will consume and how large will they be. Here we estimate the resources required to execute Shor's factoring algorithm on an atom-optics quantum computer architecture. We determine the runtime and size of the computer as a function of the problem size and physical error rate. Our results suggest that once the physical error rate is low enough to allow quantum error correction, optimization to reduce resources and increase performance will come mostly from integrating algorithms and circuits within the error correction environment, rather than from improving the physical hardware.

  10. Computer Literacy and the Construct Validity of a High-Stakes Computer-Based Writing Assessment

    Science.gov (United States)

    Jin, Yan; Yan, Ming

    2017-01-01

    One major threat to validity in high-stakes testing is construct-irrelevant variance. In this study we explored whether the transition from a paper-and-pencil to a computer-based test mode in a high-stakes test in China, the College English Test, has brought about variance irrelevant to the construct being assessed in this test. Analyses of the…

  11. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    Science.gov (United States)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long

  12. The Role of Computing in High-Energy Physics.

    Science.gov (United States)

    Metcalf, Michael

    1983-01-01

    Examines present and future applications of computers in high-energy physics. Areas considered include high-energy physics laboratories, accelerators, detectors, networking, off-line analysis, software guidelines, event sizes and volumes, graphics applications, event simulation, theoretical studies, and future trends. (JN)

  13. Proceedings from the conference on high speed computing: High speed computing and national security

    Energy Technology Data Exchange (ETDEWEB)

    Hirons, K.P.; Vigil, M.; Carlson, R. [comps.

    1997-07-01

    This meeting covered the following topics: technologies/national needs/policies: past, present and future; information warfare; crisis management/massive data systems; risk assessment/vulnerabilities; Internet law/privacy and rights of society; challenges to effective ASCI programmatic use of 100 TFLOPs systems; and new computing technologies.

  14. HIGH RESOLUTION RESISTIVITY LEAK DETECTION DATA PROCESSING & EVALUATION METHODS & REQUIREMENTS

    Energy Technology Data Exchange (ETDEWEB)

    SCHOFIELD JS

    2007-10-04

    This document has two purposes: (1) describe how data generated by High Resolution Resistivity (HRR) leak detection (LD) systems deployed during single-shell tank (SST) waste retrieval operations are processed and evaluated; and (2) provide the basic review requirements for HRR data when HRR is deployed as a leak detection method during SST waste retrievals.

  15. High-Precision Floating-Point Arithmetic in ScientificComputation

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2004-12-31

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required: some of these applications require roughly twice this level; others require four times; while still others require hundreds or more digits to obtain numerically meaningful results. Such calculations have been facilitated by new high-precision software packages that include high-level language translation modules to minimize the conversion effort. These activities have yielded a number of interesting new scientific results in fields as diverse as quantum theory, climate modeling and experimental mathematics, a few of which are described in this article. Such developments suggest that in the future, the numeric precision used for a scientific computation may be as important to the program design as are the algorithms and data structures.
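
    For readers who want to experiment, arithmetic of the kind described is available in Python through the mpmath package; the short example below evaluates an expression at 50 significant digits, far beyond the roughly 16 digits of IEEE 64-bit arithmetic. It is a generic illustration, not one of the packages discussed in the article.

      # Generic illustration of high-precision arithmetic with mpmath, compared
      # against standard IEEE 64-bit (double) precision.
      import math
      from mpmath import mp, mpf, sin, sqrt

      mp.dps = 50                          # work with 50 significant decimal digits

      x_double = math.sin(math.sqrt(2.0)) ** 2
      x_high   = sin(sqrt(mpf(2))) ** 2

      print("double precision  :", repr(x_double))
      print("50-digit precision:", x_high)
      print("difference        :", abs(mpf(x_double) - x_high))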

  16. Nonlinear dynamics of high-power ultrashort laser pulses: exaflop computations on a laboratory computer station and subcycle light bullets

    Science.gov (United States)

    Voronin, A. A.; Zheltikov, A. M.

    2016-09-01

    The propagation of high-power ultrashort light pulses involves intricate nonlinear spatio-temporal dynamics where various spectral-temporal field transformation effects are strongly coupled to the beam dynamics, which, in turn, varies from the leading to the trailing edge of the pulse. Analysis of this nonlinear dynamics, accompanied by spatial instabilities, beam breakup into multiple filaments, and unique phenomena leading to the generation of extremely short optical field waveforms, is equivalent in its computational complexity to a simulation of the time evolution of a few billion-dimensional physical system. Such an analysis requires exaflops of computational operations and is usually performed on high-performance supercomputers. Here, we present methods of physical modeling and numerical analysis that allow problems of this class to be solved on a laboratory computer boosted by a cluster of graphic accelerators. Exaflop computations performed with the application of these methods reveal new unique phenomena in the spatio-temporal dynamics of high-power ultrashort laser pulses. We demonstrate that unprecedentedly short light bullets can be generated as a part of that dynamics, providing optical field localization in both space and time through a delicate balance between dispersion and nonlinearity with simultaneous suppression of diffraction-induced beam divergence due to the joint effect of Kerr and ionization nonlinearities.

  17. High-speed linear optics quantum computing using active feed-forward.

    Science.gov (United States)

    Prevedel, Robert; Walther, Philip; Tiefenbacher, Felix; Böhi, Pascal; Kaltenbaek, Rainer; Jennewein, Thomas; Zeilinger, Anton

    2007-01-04

    As information carriers in quantum computing, photonic qubits have the advantage of undergoing negligible decoherence. However, the absence of any significant photon-photon interaction is problematic for the realization of non-trivial two-qubit gates. One solution is to introduce an effective nonlinearity by measurements resulting in probabilistic gate operations. In one-way quantum computation, the random quantum measurement error can be overcome by applying a feed-forward technique, such that the future measurement basis depends on earlier measurement results. This technique is crucial for achieving deterministic quantum computation once a cluster state (the highly entangled multiparticle state on which one-way quantum computation is based) is prepared. Here we realize a concatenated scheme of measurement and active feed-forward in a one-way quantum computing experiment. We demonstrate that, for a perfect cluster state and no photon loss, our quantum computation scheme would operate with good fidelity and that our feed-forward components function with very high speed and low error for detected photons. With present technology, the individual computational step (in our case the individual feed-forward cycle) can be operated in less than 150 ns using electro-optical modulators. This is an important result for the future development of one-way quantum computers, whose large-scale implementation will depend on advances in the production and detection of the required highly entangled cluster states.

  18. Big Data and High-Performance Computing in Global Seismology

    Science.gov (United States)

    Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Peter, Daniel; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen

    2014-05-01

    Much of our knowledge of Earth's interior is based on seismic observations and measurements. Adjoint methods provide an efficient way of incorporating 3D full wave propagation in iterative seismic inversions to enhance tomographic images and thus our understanding of processes taking place inside the Earth. Our aim is to take adjoint tomography, which has been successfully applied to regional and continental scale problems, further to image the entire planet. This is one of the extreme imaging challenges in seismology, mainly due to the intense computational requirements and the vast amount of high-quality seismic data that can potentially be assimilated. We have started low-resolution inversions (T > 30 s and T > 60 s for body and surface waves, respectively) with a limited data set (253 carefully selected earthquakes and seismic data from permanent and temporary networks) on Oak Ridge National Laboratory's Cray XK7 "Titan" system. Recent improvements in our 3D global wave propagation solvers, such as a GPU version of the SPECFEM3D_GLOBE package, will enable us to perform higher-resolution (T > 9 s) and longer duration (~180 m) simulations that take advantage of high-frequency body waves and major-arc surface waves, thereby improving the imbalanced ray coverage that results from the uneven global distribution of sources and receivers. Our ultimate goal is to use all earthquakes in the global CMT catalogue within the magnitude range of our interest and data from all available seismic networks. To take full advantage of computational resources, we need a solid framework to manage big data sets during numerical simulations, pre-processing (i.e., data requests and quality checks, processing data, window selection, etc.) and post-processing (i.e., pre-conditioning and smoothing kernels, etc.). We address the bottlenecks in our global seismic workflow, which mainly come from heavy I/O traffic during simulations and the pre- and post-processing stages, by defining new data

  19. Computer Security: SAHARA - Security As High As Reasonably Achievable

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2015-01-01

    History has shown us time and again that our computer systems, computing services and control systems have digital security deficiencies. Too often we deploy stop-gap solutions and improvised hacks, or we just accept that it is too late to change things.    In my opinion, this blatantly contradicts the professionalism we show in our daily work. Other priorities and time pressure force us to ignore security or to consider it too late to do anything… but we can do better. Just look at how “safety” is dealt with at CERN! “ALARA” (As Low As Reasonably Achievable) is the objective set by the CERN HSE group when considering our individual radiological exposure. Following this paradigm, and shifting it from CERN safety to CERN computer security, would give us “SAHARA”: “Security As High As Reasonably Achievable”. In other words, all possible computer security measures must be applied, so long as ...

  20. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; and data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, and "big data", concluding with an analysis example.

  1. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  2. 10 CFR 727.5 - What acknowledgment and consent is required for access to information on DOE computers?

    Science.gov (United States)

    2010-01-01

    ... information on DOE computers? 727.5 Section 727.5 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS § 727.5 What acknowledgment and consent is required for access to information on DOE computers? An individual may not be granted access to information on a DOE...

  3. Data Mining Techniques for Identifying Students at Risk of Failing a Computer Proficiency Test Required for Graduation

    Science.gov (United States)

    Tsai, Chih-Fong; Tsai, Ching-Tzu; Hung, Chia-Sheng; Hwang, Po-Sen

    2011-01-01

    Enabling undergraduate students to develop basic computing skills is an important issue in higher education. As a result, some universities have developed computer proficiency tests, which aim to assess students' computer literacy. Generally, students are required to pass such tests in order to prove that they have a certain level of computer…

  4. Nutritional and fluid requirements: high-output stomas.

    Science.gov (United States)

    Medlin, Sophie

    Based on the current available evidence, this article explores the nutritional management of those with a high-output stoma. The main alterations required to the intake of patients with a high-output stoma include the use of an oral rehydration solution to ensure optimum absorption of fluid and sodium, and a high-calorie, high-protein diet, with the aim of optimizing nutritional status. Diet advice should be delivered by a dietitian with experience in managing these complex patients. Monitoring of electrolytes and micronutrients is essential, and long-term follow up from a multidisciplinary nutrition support team is invaluable in coordinating this. Patients with high-output stomas can enjoy good quality of life and long-term health if their condition is managed effectively by a well-organized multidisciplinary team.

  5. Challenges of high dam construction to computational mechanics

    Institute of Scientific and Technical Information of China (English)

    ZHANG Chuhan

    2007-01-01

    The current situation and growing prospects of China's hydro-power development and high dam construction are reviewed, giving emphasis to key issues for the safety evaluation of large dams and hydro-power plants, especially those associated with the application of state-of-the-art computational mechanics. These include, but are not limited to: stress and stability analysis of dam foundations under external loads; earthquake behavior of dam-foundation-reservoir systems; mechanical properties of mass concrete for dams; high velocity flow and energy dissipation for high dams; scientific and technical problems of hydro-power plants and underground structures; and newly developed dam types such as Roller-Compacted Concrete (RCC) dams and Concrete-Face Rock-fill (CFR) dams. Some examples demonstrating successful utilization of computational mechanics in high dam engineering are given, including seismic nonlinear analysis for arch dam foundations, nonlinear fracture analysis of arch dams under reservoir loads, and failure analysis of arch dam foundations. To make more use of computational mechanics in high dam engineering, much future research is needed, including different computational methods, numerical models and solution schemes, as well as verification through experimental tests and field measurements.

  6. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and

  7. Profiles of Motivated Self-Regulation in College Computer Science Courses: Differences in Major versus Required Non-Major Courses

    Science.gov (United States)

    Shell, Duane F.; Soh, Leen-Kiat

    2013-01-01

    The goal of the present study was to utilize a profiling approach to understand differences in motivation and strategic self-regulation among post-secondary STEM students in major versus required non-major computer science courses. Participants were 233 students from required introductory computer science courses (194 men; 35 women; 4 unknown) at…

  8. 40 CFR 270.215 - How are time periods in the requirements in this subpart and my RAP computed?

    Science.gov (United States)

    2010-07-01

    ... requirements in this subpart and my RAP computed? 270.215 Section 270.215 Protection of Environment... HAZARDOUS WASTE PERMIT PROGRAM Remedial Action Plans (RAPs) Operating Under Your Rap § 270.215 How are time periods in the requirements in this subpart and my RAP computed? (a) Any time period scheduled to begin on...

  9. A primer on high-throughput computing for genomic selection.

    Science.gov (United States)

    Wu, Xiao-Lin; Beissinger, Timothy M; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J M; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin-Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized
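    As a minimal sketch of the batch-parallel prediction step described above (not the authors' pipelines), the following example scores batches of selection candidates against pre-estimated marker effects with a process pool; the marker count, batch sizes and the random "trained" effects are placeholders.

```python
import numpy as np
from multiprocessing import Pool

# Minimal sketch of batch-parallel genomic prediction (not the authors' code):
# given marker effects estimated from training data, score selection candidates
# in independent batches, as one would on an HTC cluster where each batch
# becomes a separate job.

N_MARKERS = 5000
rng = np.random.default_rng(1)
effects = rng.normal(scale=0.01, size=N_MARKERS)       # "trained" marker effects

def score_batch(genotypes):
    # genotypes: (n_candidates, N_MARKERS) coded 0/1/2
    return genotypes @ effects                          # predicted genetic merit

if __name__ == "__main__":
    batches = [rng.integers(0, 3, size=(200, N_MARKERS)) for _ in range(8)]
    with Pool() as pool:
        merits = pool.map(score_batch, batches)
    print("best candidate per batch:", [float(m.max()) for m in merits])
```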

  10. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  11. ABOUT THE SUITABILITY OF CLOUDS IN HIGH-PERFORMANCE COMPUTING

    Directory of Open Access Journals (Sweden)

    Harald Richter

    2016-01-01

    Cloud computing has become the ubiquitous computing and storage paradigm. It is also attractive for scientists, because they no longer have to maintain their own IT infrastructure, but can outsource it to a Cloud Service Provider of their choice. However, for the case of High-Performance Computing (HPC) in a cloud, as it is needed in simulations or for Big Data analysis, things get more intricate, because HPC codes must stay highly efficient, even when executed by many virtual cores (vCPUs). Older clouds or new standard clouds can fulfil this only under the special precautions given in this article. The results can be extrapolated to cloud OSes other than OpenStack and to codes other than OpenFOAM, which were used as examples.

  12. An Enhanced Tree-Shaped Adachi-Like Chaotic Neural Network Requiring Linear-Time Computations

    Science.gov (United States)

    Qin, Ke; Oommen, B. John

    The Adachi Neural Network (AdNN) [1-5] is a fascinating Neural Network (NN) which has been shown to possess chaotic properties, and to also demonstrate Associative Memory (AM) and Pattern Recognition (PR) characteristics. Variants of the AdNN [6,7] have also been used to obtain other PR phenomena, and even blurring. A significant problem associated with the AdNN and its variants is that all of them require a quadratic number of computations. This is essentially because their NNs are completely connected graphs. In this paper we consider how the computations can be reduced to merely a linear number. To do this, we extract from the original complete graph one of its spanning trees. We then compute the weights for this spanning tree in such a manner that the modified tree-based NN has approximately the same input-output characteristics; the new weights are themselves calculated using a gradient-based algorithm. By a detailed experimental analysis, we show that the new linear-time AdNN-like network possesses chaotic and PR properties for different settings. As far as we know, such a tree-based AdNN has not been reported, and the results given here are novel.
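    A small sketch of the structural idea only: the paper re-trains the tree weights with a gradient-based algorithm, which is not reproduced here; the example below merely shows how a fully connected weight matrix with a quadratic number of edges collapses to a spanning tree with N-1 edges, using SciPy's minimum spanning tree on inverse weights (all matrices are synthetic).

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Sketch of the structural idea only: the paper re-trains the tree weights
# with a gradient-based algorithm, which is not reproduced here. We simply
# replace the fully connected weight matrix, with O(N^2) edges, by a spanning
# tree with N-1 edges, keeping the strongest connections (largest weights).

N = 100
rng = np.random.default_rng(0)
W = rng.random((N, N))
W = (W + W.T) / 2                         # symmetric "synaptic" weight matrix
np.fill_diagonal(W, 0.0)

dist = np.zeros_like(W)
mask = W > 0
dist[mask] = 1.0 / W[mask]                # larger weight -> shorter distance
tree = minimum_spanning_tree(dist)        # sparse result with N - 1 edges

print("edges: complete graph", N * (N - 1) // 2, "-> spanning tree", tree.nnz)
```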

  13. High-performance computing, high-speed networks, and configurable computing environments: progress toward fully distributed computing.

    Science.gov (United States)

    Johnston, W E; Jacobson, V L; Loken, S C; Robertson, D W; Tierney, B L

    1992-01-01

    The next several years will see the maturing of a collection of technologies that will enable fully and transparently distributed computing environments. Networks will be used to configure independent computing, storage, and I/O elements into "virtual systems" that are optimal for solving a particular problem. This environment will make the most powerful computing systems those that are logically assembled from network-based components and will also make those systems available to a widespread audience. Anticipating that the necessary technology and communications infrastructure will be available in the next 3 to 5 years, we are developing and demonstrating prototype applications that test and exercise the currently available elements of this configurable environment. The Lawrence Berkeley Laboratory (LBL) Information and Computing Sciences and Research Medicine Divisions have collaborated with the Pittsburgh Supercomputer Center to demonstrate one distributed application that illuminates the issues and potential of using networks to configure virtual systems. This application allows the interactive visualization of large three-dimensional (3D) scalar fields (voxel data sets) by using a network-based configuration of heterogeneous supercomputers and workstations. The specific test case is visualization of 3D magnetic resonance imaging (MRI) data. The virtual system architecture consists of a Connection Machine-2 (CM-2) that performs surface reconstruction from the voxel data, a Cray Y-MP that renders the resulting geometric data into an image, and a workstation that provides the display of the image and the user interface for specifying the parameters for the geometry generation and 3D viewing. These three elements are configured into a virtual system by using several different network technologies. This paper reviews the current status of the software, hardware, and communications technologies that are needed to enable this configurable environment. These

  14. Developing a High Performance Software Library with MPI and CUDA for Matrix Computations

    Directory of Open Access Journals (Sweden)

    Bogdan Oancea

    2014-04-01

    Nowadays, the paradigm of parallel computing is changing. CUDA is now a popular programming model for general purpose computations on GPUs and a great number of applications were ported to CUDA, obtaining speedups of orders of magnitude compared to optimized CPU implementations. Hybrid approaches that combine the message passing model with the shared memory model for parallel computing are a solution for very large applications. We considered a heterogeneous cluster that combines CPU and GPU computations using MPI and CUDA for developing a high performance linear algebra library. Our library deals with large linear system solvers because they are a common problem in the fields of science and engineering. Direct methods for computing the solution of such systems can be very expensive due to high memory requirements and computational cost. An efficient alternative is iterative methods, which compute only an approximation of the solution. In this paper we present an implementation of a library that uses a hybrid model of computation using MPI and CUDA, implementing both direct and iterative linear system solvers. Our library implements LU and Cholesky factorization based solvers and some of the non-stationary iterative methods using the MPI/CUDA combination. We compared the performance of our MPI/CUDA implementation with classic programs written to be run on a single CPU.
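    The library above targets MPI + CUDA; as a CPU-only NumPy sketch of one of the non-stationary iterative methods it mentions, the conjugate gradient solver below shows why iteration needs only matrix-vector products rather than an expensive factorization (the matrix size and conditioning are artificial).

```python
import numpy as np

# CPU-only NumPy sketch of a non-stationary iterative solver (conjugate
# gradient) of the kind the library above implements with MPI + CUDA.
# The matrix here is small and dense for clarity; the point is that the
# solver only needs matrix-vector products, not an O(n^3) factorization.

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

rng = np.random.default_rng(0)
M = rng.normal(size=(200, 200))
A = M @ M.T + 200 * np.eye(200)        # symmetric positive definite test matrix
b = rng.normal(size=200)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```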

  15. Exploring the structural requirements for jasmonates and related compounds as novel plant growth regulators: a current computational perspective.

    Science.gov (United States)

    Chen, Ke-Xian; Li, Zu-Guang

    2009-11-01

    Jasmonates and related compounds have been highlighted recently in the fields of plant physiology and plant molecular biology due to their significant regulatory roles in the signaling pathways for diverse aspects of plant development and survival. Though a considerable number of studies concerning their biological effects in different plants have been reported, the molecular details of the signaling mechanism are still poorly understood. This review sheds new light on the structural requirements for the bioactivity/properties of jasmonic acid derivatives from a current computational perspective, which differs from previous research that mainly focused on their biological evaluation, gene and metabolic regulation, and the enzymes in their biosynthesis. The computational results may contribute to further understanding the mechanism of drug-receptor interactions in their signaling pathway and to designing novel plant growth regulators as highly effective ecological pesticides.

  16. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  17. Democratizing Computer Science Knowledge: Transforming the Face of Computer Science through Public High School Education

    Science.gov (United States)

    Ryoo, Jean J.; Margolis, Jane; Lee, Clifford H.; Sandoval, Cueponcaxochitl D. M.; Goode, Joanna

    2013-01-01

    Despite the fact that computer science (CS) is the driver of technological innovations across all disciplines and aspects of our lives, including participatory media, high school CS too commonly fails to incorporate the perspectives and concerns of low-income students of color. This article describes a partnership program -- Exploring Computer…

  18. Understanding Computer Forensics Requirements in China Via The “Panda Burning Incense” Virus Case

    Directory of Open Access Journals (Sweden)

    K P Chow

    2014-09-01

    In March 2012, Mainland China amended its Criminal Procedure Law, which includes the introduction of a new type of evidence, i.e., digital evidence, to the court of law. To better understand the development of computer forensics and digital evidence in Mainland China, this paper discusses the Chinese legal system in relation to digital investigation and how the current legal requirements affect the existing legal and technical usage of digital evidence in legal proceedings. Through studying the famous "Panda Burning Incense" (Worm.WhBoy.cw) virus case that happened in 2007, this paper aims to provide a better understanding of how to properly conduct computer forensics examinations and present digital evidence in a court of law in Mainland China.

  19. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  20. High Performance Computing tools for the Integrated Tokamak Modelling project

    Energy Technology Data Exchange (ETDEWEB)

    Guillerminet, B., E-mail: bernard.guillerminet@cea.f [Association Euratom-CEA sur la Fusion, IRFM, DSM, CEA Cadarache (France); Plasencia, I. Campos [Instituto de Fisica de Cantabria (IFCA), CSIC, Santander (Spain); Haefele, M. [Universite Louis Pasteur, Strasbourg (France); Iannone, F. [EURATOM/ENEA Fusion Association, Frascati (Italy); Jackson, A. [University of Edinburgh (EPCC) (United Kingdom); Manduchi, G. [EURATOM/ENEA Fusion Association, Padova (Italy); Plociennik, M. [Poznan Supercomputing and Networking Center (PSNC) (Poland); Sonnendrucker, E. [Universite Louis Pasteur, Strasbourg (France); Strand, P. [Chalmers University of Technology (Sweden); Owsiak, M. [Poznan Supercomputing and Networking Center (PSNC) (Poland)

    2010-07-15

    Fusion Modelling and Simulation are very challenging, and the High Performance Computing issues are addressed here. Toolsets for job launching and scheduling, data communication and visualization have been developed by the EUFORIA project and used with a plasma edge simulation code.

  1. Artificial Intelligence and the High School Computer Curriculum.

    Science.gov (United States)

    Dillon, Richard W.

    1993-01-01

    Describes a four-part curriculum that can serve as a model for incorporating artificial intelligence (AI) into the high school computer curriculum. The model includes examining questions fundamental to AI, creating and designing an expert system, language processing, and creating programs that integrate machine vision with robotics and…

  2. Seeking Solution: High-Performance Computing for Science. Background Paper.

    Science.gov (United States)

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    This is the second publication from the Office of Technology Assessment's assessment on information technology and research, which was requested by the House Committee on Science and Technology and the Senate Committee on Commerce, Science, and Transportation. The first background paper, "High Performance Computing & Networking for…

  3. Computer science of the high performance; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High performance computing is taking shape as a powerful accelerator of the process of innovation, drastically reducing the waiting times for access to results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resources management or the simulation of complex processes in a wide variety of industries. (Author)

  4. Replica-Based High-Performance Tuple Space Computing

    DEFF Research Database (Denmark)

    Andric, Marina; De Nicola, Rocco; Lluch Lafuente, Alberto

    2015-01-01

    We present the tuple-based coordination language RepliKlaim, which enriches Klaim with primitives for replica-aware coordination. Our overall goal is to offer suitable solutions to the challenging problems of data distribution and locality in large-scale high performance computing. In particular,...

  5. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts that illustrate system design and performance.

  6. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    Science.gov (United States)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly expanding field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and the massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  7. Hot Chips and Hot Interconnects for High End Computing Systems

    Science.gov (United States)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in IBM SP 3 and IBM SP 4 systems; 3. The Intel Itanium and Xeon, used in the SGI Altix systems and clusters respectively; 4. The IBM System-on-a-Chip used in the IBM BlueGene/L; 5. The HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. The SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in the NEC SX-6/7; 8. The Power 4+ processor, which is used in the Hitachi SR11000; 9. An NEC proprietary processor, which is used in the Earth Simulator. The IBM POWER5 and Red Storm computing systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).

  8. Development of utility generic functional requirements for electronic work packages and computer-based procedures

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-06-01

    The Nuclear Electronic Work Packages - Enterprise Requirements (NEWPER) initiative is a step toward a vision of implementing an eWP framework that includes many types of eWPs. This will enable immediate paper-related cost savings in work management and provide a path to future labor efficiency gains through enhanced integration and process improvement in support of the Nuclear Promise (Nuclear Energy Institute 2016). The NEWPER initiative was organized by the Nuclear Information Technology Strategic Leadership (NITSL) group, which is an organization that brings together leaders from the nuclear utility industry and regulatory agencies to address issues involved with information technology used in nuclear-power utilities. NITSL strives to maintain awareness of industry information technology-related initiatives and events and communicates those events to its membership. NITSL and LWRS Program researchers have been coordinating activities, including joint organization of NEWPER-related meetings and report development. The main goal of the NEWPER initiative was to develop a set of utility generic functional requirements for eWP systems. This set of requirements will support each utility in its process of identifying plant-specific functional and non-functional requirements. The NEWPER initiative has 140 members; the largest groups of members are 19 commercial U.S. nuclear utilities and eleven of the most prominent vendors of eWP solutions. Through the NEWPER initiative two sets of functional requirements were developed: functional requirements for electronic work packages and functional requirements for computer-based procedures. This paper describes the development process and summarizes the requirements.

  9. Lattice Boltzmann Method used for the aircraft characteristics computation at high angle of attack

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The traditional Finite Volume Method (FVM) and the Lattice Boltzmann Method (LBM) are both used to compute the high-angle-of-attack aerodynamic characteristics of the benchmark aircraft model named CT-1. Even though the software is intended for flows on the order of Ma < 0.4, the simulation at Ma = 0.5 is run in PowerFLOW after theoretical analysis. The consistency with the wind tunnel testing is satisfactory, especially for the LBM, which can produce excellent results at high angle of attack. PowerFLOW can accurately capture the detail of the flows because it is inherently time-dependent and parallel and suits large-scale computation very well.
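    For readers unfamiliar with the method, a minimal D2Q9 BGK lattice Boltzmann step is sketched below; it has nothing to do with the commercial PowerFLOW solver or the CT-1 case, but it shows why LBM is "inherently time-dependent and parallel": each step is a local collision followed by nearest-neighbour streaming (the grid size, relaxation time and initial shear flow are arbitrary).

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann sketch on a periodic box (illustrative
# only; unrelated to PowerFLOW or the CT-1 benchmark). Each time step is a
# purely local collision followed by nearest-neighbour streaming, which is
# what makes the method so amenable to large-scale parallel computation.

NX, NY, TAU = 64, 64, 0.8
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)                  # D2Q9 weights
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])            # lattice velocities

def equilibrium(rho, ux, uy):
    usq = ux**2 + uy**2
    feq = np.empty((9, NX, NY))
    for i in range(9):
        eu = e[i, 0] * ux + e[i, 1] * uy
        feq[i] = w[i] * rho * (1 + 3 * eu + 4.5 * eu**2 - 1.5 * usq)
    return feq

rho = np.ones((NX, NY))
ux = 0.05 * np.sin(2 * np.pi * np.arange(NX) / NX)[:, None] * np.ones((1, NY))
uy = np.zeros((NX, NY))
f = equilibrium(rho, ux, uy)                                  # start at equilibrium

for step in range(100):
    rho = f.sum(axis=0)                                       # macroscopic fields
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    f -= (f - equilibrium(rho, ux, uy)) / TAU                 # local BGK collision
    for i in range(9):                                        # streaming step
        f[i] = np.roll(np.roll(f[i], e[i, 0], axis=0), e[i, 1], axis=1)

print("total mass conserved:", np.isclose(f.sum(), NX * NY))
```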

  10. Addressing capability computing challenges of high-resolution global climate modelling at the Oak Ridge Leadership Computing Facility

    Science.gov (United States)

    Anantharaj, Valentine; Norman, Matthew; Evans, Katherine; Taylor, Mark; Worley, Patrick; Hack, James; Mayer, Benjamin

    2014-05-01

    During 2013, high-resolution climate model simulations accounted for over 100 million "core hours" using Titan at the Oak Ridge Leadership Computing Facility (OLCF). The suite of climate modeling experiments, primarily using the Community Earth System Model (CESM) at nearly 0.25 degree horizontal resolution, generated over a petabyte of data and nearly 100,000 files, ranging in size from 20 MB to over 100 GB. Effective utilization of leadership class resources requires careful planning and preparation. The application software, such as CESM, needs to be ported, optimized and benchmarked for the target platform in order to meet the computational readiness requirements. The model configuration needs to be "tuned and balanced" for the experiments. This can be a complicated and resource intensive process, especially for high-resolution configurations using complex physics. The volume of I/O also increases with resolution, and new strategies may be required to manage I/O, especially for large checkpoint and restart files that may require more frequent output for resiliency. It is also essential to monitor the application performance during the course of the simulation exercises. Finally, the large volume of data needs to be analyzed to derive the scientific results, and appropriate data and information delivered to the stakeholders. Titan is currently the largest supercomputer available for open science. The computational resources, in terms of "titan core hours", are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs, both sponsored by the U.S. Department of Energy (DOE) Office of Science. Titan is a Cray XK7 system, capable of a theoretical peak performance of over 27 PFlop/s, and consists of 18,688 compute nodes, with a NVIDIA Kepler K20 GPU and a 16-core AMD Opteron CPU in every node, for a total of 299,008 Opteron cores and 18,688 GPUs offering a cumulative 560

  11. Memory Benchmarks for SMP-Based High Performance Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, A B; de Supinski, B; Mueller, F; Mckee, S A

    2001-11-20

    As the speed gap between CPU and main memory continues to grow, memory accesses increasingly dominate the performance of many applications. The problem is particularly acute for symmetric multiprocessor (SMP) systems, where the shared memory may be accessed concurrently by a group of threads running on separate CPUs. Unfortunately, several key issues governing memory system performance in current systems are not well understood. Complex interactions between the levels of the memory hierarchy, buses or switches, DRAM back-ends, system software, and application access patterns can make it difficult to pinpoint bottlenecks and determine appropriate optimizations, and the situation is even more complex for SMP systems. To partially address this problem, we formulated a set of multi-threaded microbenchmarks for characterizing and measuring the performance of the underlying memory system in SMP-based high-performance computers. We report our use of these microbenchmarks on two important SMP-based machines. This paper has four primary contributions. First, we introduce a microbenchmark suite to systematically assess and compare the performance of different levels in SMP memory hierarchies. Second, we present a new tool based on hardware performance monitors to determine a wide array of memory system characteristics, such as cache sizes, quickly and easily; by using this tool, memory performance studies can be targeted to the full spectrum of performance regimes with many fewer data points than is otherwise required. Third, we present experimental results indicating that the performance of applications with large memory footprints remains largely constrained by memory. Fourth, we demonstrate that thread-level parallelism further degrades memory performance, even for the latest SMPs with hardware prefetching and switch-based memory interconnects.
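    The suite above is multi-threaded and hardware-counter based; the single-threaded sketch below only illustrates the underlying idea of such microbenchmarks, sweeping the working-set size so that the effective rate of strided reads drops as the array falls out of successive cache levels (the sizes, stride and repeat counts are arbitrary).

```python
import time
import numpy as np

# Single-threaded sketch of the idea behind memory microbenchmarks (it is not
# the multi-threaded, hardware-counter-based suite described above): sweep the
# working-set size and watch the effective rate of strided reads drop as the
# array falls out of successive cache levels.

STRIDE = 16                                # 16 float64s = 128 bytes per access

for n_kib in (32, 256, 2048, 16384, 131072):
    n = n_kib * 1024 // 8                  # number of float64 elements
    a = np.arange(n, dtype=np.float64)
    idx = np.arange(0, n, STRIDE)
    t0 = time.perf_counter()
    for _ in range(20):
        a[idx].sum()                       # strided reads over the working set
    dt = (time.perf_counter() - t0) / 20
    mb_touched = idx.size * 8 / 1e6
    print(f"{n_kib:>7} KiB working set: {mb_touched / dt:9.1f} MB/s (strided reads)")
```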

  12. High resolution computed tomography for peripheral facial nerve paralysis

    Energy Technology Data Exchange (ETDEWEB)

    Koester, O.; Straehler-Pohl, H.J.

    1987-01-01

    High resolution computed tomographic examinations of the petrous bones were performed on 19 patients with confirmed peripheral facial nerve paralysis. High resolution CT provides accurate information regarding the extent, and usually regarding the type, of the pathological process; this can be accurately localised with a view to possible surgical treatment. The examination also differentiates this from idiopathic paresis, which showed no radiological changes. Destruction of the petrous bone without facial nerve symptoms makes early suitable treatment mandatory.

  13. Component-based software for high-performance scientific computing

    Science.gov (United States)

    Alexeev, Yuri; Allan, Benjamin A.; Armstrong, Robert C.; Bernholdt, David E.; Dahlgren, Tamara L.; Gannon, Dennis; Janssen, Curtis L.; Kenny, Joseph P.; Krishnan, Manojkumar; Kohl, James A.; Kumfert, Gary; Curfman McInnes, Lois; Nieplocha, Jarek; Parker, Steven G.; Rasmussen, Craig; Windus, Theresa L.

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  14. Computational Methodology for the Prediction of Functional Requirement Variations Across the Product Life-Cycle

    CERN Document Server

    Mandil, Guillaume; Rivière, Alain

    2009-01-01

    The great majority of engineered products are subject to thermo-mechanical loads which vary with the product environment during the various phases of its life-cycle (machining, assembly, intended service use...). Those load variations may result in different values of the parts' nominal dimensions, which in turn generate corresponding variations of the effective clearance (functional requirement) in the assembly. Usually, and according to the contractual drawings, the parts are measured after the machining stage, whereas the interesting measurement values are the ones taken in service, for they allow the prediction of the clearance value under operating conditions. Unfortunately, measurement in operating conditions may not be practical to obtain. Hence, the main purpose of this research is to create, through computations and simulations, links between the values of the loads, dimensions and functional requirements during the successive phases of the life cycle of some given product. [...

  15. Virtual environment and computer-aided technologies used for system prototyping and requirements development

    Science.gov (United States)

    Logan, Cory; Maida, James; Goldsby, Michael; Clark, Jim; Wu, Liew; Prenger, Henk

    1993-01-01

    The Space Station Freedom (SSF) Data Management System (DMS) consists of distributed hardware and software which monitor and control the many onboard systems. Virtual environment and off-the-shelf computer technologies can be used at critical points in project development to aid in objectives and requirements development. Geometric models (images) coupled with off-the-shelf hardware and software technologies were used in the Space Station Mockup and Trainer Facility (SSMTF) Crew Operational Assessment Project. Rapid prototyping is shown to be a valuable tool for operational procedure and system hardware and software requirements development. The project objectives, hardware and software technologies used, data gained, current activities, future development and training objectives shall be discussed. The importance of defining prototyping objectives and staying focused while maintaining schedules is discussed along with project pitfalls.

  16. High Performance Computing for Dsm Extraction from ZY-3 Tri-Stereo Imagery

    Science.gov (United States)

    Lu, Shuning; Huang, Shicun; Pan, Zhiqiang; Deng, Huawu; Stanley, David; Xin, Yubin

    2016-06-01

    ZY-3 has been acquiring high quality imagery since its launch in 2012 and its tri-stereo (three-view or three-line-array) imagery has become one of the top choices for extracting DSM (Digital Surface Model) products in China over the past few years. The ZY-3 tri-stereo sensors offer users the ability to capture imagery over large regions, including the entire territory of a country such as China, resulting in a large volume of ZY-3 tri-stereo scenes which require timely (e.g., near real time) processing, something that is not currently possible using traditional photogrammetry workstations. This paper presents a high performance computing solution which can efficiently and automatically extract DSM products from ZY-3 tri-stereo imagery. The high performance computing solution leverages certain parallel computing technologies to accelerate computation within an individual scene and then deploys a distributed computing technology to increase the overall data throughput in a robust and efficient manner. By taking advantage of the inherent efficiencies within the high performance computing environment, the DSM extraction process can exploit all combinations offered by a set of tri-stereo images (forward-backward, forward-nadir and backward-nadir). The DSM results merged from all of the potential combinations can minimize blunders (e.g., incorrect matches) and also offer the ability to remove potential occlusions which may exist in a single stereo pair, resulting in improved accuracy and quality versus those that are not merged. Accelerated performance is inherent within each of the individual steps of the DSM extraction workflow, including the collection of ground control points and tie points, image bundle adjustment, the creation of epipolar images, and computing elevations. Preliminary experiments over a large area in China have proven that the high performance computing system can generate high quality and accurate DSM products in a rapid manner.
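    A toy sketch of the merging idea only (the production system above is a distributed photogrammetric pipeline, not reproduced here): per-pixel medians across the three stereo combinations suppress blunders confined to one pair and fill occlusions present in only one pair; the terrain, noise levels and occlusion rates below are synthetic.

```python
import numpy as np

# Toy sketch of the per-pixel merging idea only (not the production system
# described above). Elevations from the three stereo combinations are fused
# with a median, which suppresses blunders confined to a single pair, and
# occlusions (NaN) in one pair are filled by the others.

rng = np.random.default_rng(3)
truth = rng.normal(1000.0, 50.0, size=(512, 512))           # synthetic terrain [m]

def noisy_dsm(blunder_rate=0.01, occlusion_rate=0.01):
    dsm = truth + rng.normal(0.0, 1.0, truth.shape)          # matching noise ~1 m
    blunders = rng.random(truth.shape) < blunder_rate
    dsm[blunders] += rng.normal(0.0, 100.0, blunders.sum())  # gross mismatches
    dsm[rng.random(truth.shape) < occlusion_rate] = np.nan   # occluded pixels
    return dsm

pairs = [noisy_dsm() for _ in ("fwd-bwd", "fwd-nad", "bwd-nad")]
merged = np.nanmedian(np.stack(pairs), axis=0)

valid = ~np.isnan(merged)                 # pixels occluded in all pairs stay NaN
rmse = np.sqrt(np.mean((merged[valid] - truth[valid]) ** 2))
print(f"RMSE of merged DSM: {rmse:.2f} m")
```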

  17. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

    The paper deals with various parallel platforms used for high performance computing in the signal processing domain. More precisely, methods exploiting multicore central processing units, such as the message passing interface and OpenMP, are taken into account. The properties of the programming methods are experimentally proved in the application of a fast Fourier transform and a discrete cosine transform, and they are compared with the possibilities of MATLAB's built-in functions and of Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementation phase was compared with CPU based computing methods and with the possibilities of the Texas Instruments digital signal processing library on C6747 floating-point DSPs. An optimal combination of computing methods in the signal processing domain, together with the implementation of new fast routines, is proposed as well.
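    In the spirit of the comparison above, though without the paper's MPI/OpenMP or DSP implementations, the following sketch times a batch of FFTs single-threaded and then with SciPy's worker threads; the array sizes and worker counts are arbitrary.

```python
import time
import numpy as np
from scipy import fft

# Small timing sketch in the spirit of the comparison above; it does not
# reproduce the paper's MPI/OpenMP or DSP implementations, it only contrasts
# single-threaded FFTs with SciPy's multi-threaded workers.

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 2**16))           # 256 signals of 65536 samples each

def timed(workers):
    t0 = time.perf_counter()
    fft.fft(x, axis=-1, workers=workers)
    return time.perf_counter() - t0

timed(1)                                    # warm-up run
print(f"1 worker : {timed(1) * 1e3:7.1f} ms")
print(f"4 workers: {timed(4) * 1e3:7.1f} ms")
```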

  18. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2013-01-01

    In recent years, huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding and ability to compute scattering amplitudes. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them with the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures will be delivered over the 5 days of the School. A Poster Session will be held, at which students are welcome to present their research topics.

  19. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2015-01-01

    In recent years, huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding and ability to compute scattering amplitudes and cross sections. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them with the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures and 4 hours of tutorials will be delivered over the 6 days of the School.

  20. Parallel computation of seismic analysis of high arch dam

    Institute of Scientific and Technical Information of China (English)

    Chen Houqun; Ma Huaifa; Tu Jin; Cheng Guangqing; Tang Juzhen

    2008-01-01

    Parallel computation programs are developed for three-dimensional meso-mechanics analysis of fully-graded dam concrete and seismic response analysis of high arch dams (ADs), based on the Parallel Finite Element Program Generator (PFEPG). The computational algorithms for the numerical simulation of the meso-structure of concrete specimens were studied. Taking into account damage evolution, static preload, strain rate effect, and the heterogeneity of the meso-structure of dam concrete, the fracture processes of damage evolution and the configuration of the cracks can be directly simulated. The seismic response analysis of ADs involves all of the following factors: the nonlinear contact due to the opening and slipping of the contraction joints, energy dispersion of the far-field foundation, dynamic interactions of the dam-foundation-reservoir system, and the combined effects of seismic action with all static loads. The correctness, reliability and efficiency of the two parallel computational programs are verified with practical illustrations.

  1. Computing trends using graphic processor in high energy physics

    CERN Document Server

    Niculescu, Mihai

    2011-01-01

    One of the main challenges in High Energy Physics is the fast analysis of large amounts of experimental and simulated data. At LHC-CERN, one p-p event is approximately 1 MB in size. The time taken to analyze the data and obtain fast results depends on high computational power. The main advantage of using GPU (Graphics Processing Unit) programming over a traditional CPU approach is that graphics cards bring a lot of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) have begun to be ported to or developed for GPUs, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends of GPU use in HEP.

  2. Computer-Based Drill and Practice in Arithmetic: Widening the Gap between High- and Low-Achieving Students.

    Science.gov (United States)

    Hativa, Nira

    1988-01-01

    The differential effects of computer-assisted instruction for high-achieving and low-achieving students were examined for seven elementary students of varied background. Higher-achieving students were more able to adjust to the requirements of computer work and to derive benefit from it than were lower-achieving students. Implications for teaching…

  3. DOE Greenbook - Needs and Directions in High-Performance Computing for the Office of Science

    Energy Technology Data Exchange (ETDEWEB)

    Rotman, D; Harding, P

    2002-04-01

    The NERSC Users Group (NUG) encompasses all investigators utilizing the NERSC computational and storage resources of the Department of Energy Office of Science facility. At the February 2001 meeting held at the National Energy Research Scientific Computing (NERSC) facility, the NUG executive committee (NUGEX) began the process to assess the role of computational science and determine the computational needs in future Office of Science (OS) programs. The continuing rapid development of the computational science fields and computer technology (both hardware and software) suggest frequent periodic review of user requirements and the role that computational science should play in meeting OS program commitments. Over the last decade, NERSC (and many other supercomputer centers) have transitioned from a center based on vector supercomputers to one almost entirely dedicated to massively parallel platforms (MPPs). Users have had to learn and transform their application codes to make use of these parallel computers. NERSC computer time requests suggest that a vast majority of NERSC users have accomplished this transition and are ready for production parallel computing. Tools for debugging, mathematical toolsets, and robust communication software have enabled this transition. The large memory and CPU power of these parallel machines are allowing simulations at resolutions, timescales, and levels of realism in physics that were never before possible. Difficulties and performance issues in using MPP systems remain linked to the access of non-uniform memory: cache, local, and remote memory. This issue includes both the speed of access and the methods of access to the memory architecture. Optimized mathematical tools to perform standard functions on parallel machines are available. Users should be encouraged to make heavy use of those tools to enhance productivity and system performance. There are at least four underlying components to the computational resources used by OS

  4. The design of linear algebra libraries for high performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J. [Tennessee Univ., Knoxville, TN (United States). Dept. of Computer Science]|[Oak Ridge National Lab., TN (United States); Walker, D.W. [Oak Ridge National Lab., TN (United States)

    1993-08-01

    This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct higher-level algorithms, and hide many details of the parallelism from the application developer. The block-cyclic data distribution is described, and adopted as a good way of distributing block-partitioned matrices. Block-partitioned versions of the Cholesky and LU factorizations are presented, and optimization issues associated with the implementation of the LU factorization algorithm on distributed memory concurrent computers are discussed, together with its performance on the Intel Delta system. Finally, approaches to the design of library interfaces are reviewed.
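    To make the block-cyclic distribution mentioned above concrete, here is a small sketch of the mapping idea: global block (I, J) of a block-partitioned matrix is owned by process (I mod Pr, J mod Pc) in a Pr x Pc process grid. The function names are illustrative only and are not the ScaLAPACK/BLACS API.

```python
# Small sketch of the two-dimensional block-cyclic data distribution described
# above: global block (I, J) of a block-partitioned matrix is owned by process
# (I mod Pr, J mod Pc) in a Pr x Pc process grid. Mapping idea only, not the
# ScaLAPACK/BLACS interface.

def owner(i, j, nb, pr, pc):
    """Process coordinates owning global element (i, j) for block size nb."""
    return (i // nb) % pr, (j // nb) % pc

def local_blocks(p_row, p_col, n, nb, pr, pc):
    """Global block indices stored by process (p_row, p_col) for an n x n matrix."""
    nblocks = (n + nb - 1) // nb
    return [(bi, bj)
            for bi in range(p_row, nblocks, pr)
            for bj in range(p_col, nblocks, pc)]

# Example: 8x8 matrix, 2x2 blocks, 2x2 process grid.
for p in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(p, local_blocks(*p, n=8, nb=2, pr=2, pc=2))
print("element (5, 6) lives on process", owner(5, 6, nb=2, pr=2, pc=2))
```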

  5. High performance stream computing for particle beam transport simulations

    Energy Technology Data Exchange (ETDEWEB)

    Appleby, R; Bailey, D; Higham, J; Salt, M [School of Physics and Astronomy, University of Manchester, Oxford Road, Manchester, M13 9PL (United Kingdom)], E-mail: Robert.Appleby@manchester.ac.uk, E-mail: David.Bailey-2@manchester.ac.uk

    2008-07-15

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed.

  6. High performance stream computing for particle beam transport simulations

    Science.gov (United States)

    Appleby, R.; Bailey, D.; Higham, J.; Salt, M.

    2008-07-01

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed.
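    A vectorized NumPy sketch of why this problem suits stream processing (it is not the GPU code or the MAD comparison from the records above): every particle in the bunch is pushed through the same sequence of linear maps, here a toy thin-lens FODO-like line with invented focal lengths and drift lengths.

```python
import numpy as np

# Vectorized sketch of the transport problem itself (not the GPU code or the
# MAD comparison above): every particle in the bunch is pushed through the
# same sequence of linear maps, which is exactly the data-parallel pattern
# that stream processors exploit.

def drift(length):
    return np.array([[1.0, length], [0.0, 1.0]])

def thin_quad(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# A toy FODO-like line: focusing quad, drift, defocusing quad, drift.
line = [thin_quad(2.0), drift(1.0), thin_quad(-2.0), drift(1.0)]

rng = np.random.default_rng(0)
n = 1_000_000
coords = np.vstack([rng.normal(0.0, 1e-3, n),      # x  [m]
                    rng.normal(0.0, 1e-4, n)])     # x' [rad]

for element in line * 50:                          # 50 passes through the line
    coords = element @ coords                      # same map applied to all particles

print("final rms beam size: %.3e m" % coords[0].std())
```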

  7. Computational quantum chemistry for single Heisenberg spin couplings made simple: just one spin flip required.

    Science.gov (United States)

    Mayhall, Nicholas J; Head-Gordon, Martin

    2014-10-07

    We highlight a simple strategy for computing the magnetic coupling constants, J, for a complex containing two multiradical centers. On the assumption that the system follows Heisenberg Hamiltonian physics, J is obtained from a spin-flip electronic structure calculation where only a single electron is excited (and spin-flipped), from the single reference with maximum Ŝz, M, to the M - 1 manifold, regardless of the number of unpaired electrons, 2M, on the radical centers. In an active space picture involving 2M orbitals, only one β electron is required, together with only one α hole. While this observation is extremely simple, the reduction in the number of essential configurations from exponential in M to only linear provides dramatic computational benefits. This (M, M - 1) strategy for evaluating J is an unambiguous, spin-pure, wave function theory counterpart of the various projected broken symmetry density functional theory schemes, and likewise gives explicit energies for each possible spin-state that enable evaluation of properties. The approach is illustrated on five complexes with varying numbers of unpaired electrons, for which one spin-flip calculations are used to compute J. Some implications for further development of spin-flip methods are discussed.

  8. Computational quantum chemistry for single Heisenberg spin couplings made simple: Just one spin flip required

    Energy Technology Data Exchange (ETDEWEB)

    Mayhall, Nicholas J.; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu [Kenneth S. Pitzer Center for Theoretical Chemistry, Department of Chemistry, University of California, Berkeley, California 94720, USA and Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States)

    2014-10-07

    We highlight a simple strategy for computing the magnetic coupling constants, J, for a complex containing two multiradical centers. On the assumption that the system follows Heisenberg Hamiltonian physics, J is obtained from a spin-flip electronic structure calculation where only a single electron is excited (and spin-flipped), from the single reference with maximum Ŝz, M, to the M − 1 manifold, regardless of the number of unpaired electrons, 2M, on the radical centers. In an active space picture involving 2M orbitals, only one β electron is required, together with only one α hole. While this observation is extremely simple, the reduction in the number of essential configurations from exponential in M to only linear provides dramatic computational benefits. This (M, M − 1) strategy for evaluating J is an unambiguous, spin-pure, wave function theory counterpart of the various projected broken symmetry density functional theory schemes, and likewise gives explicit energies for each possible spin-state that enable evaluation of properties. The approach is illustrated on five complexes with varying numbers of unpaired electrons, for which one spin-flip calculations are used to compute J. Some implications for further development of spin-flip methods are discussed.

  9. Fault Tolerance and COTS: Next Generation of High Performance Satellite Computers

    Science.gov (United States)

    Behr, P.; Bärwald, W.; Brieß, K.; Montenegro, S.

    The increasing complexity of future satellite missions requires adequately powerful on- board computer systems. The obvious performance gap between state-of-the-art micro- processor technology ("commercial-off-the-shelf", COTS) and available radiation hard components already impedes the realization of innovative satellite applications requiring high performance on-board data processing. In the paper we emphasize the advantages of the COTS approach for future OBCS and we show why we are convinced that this approach is feasible. We present the architecture of the fault tolerant control computer of the BIRD satellite and finally we show some results of the BIRD mission after 20 months in orbit, especially the experience with its COTS based control computer.

  10. Krait bite requiring high dose antivenom: a case report.

    Science.gov (United States)

    Sharma, Sanjib Kumar; Koirala, Shekhar; Dahal, Gaheraj

    2002-03-01

    Anti-snake venom (ASV) is the most specific therapy available for the treatment of snakebite envenomation. The ASV available in Nepal is a polyvalent ASV produced in India and is effective against envenomation by cobra and krait, the two most common species found in eastern Nepal. Neurotoxic signs respond slowly and unconvincingly, and continuous absorption of venom may cause recurrent neurotoxicity. Therefore, close observation and continuous administration of ASV are essential to save the victim. We report a case of neurotoxic envenomation due to a bite by the common krait (Bungarus caeruleus). The victim required a very high dose of polyvalent ASV for reversal of the neurological manifestations.

  11. High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations

    Science.gov (United States)

    Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.

    2003-01-01

    Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

  12. Energy-efficient high performance computing measurement and tuning

    CERN Document Server

    III, James H Laros; Kelly, Sue

    2012-01-01

    In this work, the unique power measurement capabilities of the Cray XT architecture were exploited to gain an understanding of power and energy use, and the effects of tuning both CPU and network bandwidth. Modifications were made to deterministically halt cores when idle. Additionally, capabilities were added to alter operating P-state. At the application level, an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale is gained by simultaneously collecting current and voltage measurements on the hosting nodes.

  13. Precision cosmology with time delay lenses: high resolution imaging requirements

    CERN Document Server

    Meng, Xiao-Lei; Agnello, Adriano; Auger, Matthew W; Liao, Kai; Marshall, Philip J

    2015-01-01

    Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as "Einstein Rings" in high resolution images. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope $\gamma'$ of the...

  14. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  15. Intel: High Throughput Computing Collaboration: A CERN openlab / Intel collaboration

    CERN Document Server

    CERN. Geneva

    2015-01-01

    The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPU via Intel's OmniPath fabric are the key elements in this project.

  16. The role of interpreters in high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Naumann, Axel; /CERN; Canal, Philippe; /Fermilab

    2008-01-01

    Compiled code is fast; interpreted code is slow. There is not much we can do about that, and it is the reason why the use of interpreters in high performance computing is usually restricted to job submission. We show where interpreters make sense even in the context of analysis code, and what aspects have to be taken into account to make this combination a success.
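
    A minimal, self-contained illustration of this point (not code from the paper): the interpreted loop is slow, but once the numerically heavy work is delegated to a compiled kernel, the interpreter overhead of analysis code becomes a small fraction of the runtime.

    ```python
    import timeit
    import numpy as np

    # Compare a pure-interpreter reduction with the same work done in compiled code.
    data = np.random.default_rng(0).standard_normal(1_000_000)

    t_interp = timeit.timeit(lambda: sum(x * x for x in data), number=3)
    t_compiled = timeit.timeit(lambda: float(np.dot(data, data)), number=3)
    print(f"interpreted loop: {t_interp:.3f} s, compiled kernel: {t_compiled:.3f} s")
    ```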

  17. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction. 10 refs., 7 figs.

  18. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    Directory of Open Access Journals (Sweden)

    Julio Dondo Gazzano

    2015-01-01

    Full Text Available FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC. The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the easiness and flexibility of the design process with fast iterations between consecutive versions are examples of benefits obtained with their use. However, there are still some difficulties when using reconfigurable platforms as accelerator that need to be addressed: the need of an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems in distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. Besides, a set of services designed to facilitate the application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for deployment of high performance distributed applications simplifying development process.

  19. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally-intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  20. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  1. Precision cosmology with time delay lenses: High resolution imaging requirements

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Xiao -Lei [Beijing Normal Univ., Beijing (China); Univ. of California, Santa Barbara, CA (United States); Treu, Tommaso [Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Agnello, Adriano [Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Auger, Matthew W. [Univ. of Cambridge, Cambridge (United Kingdom); Liao, Kai [Beijing Normal Univ., Beijing (China); Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Marshall, Philip J. [Stanford Univ., Stanford, CA (United States)

    2015-09-28

    Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as ``Einstein Rings'' in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in the gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^(−γ') for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainties. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems that will be discovered by current and future surveys, targeted follow-up will be required. Furthermore, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation Adaptive

  2. Developing on-demand secure high-performance computing services for biomedical data analytics.

    Science.gov (United States)

    Robison, Nicholas; Anderson, Nick

    2013-01-01

    We propose a technical and process model to support biomedical researchers requiring on-demand high performance computing on potentially sensitive medical datasets. Our approach describes the use of cost-effective, secure and scalable techniques for processing medical information via protected and encrypted computing clusters within a model High Performance Computing (HPC) environment. The process model supports an investigator defined data analytics platform capable of accepting secure data migration from local clinical research data silos into a dedicated analytic environment, and secure environment cleanup upon completion. We define metrics to support the evaluation of this pilot model through performance and stability tests, and describe evaluation of its suitability towards enabling rapid deployment by individual investigators.

  3. Operational characterisation of requirements and early validation environment for high demanding space systems

    Science.gov (United States)

    Barro, E.; Delbufalo, A.; Rossi, F.

    1993-01-01

    The definition of some modern, highly demanding space systems requires a different approach to system definition and design from that adopted for traditional missions. System functionality is strongly coupled to the operational analysis, which aims to characterize the dynamic interactions of the flight element with its surrounding environment and its ground control segment. Unambiguous functional, operational and performance requirements are to be defined for the system, thereby also improving the successive development stages. This paper proposes a Petri Net based methodology, with two related prototype applications (ARISTOTELES orbit control and Hermes telemetry generation), for the operational analysis of space systems through dynamic modeling of their functions. It also describes a related computer-aided environment (ISIDE) that makes the dynamic model executable, enabling early validation of the system functional representation, and that provides a structured system requirements database: a shared knowledge base interconnecting the static and dynamic applications, fully traceable to the models and interfaceable with the external world.

  4. From the web to the grid and beyond. Computing paradigms driven by high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Brun, Rene; Carminati, Federico [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Galli Carminati, Giuliana (eds.) [Hopitaux Universitaire de Geneve, Chene-Bourg (Switzerland). Unite de la Psychiatrie du Developpement Mental

    2012-07-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid- and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter of what may lie ahead. Approachable and requiring only basic understanding of physics and computer sciences, this book is intended for both education and research. (orig.)

  5. FPGAs in High Performance Computing: Results from Two LDRD Projects.

    Energy Technology Data Exchange (ETDEWEB)

    Underwood, Keith D; Ulmer, Craig D.; Thompson, David; Hemmert, Karl Scott

    2006-11-01

    Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to have order of magnitude levels of performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).

  6. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  7. A Computer Controlled Precision High Pressure Measuring System

    Science.gov (United States)

    Sadana, S.; Yadav, S.; Jha, N.; Gupta, V. K.; Agarwal, R.; Bandyopadhyay, A. K.; Saxena, T. K.

    2011-01-01

    Microcontroller (AT89C51) based electronics have been designed and developed for a high precision calibrator, based on a Digiquartz pressure transducer (DQPT), for the measurement of high hydrostatic pressure up to 275 MPa. The input signal from the DQPT is converted into a square waveform and multiplied, using a frequency multiplier circuit built around a phase-locked loop, to ten times the input frequency. An octal buffer stores the calculated frequency, which in turn is fed to the AT89C51 microcontroller, interfaced with a liquid crystal display that shows the frequency as well as the corresponding pressure in user friendly units. The electronics are interfaced with a computer over RS232 for automatic data acquisition, computation and storage; the acquisition software is written in Visual Basic 6.0, making the instrument a computer controlled system. The system is capable of measuring frequency up to 4 MHz with a resolution of 0.01 Hz and pressure up to 275 MPa with a resolution of 0.001 MPa, within a measurement uncertainty of 0.025%. Details of the hardware of the pressure measuring system, the associated electronics, the software and the calibration are discussed in this paper.
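
    For illustration only (the port name, baud rate and calibration coefficients below are hypothetical placeholders, not values from the paper), the PC-side acquisition step could look roughly like this: read a frequency over RS232 and convert it to pressure via a calibration polynomial.

    ```python
    import serial  # pyserial; the instrument is assumed to stream readings over RS232

    def frequency_to_pressure(freq_hz, coeffs=(0.0, 6.9e-5)):
        """Convert a DQPT frequency reading to pressure (MPa) via an illustrative
        calibration polynomial; real coefficients come from calibration."""
        return sum(c * freq_hz**i for i, c in enumerate(coeffs))

    # Requires the instrument to be attached; "COM1" and 9600 baud are placeholders.
    with serial.Serial("COM1", 9600, timeout=1.0) as port:
        reading = float(port.readline().decode().strip())   # e.g. "3999870.25"
        print(f"f = {reading:.2f} Hz -> p = {frequency_to_pressure(reading):.3f} MPa")
    ```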

  8. High-throughput all-atom molecular dynamics simulations using distributed computing.

    Science.gov (United States)

    Buch, I; Harvey, M J; Giorgino, T; Anderson, D P; De Fabritiis, G

    2010-03-22

    Although molecular dynamics simulation methods are useful in the modeling of macromolecular systems, they remain computationally expensive, with production work requiring costly high-performance computing (HPC) resources. We review recent innovations in accelerating molecular dynamics on graphics processing units (GPUs), and we describe GPUGRID, a volunteer computing project that uses the GPU resources of nondedicated desktop and workstation computers. In particular, we demonstrate the capability of simulating thousands of all-atom molecular trajectories generated at an average of 20 ns/day each (for systems of approximately 30 000-80 000 atoms). In conjunction with a potential of mean force (PMF) protocol for computing binding free energies, we demonstrate the use of GPUGRID in the computation of accurate binding affinities of the Src SH2 domain/pYEEI ligand complex by reconstructing the PMF over 373 umbrella sampling windows of 55 ns each (20.5 μs of total data). We obtain a standard free energy of binding of −8.7 ± 0.4 kcal/mol within 0.7 kcal/mol from experimental results. This infrastructure will provide the basis for a robust system for high-throughput accurate binding affinity prediction.
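
    For orientation (a generic textbook-style recipe, not necessarily the exact protocol of the paper), a binding free energy can be extracted from a one-dimensional PMF $W(z)$ along the unbinding coordinate, zeroed in bulk, by integrating over the binding site under a sampling restraint of cross-sectional area $A$ and referring the result to the standard concentration $C^{\circ} = 1/1661\ \mathrm{\AA}^{3}$:

    ```latex
    % Generic 1D-PMF recipe (illustrative; the published protocol may include further corrections)
    K_\mathrm{eq} \approx A \int_{\mathrm{site}} e^{-W(z)/k_\mathrm{B}T}\,\mathrm{d}z
    \Delta G^{\circ}_\mathrm{bind} = -k_\mathrm{B}T \,\ln\!\left(K_\mathrm{eq}\,C^{\circ}\right)
    ```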

  9. Future materials requirements for high temperature power engineering components

    Energy Technology Data Exchange (ETDEWEB)

    Marriott, J.B. (Commission of the European Communities, Petten (Netherlands). Joint Nuclear Research Center)

    1989-08-01

    The two dominant technologies in power engineering are steam and gas turbines. These are, however, dependent on a prior stage of combustion and, perhaps, gasification. There is a continuous drive towards higher operating efficiencies and greater reliability of the units. This leads to a need for larger components to operate at higher temperatures and pressures and hence under more arduous conditions of mechanical and corrosive loading for times which may exceed 200,000 h (30 years). Some examples are used to illustrate generic features of the materials problems towards which research and development is aimed. In some components the high temperature time-dependent mechanical properties dominate, a good example being gas turbine blades. Uniformity of the time-dependent mechanical properties plus fracture toughness is difficult to attain in the very large forgings required for steam turbines. Within the heat generation units (boiler tubes, headers, etc.) the mechanical requirements are severe, but would not be critical without the constraints imposed by the need for inexpensive corrosion and erosion resistance. (author).

  10. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  11. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  12. Productive needs and training requirements: philosophy in high school

    Directory of Open Access Journals (Sweden)

    Marta Sueli de Faria Sforni

    2016-12-01

    Full Text Available The discipline of philosophy has not been a constant presence in the basic education curriculum throughout the history of Brazilian education. This inconstancy motivated an investigation into the factors that determine its inclusion in, or exclusion from, the Brazilian high school. The study was conducted through bibliographic and documental research analyzing the historical development of the discipline of philosophy at that level of education, covering the period from colonial Brazil to Brazil's insertion into the neoliberal political-economic context. The study revealed that the oscillating presence of philosophy in Brazilian school curricula was influenced by the needs of the productive sector and by the training requirements arising from the economic system.

  13. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implications for security, (5) Digital rights management, (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  14. Iterative coupling reservoir simulation on high performance computers

    Institute of Scientific and Technical Information of China (English)

    Lu Bo; Wheeler Mary F

    2009-01-01

    In this paper, the iterative coupling approach is proposed for applications to solving multiphase flow equation systems in reservoir simulation, as it provides a more flexible time-stepping strategy than existing approaches. The iterative method decouples the whole equation systems into pressure and saturation/concentration equations, and then solves them in sequence, implicitly and semi-implicitly. At each time step, a series of iterations are computed, which involve solving linearized equations using specific tolerances that are iteration dependent. Following convergence of subproblems, material balance is checked. Convergence of time steps is based on material balance errors. Key components of the iterative method include phase scaling for deriving a pressure equation and use of several advanced numerical techniques. The iterative model is implemented for parallel computing platforms and shows high parallel efficiency and scalability.
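
    The control flow described above can be sketched as follows; the solver routines here are trivial stand-ins (hypothetical placeholders, not the authors' simulator) so that the skeleton of one iteratively coupled time step is runnable.

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class State:
        pressure: float     # a single representative cell, for illustration only
        saturation: float

    # Stand-in "solvers": a real simulator would solve the discretized pressure
    # system implicitly and the saturation/concentration system semi-implicitly
    # over the whole grid.
    def solve_pressure(state, dt):
        return state.pressure + 0.5 * (25.0 - state.pressure)

    def solve_saturation(state, p_new, dt):
        return min(1.0, state.saturation + 1e-4 * dt * (p_new - state.pressure))

    def material_balance_error(old, p_new, s_new):
        return abs(s_new - old.saturation) + 1e-3 * abs(p_new - old.pressure)

    def advance_time_step(state, dt, max_iters=20, tol=1e-6):
        """One time step of the sequential (iterative) coupling loop."""
        for _ in range(max_iters):
            p_new = solve_pressure(state, dt)            # pressure equation first
            s_new = solve_saturation(state, p_new, dt)   # then saturation
            err = material_balance_error(state, p_new, s_new)
            state = State(pressure=p_new, saturation=s_new)
            if err < tol:
                return state, True                       # converged: accept the step
        return state, False                              # caller may cut dt and retry

    print(advance_time_step(State(pressure=20.0, saturation=0.2), dt=0.5))
    ```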

  15. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
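
    As a concrete (if much simpler) illustration of software-level power measurement, and explicitly not the proposed Power API itself, the snippet below estimates average package power on Linux machines that expose Intel RAPL counters through the powercap sysfs interface; the path is platform-dependent and reading it may require elevated privileges.

    ```python
    import time
    from pathlib import Path

    # Cumulative package energy in microjoules, exposed by the Linux powercap
    # framework on systems with Intel RAPL; the exact path varies by platform.
    RAPL_ENERGY = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

    def average_power_watts(interval_s=1.0):
        # A robust tool would also handle counter wraparound; omitted for brevity.
        e0 = int(RAPL_ENERGY.read_text())
        time.sleep(interval_s)
        e1 = int(RAPL_ENERGY.read_text())
        return (e1 - e0) / 1e6 / interval_s   # uJ -> J, then divide by seconds

    if RAPL_ENERGY.exists():
        print(f"package power: {average_power_watts():.1f} W")
    else:
        print("RAPL powercap interface not available on this machine")
    ```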

  16. Computer aided seismic and fire retrofitting analysis of existing high rise reinforced concrete buildings

    CERN Document Server

    Hussain, Raja Rizwan; Hasan, Saeed

    2016-01-01

    This book details the analysis and design of high rise buildings for gravity and seismic analysis. It provides the knowledge structural engineers need to retrofit existing structures in order to meet safety requirements and better prevent potential damage from such disasters as earthquakes and fires. Coverage includes actual case studies of existing buildings, reviews of current knowledge for damages and their mitigation, protective design technologies, and analytical and computational techniques. This monograph also provides an experimental investigation on the properties of fiber reinforced concrete that consists of natural fibres like coconut coir and also steel fibres that are used for comparison in both Normal Strength Concrete (NSC) and High Strength Concrete (HSC). In addition, the authors examine the use of various repair techniques for damaged high rise buildings. The book will help upcoming structural design engineers learn the computer aided analysis and design of real existing high rise buildings ...

  17. High-reliability computing for the smarter planet

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Graham, Paul [Los Alamos National Laboratory; Manuzzato, Andrea [UNIV OF PADOVA; Dehon, Andre [UNIV OF PENN; Carter, Nicholas [INTEL CORPORATION

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is

  18. Membrane requirements for high-flux and convective therapies.

    Science.gov (United States)

    Bowry, Sudhir Kumar

    2011-01-01

    Worldwide, high-flux dialysis (HF-HD) has now surpassed low-flux dialysis (LF-HD) as the predominant treatment modality, in recognition that removal of larger uremic retention solutes is desirable for the treatment of patients with end-stage chronic kidney disease (CKD). An even more advanced form of HF-HD in terms of removal of a broad spectrum of uremic toxins is on-line hemodiafiltration (HDF), involving convective transport mechanisms for solute removal. With the modality reaching considerable technical maturity over the last two decades, on-line HDF is now recognized for its clinical efficiency and effectiveness, versatility and safety. Such has been the success of on-line HDF that, in Europe, more patients are treated with on-line HDF than even peritoneal dialysis. Fabrication of high-flux membranes for convective therapies is more than a matter of simply making the membrane 'more open' or of increasing the membrane pore size, which is not the only determinant for achieving higher convection. While convective transport of larger uremic retention solutes primarily demands membranes with high hydraulic permeability and sieving capabilities, the making of a modern dialysis membrane involves several other considerations that culminate in the delivery of an effective and safe therapy. In this communication I outline the essential membrane requirements and principles for solute removal by convection, as well as additional requirements related to the therapy. The basic principles of the membrane manufacturing processes, by which the desired membrane morphology is derived for the separation phenomena involved in dialysis, are further described. An awareness of this enables one to appreciate that, depending on the individual constituents and variations of the manufacturing processes, fabrication of all high-flux membranes entails achieving a balance between the ideal or desired criteria for blood purification. Dialysis membranes for convective therapies, even from the same

  19. A Crafts-Oriented Approach to Computing in High School: Introducing Computational Concepts, Practices, and Perspectives with Electronic Textiles

    Science.gov (United States)

    Kafai, Yasmin B.; Lee, Eunkyoung; Searle, Kristin; Fields, Deborah; Kaplan, Eliot; Lui, Debora

    2014-01-01

    In this article, we examine the use of electronic textiles (e-textiles) for introducing key computational concepts and practices while broadening perceptions about computing. The starting point of our work was the design and implementation of a curriculum module using the LilyPad Arduino in a pre-AP high school computer science class. To…

  20. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  1. Arthroscopic and computer-assisted high tibial osteotomy using standard total knee arthroplasty navigation software.

    Science.gov (United States)

    Thompson, Stephen R; Zabtia, Nazar; Weening, Bradley; Zalzal, Paul

    2013-05-01

    Opening-wedge high tibial osteotomy is an increasingly performed procedure for treatment of varus gonarthrosis and correction of malalignment during meniscal transplantation or cartilage restoration. Precise preoperative planning and meticulous surgical technique are required to achieve an appropriate mechanical axis correction. We describe our technique of arthroscopic and computer-assisted high tibial osteotomy using commonly available total knee arthroplasty navigation software as an intraoperative goniometer. We believe that our technique, by providing intraoperative real-time guidance of the degree of correction that is accurate and reliable, represents a useful tool for the surgeon who uncommonly performs high tibial osteotomy.

  2. Terrestrial locomotion imposes high metabolic requirements on bats.

    Science.gov (United States)

    Voigt, Christian C; Borrisov, Ivailo M; Voigt-Heucke, Silke L

    2012-12-15

    The evolution of powered flight involved major morphological changes in Chiroptera. Nevertheless, all bats are also capable of crawling on the ground and some are even skilled sprinters. We asked if a highly derived morphology adapted for flapping flight imposes high metabolic requirements on bats when moving on the ground. We measured the metabolic rate during terrestrial locomotion in mastiff bats, Molossus currentium, a species that is both a fast-flying aerial-hawking bat and an agile crawler on the ground. Metabolic rates of bats averaged 8.0±4.0 ml CO₂ min⁻¹ during a 1-min period of sprinting at 1.3±0.6 km h⁻¹. With rising average speed, mean metabolic rates increased, reaching peak values that were similar to those of flying conspecifics. Metabolic rates of M. currentium were higher than those of similar-sized rodents that sprinted at similar velocities under steady-state conditions. When M. currentium sprinted at peak velocities, its aerobic metabolic rate was 3-5 times higher than those of rodent species running continuously in steady-state conditions. Costs of transport (J kg⁻¹ m⁻¹) were more than 10 times higher for running than for flying bats. We conclude that at the same speed bats experience higher metabolic rates during short sprints than quadruped mammals during steady-state terrestrial locomotion, yet running bats achieve higher maximal mass-specific aerobic metabolic rates than non-volant mammals such as rodents.

  3. Chip-to-board interconnects for high-performance computing

    Science.gov (United States)

    Riester, Markus B. K.; Houbertz-Krauss, Ruth; Steenhusen, Sönke

    2013-02-01

    Super computing is reaching out to ExaFLOP processing speeds, creating fundamental challenges for the way that computing systems are designed and built. One governing topic is the reduction of power used for operating the system, and eliminating the excess heat generated from the system. Current thinking sees optical interconnects on most interconnect levels to be a feasible solution to many of the challenges, although there are still limitations to the technical solutions, in particular with regard to manufacturability. This paper explores drivers for enabling optical interconnect technologies to advance into the module and chip level. The introduction of optical links into High Performance Computing (HPC) could be an option to allow scaling the manufacturing technology to large volume manufacturing. This will drive the need for manufacturability of optical interconnects, giving rise to other challenges that add to the realization of this type of interconnection. This paper describes a solution that allows the creation of optical components on module level, integrating optical chips, laser diodes or PIN diodes as components much like the well known SMD components used for electrical components. The paper shows the main challenges and potential solutions to this challenge and proposes a fundamental paradigm shift in the manufacturing of 3-dimensional optical links for the level 1 interconnect (chip package).

  4. Molecular Dynamics Simulations on High-Performance Reconfigurable Computing Systems.

    Science.gov (United States)

    Chiu, Matt; Herbordt, Martin C

    2010-11-01

    The acceleration of molecular dynamics (MD) simulations using high-performance reconfigurable computing (HPRC) has been much studied. Given the intense competition from multicore and GPUs, there is now a question whether MD on HPRC can be competitive. We concentrate here on the MD kernel computation: determining the short-range force between particle pairs. In one part of the study, we systematically explore the design space of the force pipeline with respect to arithmetic algorithm, arithmetic mode, precision, and various other optimizations. We examine simplifications and find that some have little effect on simulation quality. In the other part, we present the first FPGA study of the filtering of particle pairs with nearly zero mutual force, a standard optimization in MD codes. There are several innovations, including a novel partitioning of the particle space, and new methods for filtering and mapping work onto the pipelines. As a consequence, highly efficient filtering can be implemented with only a small fraction of the FPGA's resources. Overall, we find that, for an Altera Stratix-III EP3ES260, 8 force pipelines running at nearly 200 MHz can fit on the FPGA, and that they can perform at 95% efficiency. This results in an 80-fold per core speed-up for the short-range force, which is likely to make FPGAs highly competitive for MD.
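
    To make the kernel under discussion concrete (a plain NumPy sketch of the arithmetic, not the FPGA pipeline itself), the snippet below evaluates cutoff Lennard-Jones forces and shows the filtering step: pairs beyond the cutoff contribute essentially zero force and are dropped before the expensive evaluation.

    ```python
    import numpy as np

    def lj_forces(pos, eps=1.0, sigma=1.0, r_cut=2.5):
        """Cutoff Lennard-Jones forces with explicit pair filtering (illustrative)."""
        n = len(pos)
        forces = np.zeros_like(pos)
        for i in range(n):
            d = pos[i] - pos[i + 1:]               # displacements to later particles
            r2 = np.sum(d * d, axis=1)
            keep = r2 < r_cut ** 2                 # the filtering step: drop far pairs
            d, r2 = d[keep], r2[keep]
            inv_r6 = (sigma ** 2 / r2) ** 3
            coef = 24.0 * eps * (2.0 * inv_r6 ** 2 - inv_r6) / r2
            fij = coef[:, None] * d                # force on particle i from each kept j
            forces[i] += fij.sum(axis=0)
            forces[i + 1:][keep] -= fij            # Newton's third law
        return forces

    pos = np.random.default_rng(1).uniform(0.0, 10.0, size=(200, 3))
    print(lj_forces(pos)[0])
    ```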

  5. Optimizing high performance computing workflow for protein functional annotation.

    Science.gov (United States)

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-09-10

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool, the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data.
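
    The overall structure of such a workflow is embarrassingly parallel, which the following sketch illustrates; classify_chunk is a hypothetical stand-in for the alignment-based assignment step, not code from the paper.

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def classify_chunk(chunk):
        # Hypothetical placeholder: a real worker would run the PSI-BLAST-based
        # classifier and return cluster-of-orthologous-groups assignments.
        return {seq_id: "COG_unassigned" for seq_id in chunk}

    def annotate(protein_ids, n_workers=8, chunk_size=1000):
        chunks = [protein_ids[i:i + chunk_size]
                  for i in range(0, len(protein_ids), chunk_size)]
        results = {}
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            for partial in pool.map(classify_chunk, chunks):
                results.update(partial)
        return results

    if __name__ == "__main__":
        print(len(annotate([f"protein_{i}" for i in range(10_000)])))
    ```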

  6. Computational characterization of high temperature composites via METCAN

    Science.gov (United States)

    Brown, H. C.; Chamis, Christos C.

    1991-01-01

    The computer code 'METCAN' (METal matrix Composite ANalyzer) developed at NASA Lewis Research Center can be used to predict the high temperature behavior of metal matrix composites using the room temperature constituent properties. A reference manual that characterizes some common composites is being developed from METCAN generated data. Typical plots found in the manual are shown for graphite/copper. These include plots of stress-strain, elastic and shear moduli, Poisson's ratio, thermal expansion, and thermal conductivity. This manual can be used in the preliminary design of structures and as a guideline for the behavior of other composite systems.

  7. PRaVDA: High Energy Physics towards proton Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Price, T., E-mail: t.price@bham.ac.uk

    2016-07-11

    Proton radiotherapy is an increasingly popular modality for treating cancers of the head and neck, and in paediatrics. To maximise the potential of proton radiotherapy it is essential to know the distribution, and more importantly the proton stopping powers, of the body tissues between the proton beam and the tumour. A stopping power map could be measured directly, and uncertainties in the treatment vastly reduced, if the patient were imaged with protons instead of conventional x-rays. Here we outline the application of technologies developed for High Energy Physics to provide clinical-quality proton Computed Tomography, so reducing range uncertainties and enhancing the treatment of cancer.

  8. Computational Proteomics: High-throughput Analysis for Systems Biology

    Energy Technology Data Exchange (ETDEWEB)

    Cannon, William R.; Webb-Robertson, Bobbie-Jo M.

    2007-01-03

    High-throughput (HTP) proteomics is a rapidly developing field that offers the global profiling of proteins from a biological system. The HTP technological advances are fueling a revolution in biology, enabling analyses at the scales of entire systems (e.g., whole cells, tumors, or environmental communities). However, simply identifying the proteins in a cell is insufficient for understanding the underlying complexity and operating mechanisms of the overall system. Systems-level investigations rely more and more on computational analyses, especially in proteomics, which generates large-scale global data.

  9. New Generation Nuclear Plant -- High Level Functions and Requirements

    Energy Technology Data Exchange (ETDEWEB)

    J. M. Ryskamp; E. J. Gorski; E. A. Harvego; S. T. Khericha; G. A. Beitel

    2003-09-01

    This functions and requirements (F&R) document was prepared for the Next Generation Nuclear Plant (NGNP) Project. The highest-level functions and requirements for the NGNP preconceptual design are identified in this document, which establishes performance definitions for what the NGNP will achieve. NGNP designs will be developed based on these requirements by commercial vendor(s).

  10. SCEC Earthquake System Science Using High Performance Computing

    Science.gov (United States)

    Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.

    2008-12-01

    The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high performance computing with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas include dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, reusable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts and we continue to extend our numerical modeling codes to include more realistic physics and to run at higher and higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1Hz deterministic simulation results with 10Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new probabilistic seismic hazard analysis (PSHA) hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year. Large-scale SCEC/CME high performance codes

  11. A secure communications infrastructure for high-performance distributed computing

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Koenig, G.; Tuecke, S. [and others

    1997-08-01

    Applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments may operate over open networks and access valuable resources. Hence, they can require mechanisms for ensuring integrity and confidentiality of communications and for authenticating both users and resources. Security solutions developed for traditional client-server applications do not provide direct support for the program structures, programming tools, and performance requirements encountered in these applications. The authors address these requirements via a security-enhanced version of the Nexus communication library, which they use to provide secure versions of parallel libraries and languages, including the Message Passing Interface. These tools permit a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. The authors present performance results that quantify the performance of their infrastructure.

  12. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth Systems Prediction Capability Becomes Operational

    Science.gov (United States)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and from passing data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the scalability of the Navy's Hybrid Coordinate Ocean Model (HYCOM) - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will

  13. Molecular dynamics-based virtual screening: accelerating the drug discovery process by high-performance computing.

    Science.gov (United States)

    Ge, Hu; Wang, Yu; Li, Chanjuan; Chen, Nanhao; Xie, Yufang; Xu, Mengyan; He, Yingyan; Gu, Xinchun; Wu, Ruibo; Gu, Qiong; Zeng, Liang; Xu, Jun

    2013-10-28

    High-performance computing (HPC) has become a state strategic technology in a number of countries. One hypothesis is that HPC can accelerate biopharmaceutical innovation. Our experimental data demonstrate that HPC can significantly accelerate biopharmaceutical innovation by employing molecular dynamics-based virtual screening (MDVS). Without HPC, MDVS for a 10K compound library with tens of nanoseconds of MD simulations requires years of computer time. In contrast, a state-of-the-art HPC system can be 600 times faster than an eight-core PC server in screening a typical drug target (which contains about 40K atoms). Also, careful design of the GPU/CPU architecture can reduce the HPC costs. However, the communication cost of parallel computing is a bottleneck that remains the main limit on further virtual screening improvements for drug innovation.

  14. Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments

    Directory of Open Access Journals (Sweden)

    Jose M. Moya

    2012-08-01

    Full Text Available Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.
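
    The assignment idea described above can be pictured as a small scheduling problem: each low-demand task is placed on the idle WSN node (or, failing that, the data center) that adds the least energy while still meeting the task's resource demand. The following is a minimal greedy sketch of such an application-aware, heterogeneity-aware policy; the node capacities, power figures, and task demands are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float          # available compute units (e.g., MIPS)
    power_per_unit: float    # marginal energy cost per compute unit
    load: float = 0.0

@dataclass
class Task:
    name: str
    demand: float            # compute units required

def assign_tasks(tasks, wsn_nodes, datacenter):
    """Greedily place each task on the feasible node with the lowest marginal
    energy cost, preferring idle WSN nodes over the data center."""
    placement = {}
    for task in sorted(tasks, key=lambda t: t.demand, reverse=True):
        candidates = [n for n in wsn_nodes if n.capacity - n.load >= task.demand]
        target = min(candidates, key=lambda n: n.power_per_unit) if candidates else datacenter
        target.load += task.demand
        placement[task.name] = target.name
    return placement

# Illustrative numbers only.
wsn = [Node("gateway-1", capacity=4, power_per_unit=0.5),
       Node("gateway-2", capacity=2, power_per_unit=0.8)]
dc = Node("datacenter", capacity=1000, power_per_unit=3.0)
tasks = [Task("aggregate-traffic", 3), Task("filter-noise", 1), Task("train-model", 20)]

print(assign_tasks(tasks, wsn, dc))
# Low-demand tasks land on the gateways; only the heavy job goes to the data center.
```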

  15. Ubiquitous green computing techniques for high demand applications in Smart environments.

    Science.gov (United States)

    Zapater, Marina; Sanchez, Cesar; Ayala, Jose L; Moya, Jose M; Risco-Martín, José L

    2012-01-01

    Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.

  16. A new massively parallel version of CRYSTAL for large systems on high performance computing architectures.

    Science.gov (United States)

    Orlando, Roberto; Delle Piane, Massimo; Bush, Ian J; Ugliengo, Piero; Ferrabone, Matteo; Dovesi, Roberto

    2012-10-30

    Fully ab initio treatment of complex solid systems needs computational software which is able to efficiently take advantage of the growing power of high performance computing (HPC) architectures. Recent improvements in CRYSTAL, a periodic ab initio code that uses a Gaussian basis set, allow treatment of very large unit cells for crystalline systems on HPC architectures with high parallel efficiency in terms of running time and memory requirements. The latter is a crucial point, due to the trend toward architectures relying on a very high number of cores with associated relatively low memory availability. An exhaustive performance analysis shows that density functional calculations, based on a hybrid functional, of low-symmetry systems containing up to 100,000 atomic orbitals and 8000 atoms are feasible on the most advanced HPC architectures available to European researchers today, using thousands of processors.

  17. Design requirements for high-efficiency high concentration ratio space solar cells

    Science.gov (United States)

    Rauschenbach, H.; Patterson, R.

    1980-01-01

    A miniaturized Cassegrainian concentrator system concept was developed for low cost, multikilowatt space solar arrays. The system imposes some requirements on solar cells which are new and different from those imposed for conventional applications. The solar cells require a circular active area of approximately 4 mm in diameter. High reliability contacts are required on both front and back surfaces. The back area must be metallurgically bonded to a heat sink. The cell should be designed to achieve the highest practical efficiency at 100 AM0 suns and at 80 C. The cell design must minimize losses due to nonuniform illumination intensity and nonnormal light incidence. The primary radiation concern is the omnidirectional proton environment.

  18. High Energy Physics and Nuclear Physics Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dart, Eli; Bauerdick, Lothar; Bell, Greg; Ciuffo, Leandro; Dasu, Sridhara; Dattoria, Vince; De, Kaushik; Ernst, Michael; Finkelson, Dale; Gottleib, Steven; Gutsche, Oliver; Habib, Salman; Hoeche, Stefan; Hughes-Jones, Richard; Ibarra, Julio; Johnston, William; Kisner, Theodore; Kowalski, Andy; Lauret, Jerome; Luitz, Steffen; Mackenzie, Paul; Maguire, Chales; Metzger, Joe; Monga, Inder; Ng, Cho-Kuen; Nielsen, Jason; Price, Larry; Porter, Jeff; Purschke, Martin; Rai, Gulshan; Roser, Rob; Schram, Malachi; Tull, Craig; Watson, Chip; Zurawski, Jason

    2014-03-02

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements needed by instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In August 2013, ESnet and the DOE SC Offices of High Energy Physics (HEP) and Nuclear Physics (NP) organized a review to characterize the networking requirements of the programs funded by the HEP and NP program offices. Several key findings resulted from the review. Among them: 1. The Large Hadron Collider's ATLAS (A Toroidal LHC Apparatus) and CMS (Compact Muon Solenoid) experiments are adopting remote input/output (I/O) as a core component of their data analysis infrastructure. This will significantly increase their demands on the network from both a reliability perspective and a performance perspective. 2. The Large Hadron Collider (LHC) experiments (particularly ATLAS and CMS) are working to integrate network awareness into the workflow systems that manage the large number of daily analysis jobs (1 million analysis jobs per day for ATLAS), which are an integral part of the experiments. Collaboration with networking organizations such as ESnet, and the consumption of performance data (e.g., from perfSONAR [PERformance Service Oriented Network monitoring Architecture]) are critical to the success of these efforts. 3. The international aspects of HEP and NP collaborations continue to expand. This includes the LHC experiments, the Relativistic Heavy Ion Collider (RHIC) experiments, the Belle II Collaboration, the Large Synoptic Survey Telescope (LSST), and others. The international nature of these collaborations makes them heavily

  19. Microwave Tomographic Imaging of Cerebrovascular Accidents by Using High-Performance Computing

    CERN Document Server

    Tournier, P -H; Bonazzoli, M; de Buhan, M; Darbas, M; Dolean, V; Hecht, F; Jolivet, P; Kanfoud, I El; Migliaccio, C; Nataf, F; Pichot, C; Semenov, S

    2016-01-01

    The motivation of this work is the detection of cerebrovascular accidents by microwave tomographic imaging. This requires the solution of an inverse problem relying on a minimization algorithm (for example, gradient-based), where successive iterations consist in repeated solutions of a direct problem. The reconstruction algorithm is extremely computationally intensive and makes use of efficient parallel algorithms and high-performance computing. The feasibility of this type of imaging is conditioned on one hand by an accurate reconstruction of the material properties of the propagation medium and on the other hand by a considerable reduction in simulation time. Fulfilling these two requirements will enable a very rapid and accurate diagnosis. From the mathematical and numerical point of view, this means solving Maxwell's equations in time-harmonic regime by appropriate domain decomposition methods, which are naturally adapted to parallel architectures.
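
    The reconstruction loop sketched in the abstract, a minimization whose every iteration solves a direct (forward) problem, has the generic structure shown below. This toy Python example uses a linear forward model and plain gradient descent in place of Maxwell's equations and domain decomposition, purely to show where the repeated direct solves sit; the operator, data, and step size are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "direct problem": a fixed linear operator mapping material parameters
# to measured fields. In the real application this is a full Maxwell solve.
A = rng.normal(size=(60, 20))
true_params = rng.normal(size=20)
measurements = A @ true_params + 0.01 * rng.normal(size=60)

def direct_solve(params):
    """Stand-in for the expensive forward simulation."""
    return A @ params

def misfit_gradient(params):
    residual = direct_solve(params) - measurements
    return A.T @ residual          # gradient of 0.5 * ||A p - d||^2

params = np.zeros(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size for this linear toy problem
for it in range(200):
    # Each iteration costs one direct solve plus one adjoint-like application.
    params -= step * misfit_gradient(params)

print("relative error:", np.linalg.norm(params - true_params) / np.linalg.norm(true_params))
```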

  20. Quantitative analysis of cholesteatoma using high resolution computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kikuchi, Shigeru; Yamasoba, Tatsuya (Kameda General Hospital, Chiba (Japan)); Iinuma, Toshitaka

    1992-05-01

    Seventy-three cases of adult cholesteatoma, including 52 cases of pars flaccida type cholesteatoma and 21 of pars tensa type cholesteatoma, were examined using high resolution computed tomography, in both axial (lateral semicircular canal plane) and coronal sections (cochlear, vestibular and antral plane). These cases were classified into two subtypes according to the presence of extension of cholesteatoma into the antrum. Sixty cases of chronic otitis media with central perforation (COM) were also examined as controls. The sizes of various locations in the middle ear cavity were measured and compared among pars flaccida type cholesteatoma, pars tensa type cholesteatoma and COM. The width of the attic was significantly larger in both pars flaccida type and pars tensa type cholesteatoma than in COM. With pars flaccida type cholesteatoma there was a significantly larger distance between the malleus and lateral wall of the attic than with COM. In contrast, the distance between the malleus and medial wall of the attic was significantly larger with pars tensa type cholesteatoma than with COM. With cholesteatoma extending into the antrum, regardless of the type of cholesteatoma, there were significantly larger distances than with COM at the following sites: the width and height of the aditus ad antrum, and the width, height and anterior-posterior diameter of the antrum. However, these distances were not significantly different between cholesteatoma without extension into the antrum and COM. The hitherto demonstrated qualitative impressions of bone destruction in cholesteatoma were quantitatively verified in detail using high resolution computed tomography. (author).

  1. High performance parallel computing of flows in complex geometries: II. Applications

    Energy Technology Data Exchange (ETDEWEB)

    Gourdain, N; Gicquel, L; Staffelbach, G; Vermorel, O; Duchaine, F; Boussuge, J-F [Computational Fluid Dynamics Team, CERFACS, Toulouse, 31057 (France); Poinsot, T [Institut de Mecanique des Fluides de Toulouse, Toulouse, 31400 (France)], E-mail: Nicolas.gourdain@cerfacs.fr

    2009-01-01

    Present regulations on pollutant emissions and noise, together with economic constraints, require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and from considering the entire system and not only isolated components. However, these aspects are still not well taken into account by numerical approaches or well understood, whatever the design stage considered. The main challenge stems from the computational requirements that such complex systems impose if they are to be simulated on supercomputers. This paper shows how these new challenges can be addressed by using parallel computing platforms for distinct elements of more complex systems, as encountered in aeronautical applications. Based on numerical simulations performed with modern aerodynamic and reactive flow solvers, this work underlines the interest of high-performance computing for solving flow in complex industrial configurations such as aircraft, combustion chambers and turbomachines. Performance indicators related to parallel computing efficiency are presented, showing that establishing fair criteria is a difficult task for complex industrial applications. Examples of numerical simulations performed in industrial systems are also described, with particular attention to the computational time and the potential design improvements obtained with high-fidelity and multi-physics computing methods. These simulations use either unsteady Reynolds-averaged Navier-Stokes methods or large eddy simulation and deal with turbulent unsteady flows, such as coupled flow phenomena (thermo-acoustic instabilities, buffet, etc.). Some examples of the difficulties with grid generation and data analysis are also presented when dealing with these complex industrial applications.

  2. Communication Requirements and Interconnect Optimization forHigh-End Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Kamil, Shoaib; Oliker, Leonid; Pinar, Ali; Shalf, John

    2007-11-12

    The path towards realizing peta-scale computing is increasingly dependent on building supercomputers with unprecedented numbers of processors. To prevent the interconnect from dominating the overall cost of these ultra-scale systems, there is a critical need for high-performance network solutions whose costs scale linearly with system size. This work makes several unique contributions towards attaining that goal. First, we conduct one of the broadest studies to date of high-end application communication requirements, whose computational methods include: finite-difference, lattice-Boltzmann, particle-in-cell, sparse linear algebra, particle mesh Ewald, and FFT-based solvers. To efficiently collect this data, we use the IPM (Integrated Performance Monitoring) profiling layer to gather detailed messaging statistics with minimal impact to code performance. Using the derived communication characterizations, we next present fit-tree interconnects, a novel approach for designing network infrastructure at a fraction of the component cost of traditional fat-tree solutions. Finally, we propose the Hybrid Flexibly Assignable Switch Topology (HFAST) infrastructure, which uses both passive (circuit) and active (packet) commodity switch components to dynamically reconfigure interconnects to suit the topological requirements of scientific applications. Overall our exploration leads to promising directions for practically addressing the interconnect requirements of future peta-scale systems.

  3. Investigation of Vocational High-School Students' Computer Anxiety

    Science.gov (United States)

    Tuncer, Murat; Dogan, Yunus; Tanas, Ramazan

    2013-01-01

    With the advent of the computer technologies, we are increasingly encountering these technologies in every field of life. The fact that the computer technology is so much interwoven with the daily life makes it necessary to investigate certain psychological attitudes of those working with computers towards computers. As this study is limited to…

  4. High Hardware Utilization and Low Memory Block Requirement Decoding of QC-LDPC Codes

    Institute of Scientific and Technical Information of China (English)

    ZHAO Ling; LIU Rongke; HOU Yi; ZHANG Xiaolin

    2012-01-01

    This paper presents a simple yet effective decoding scheme for general quasi-cyclic low-density parity-check (QC-LDPC) codes, which not only achieves high hardware utility efficiency (HUE), but also brings about a great memory block reduction without any performance degradation. The main idea is to split the check matrix into several row blocks, and then to perform the improved message passing computations sequentially, block by block. As the decoding algorithm improves, the sequential tie between the two-phase computations is broken, so that the two phases can be overlapped, which brings in high HUE. Two overlapping schemes are also presented, each of which suits a different situation. In addition, an efficient memory arrangement scheme is proposed to reduce the great memory block requirement of the LDPC decoder. As an example, for the 0.4-rate LDPC code selected from Chinese Digital TV Terrestrial Broadcasting (DTTB), our decoding saves over 80% of the memory blocks compared with conventional decoding, and the decoder achieves 0.97 HUE. Finally, the 0.4-rate LDPC decoder is implemented on an FPGA device EP2S30 (speed grade -5). Using 8 row processing units, the decoder can achieve a maximum net throughput of 28.5 Mbps at 20 iterations.
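
    The row-block sequential processing described above corresponds to what is commonly called layered decoding. The sketch below shows the software view of that schedule using a min-sum update on a tiny toy parity-check matrix (not the DTTB code); in hardware, the two phases inside each block are what get overlapped. This is an illustrative reimplementation of a standard layered min-sum decoder, not the authors' architecture, and the toy code, LLR values, and block split are assumptions.

```python
import numpy as np

def layered_min_sum(H, llr_in, row_blocks, max_iter=20):
    """Layered (row-block sequential) min-sum decoding of an LDPC code.

    H          : binary parity-check matrix (numpy array, shape m x n)
    llr_in     : channel log-likelihood ratios; positive means bit 0 is more likely
    row_blocks : list of lists of check-row indices, processed one block at a time
    """
    m, n = H.shape
    L = llr_in.astype(float).copy()          # running posterior LLRs
    R = np.zeros((m, n))                     # check-to-variable messages
    neighbors = [np.flatnonzero(H[i]) for i in range(m)]

    for _ in range(max_iter):
        for block in row_blocks:             # blocks run sequentially;
            for i in block:                  # this inner work is what gets overlapped in hardware
                cols = neighbors[i]
                Q = L[cols] - R[i, cols]     # variable-to-check messages for this layer
                sign = np.prod(np.sign(Q)) * np.sign(Q)   # product of the *other* signs
                mags = np.abs(Q)
                order = np.argsort(mags)     # min over the others via the two smallest magnitudes
                min1, min2 = mags[order[0]], mags[order[1]]
                other_min = np.where(np.arange(len(cols)) == order[0], min2, min1)
                R[i, cols] = sign * other_min
                L[cols] = Q + R[i, cols]     # immediate (layered) update
        hard = (L < 0).astype(int)
        if not np.any(H @ hard % 2):         # stop early if the syndrome is zero
            break
    return hard

# Toy example: a small 3x7 parity-check matrix split into 2 row blocks.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.1, -0.4, 1.5, 1.8, 1.2, 0.9, 1.1])   # noisy all-zeros codeword
print(layered_min_sum(H, llr, row_blocks=[[0, 1], [2]]))
```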

  5. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  6. Requirements for Large Eddy Simulation Computations of Variable-Speed Power Turbine Flows

    Science.gov (United States)

    Ameri, Ali A.

    2016-01-01

    Variable-speed power turbines (VSPTs) operate at low Reynolds numbers and with a wide range of incidence angles. Transition, separation, and the relevant physics leading to them are important to VSPT flow. Higher fidelity tools such as large eddy simulation (LES) may be needed to resolve the flow features necessary for accurate predictive capability and design of such turbines. A survey conducted for this report explores the requirements for such computations. The survey is limited to the simulation of two-dimensional flow cases and endwalls are not included. It suggests that the grid resolution necessary for this type of simulation to accurately represent the physics may be of the order of Delta(x)+ = 45, Delta(y)+ = 2 and Delta(z)+ = 17. Various subgrid-scale (SGS) models have been used and, except for the Smagorinsky model, all seem to perform well; in some instances the simulations worked well without SGS modeling. A method of specifying the inlet conditions, such as synthetic eddy modeling (SEM), is necessary to correctly represent the inlet conditions.
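
    The near-wall resolution targets quoted above (Delta(x)+, Delta(y)+, Delta(z)+) translate into physical grid spacings through the viscous length scale nu/u_tau. The short example below shows that conversion for assumed flow conditions; the friction velocity, viscosity, and target values are placeholders, not the VSPT numbers.

```python
# Convert wall-unit resolution targets to physical grid spacings:
#   delta_plus = delta * u_tau / nu   =>   delta = delta_plus * nu / u_tau
nu = 1.5e-5        # kinematic viscosity, m^2/s (air, assumed)
u_tau = 0.8        # friction velocity, m/s (assumed)
viscous_length = nu / u_tau

targets = {"dx+": 45.0, "dy+": 2.0, "dz+": 17.0}
for name, plus in targets.items():
    print(f"{name} = {plus:5.1f}  ->  {plus * viscous_length * 1e6:8.1f} micrometres")
```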

  7. Requirements for Control Room Computer-Based Procedures for use in Hybrid Control Rooms

    Energy Technology Data Exchange (ETDEWEB)

    Le Blanc, Katya Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States); Oxstrand, Johanna Helene [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey Clark [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-05-01

    Many plants in the U.S. are currently undergoing control room modernization. The main drivers for modernization are the aging and obsolescence of existing equipment, which typically results in a like-for-like replacement of analogue equipment with digital systems. However, the modernization efforts present an opportunity to employ advanced technology that would not only extend the life, but also enhance the efficiency and cost competitiveness, of nuclear power. Computer-based procedures (CBPs) are one example of near-term advanced technology that may provide enhanced efficiencies above and beyond like-for-like replacements of analog systems. Researchers in the LWRS program are investigating the benefits of advanced technologies such as CBPs, with the goal of assisting utilities in decision making during modernization projects. This report will describe the existing research on CBPs, discuss the unique issues related to using CBPs in hybrid control rooms (i.e., partially modernized analog control rooms), and define the requirements of CBPs for hybrid control rooms.

  8. 15 CFR 743.2 - High performance computers: Post shipment verification reporting.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false High performance computers: Post... ADMINISTRATION REGULATIONS SPECIAL REPORTING § 743.2 High performance computers: Post shipment verification... certain computers to destinations in Computer Tier 3, see § 740.7(d) for a list of these destinations...

  9. Real-time computer treatment of THz passive device images with the high image quality

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for THz passive devices: it can be applied to any such device, as well as to active THz imaging systems. We applied our code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code is greater than one image per second for a THz image having more than 5000 pixels and 24-bit number representation. Processing of a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The computer code allows increasing the number of pixels of processed images without noticeable reduction of image quality, and its performance can be increased many times by using parallel algorithms for processing the image. We develop original spatial filters which allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices which captured images of objects hidden under opaque clothes. For images with high noise we develop an approach which suppresses the noise during computer processing and yields a good quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of liquid explosives, ordinary explosives, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and are a very promising solution for the security problem.
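
    The article does not disclose its proprietary spatial filters, but the general processing pattern it describes, applying several spatial filters to one noisy frame and producing a set of candidate images, can be sketched as below. The kernels shown (mean, sharpening, median) are generic stand-ins chosen for illustration, not the filters developed by the authors, and the frame size is arbitrary.

```python
import numpy as np

def spatial_filter(img, kernel):
    """Naive sliding-window filtering with zero padding (kernel not flipped;
    equivalent to convolution for the symmetric kernels used here)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * kernel)
    return out

def median_filter(img, size=3):
    """Median filtering, a common choice for suppressing impulsive noise."""
    p = size // 2
    padded = np.pad(img, p, mode="edge")
    out = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.median(padded[r:r + size, c:c + size])
    return out

# Generic example filters; the article's own filters are not published here.
FILTERS = {
    "mean3x3": np.ones((3, 3)) / 9.0,
    "sharpen": np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float),
}

def process_frame(frame):
    """Produce several filtered versions of one frame, mirroring the paper's
    ~20 simultaneous variants per captured image."""
    results = {name: spatial_filter(frame, k) for name, k in FILTERS.items()}
    results["median3x3"] = median_filter(frame)
    return results

noisy_frame = np.random.default_rng(0).normal(size=(80, 64))  # stand-in for a >5000-pixel image
variants = process_frame(noisy_frame)
print({name: img.shape for name, img in variants.items()})
```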

  10. Computation of High-Frequency Waves with Random Uncertainty

    KAUST Repository

    Malenova, Gabriela

    2016-01-06

    We consider the forward propagation of uncertainty in high-frequency waves, described by the second order wave equation with highly oscillatory initial data. The main sources of uncertainty are the wave speed and/or the initial phase and amplitude, described by a finite number of random variables with known joint probability distribution. We propose a stochastic spectral asymptotic method [1] for computing the statistics of uncertain output quantities of interest (QoIs), which are often linear or nonlinear functionals of the wave solution and its spatial/temporal derivatives. The numerical scheme combines two techniques: a high-frequency method based on Gaussian beams [2, 3] and a sparse stochastic collocation method [4]. The fast spectral convergence of the proposed method depends crucially on the presence of high stochastic regularity of the QoI independent of the wave frequency. In general, the high-frequency wave solutions to parametric hyperbolic equations are highly oscillatory and non-smooth in both physical and stochastic spaces. Consequently, the stochastic regularity of the QoI, which is a functional of the wave solution, may in principle be low and depend on the frequency. In the present work, we provide theoretical arguments and numerical evidence that physically motivated QoIs based on local averages of |uε|² are smooth, with derivatives in the stochastic space uniformly bounded in ε, where uε and ε denote the highly oscillatory wave solution and the short wavelength, respectively. This observable-related regularity makes the proposed approach more efficient than current asymptotic approaches based on Monte Carlo sampling techniques.
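
    A quick numerical illustration of the regularity claim above: for a simple one-dimensional plane wave uε(x, t) = cos((x − c t)/ε), the point values oscillate rapidly in both x and the random speed c, but a local average of |uε|² over a fixed window stays near 1/2 and varies smoothly with c, largely independently of ε. The snippet below is only a toy check of that statement, not the paper's Gaussian-beam/collocation method; the wave form, window, and speed range are assumptions.

```python
import numpy as np

def qoi_local_average(c, eps, t=1.0, window=(0.0, 0.5), npts=20001):
    """Local average of |u_eps|^2 over a fixed window, for u_eps(x,t) = cos((x - c*t)/eps)."""
    x = np.linspace(window[0], window[1], npts)
    u = np.cos((x - c * t) / eps)
    return np.mean(u**2)

speeds = np.linspace(0.8, 1.2, 9)   # samples of the uncertain wave speed
for eps in (1e-2, 1e-3):
    qoi = [qoi_local_average(c, eps) for c in speeds]
    # The QoI stays near 0.5 and varies slowly and smoothly with c for either eps,
    # even though u_eps itself oscillates on the scale of eps.
    print(f"eps={eps:g}:", np.round(qoi, 4))
```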

  11. Computational modeling of high pressure combustion mechanism in scram accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.Y. [Pusan Nat. Univ. (Korea); Lee, B.J. [Pusan Nat. Univ. (Korea); Agency for Defense Development, Taejon (Korea); Jeung, I.S. [Pusan Nat. Univ. (Korea); Seoul National Univ. (Korea). Dept. of Aerospace Engineering

    2000-11-01

    A computational study was carried out to analyze high-pressure combustion in a scram accelerator. Fluid dynamic modeling was based on RANS equations for reactive flows, which were solved in a fully coupled manner using a fully implicit-upwind TVD scheme. For the accurate simulation of high-pressure combustion in a ram accelerator, a 9-species, 25-step fully detailed reaction mechanism was incorporated into the existing CFD code previously used for the ram accelerator studies. The mechanism is based on GRI-Mech 2.11, which includes pressure-dependent reaction rate formulations indispensable for the correct prediction of induction time in a high-pressure environment. A real gas equation of state was also included to account for molecular interactions and real gas effects of high-pressure gases. The present combustion modeling is compared with previous 8-step and 19-step mechanisms with the ideal gas assumption. The results show that mixture ignition characteristics are very sensitive to the combustion mechanism, and different mechanisms result in different reactive flow-field characteristics that have significant relevance to the operation mode and the performance of the scram accelerator. (orig.)

  12. Computational Fluid Dynamics Analysis of High Injection Pressure Blended Biodiesel

    Science.gov (United States)

    Khalid, Amir; Jaat, Norrizam; Faisal Hushim, Mohd; Manshoor, Bukhari; Zaman, Izzuddin; Sapit, Azwan; Razali, Azahari

    2017-08-01

    Biodiesel has great potential as a substitute for petroleum fuel for the purpose of achieving clean energy production and emission reduction. Among the methods that can control the combustion properties, controlling the fuel injection conditions is one of the most successful. The purpose of this study is to investigate the effect of high injection pressure of biodiesel blends on spray characteristics using Computational Fluid Dynamics (CFD). Injection pressure was observed at 220 MPa, 250 MPa and 280 MPa. The ambient temperature was held at 1050 K and the ambient pressure at 8 MPa in order to simulate the effect of boost pressure or a turbocharger during the combustion process. Computational Fluid Dynamics was used to investigate the spray characteristics of biodiesel blends, such as spray penetration length, spray angle and mixture formation of fuel-air mixing. The results show that, as injection pressure increases, a wider spray angle is produced by both biodiesel blends and diesel fuel. The injection pressure strongly affects the mixture formation and the characteristics of the fuel spray; a longer spray penetration length promotes fuel-air mixing.

  13. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    Science.gov (United States)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  15. Highly versatile computer-controlled television detector system

    Science.gov (United States)

    Kalata, K.

    1982-01-01

    A description is presented of a television detector system which has been designed to accommodate a wide range of applications. It is currently being developed for use in X-ray diffraction, X-ray astrophysics, and electron microscopy, but it is also well suited for astronomical observations. The image can be integrated in a large, high-speed memory system, in the memory of a computer system, or in the target of the TV tube or CCD array. The detector system consists of a continuously scanned, intensified SIT vidicon with scan and processing electronics which generate a digital image that is integrated in the detector memory. Attention is given to details regarding the camera system, scan control and image processing electronics, the memory system, and aspects of detector performance.

  16. Derivation Of Probabilistic Damage Definitions From High Fidelity Deterministic Computations

    Energy Technology Data Exchange (ETDEWEB)

    Leininger, L D

    2004-10-26

    This paper summarizes a methodology used by the Underground Analysis and Planning System (UGAPS) at Lawrence Livermore National Laboratory (LLNL) for the derivation of probabilistic damage curves for US Strategic Command (USSTRATCOM). UGAPS uses high fidelity finite element and discrete element codes on massively parallel supercomputers to predict damage to underground structures from military interdiction scenarios. These deterministic calculations can be riddled with uncertainty, especially when intelligence, the basis for this modeling, is uncertain. The technique presented here attempts to account for this uncertainty by bounding the problem with reasonable cases and using those bounding cases as a statistical sample. Probability-of-damage curves that account for uncertainty within the sample are computed and presented, enabling the war planner to make informed decisions. This work is flexible enough to incorporate any desired damage mechanism and can utilize the variety of finite element and discrete element codes within the national laboratory and government contractor community.
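
    The bounding-case approach described above treats a small set of deterministic runs as a statistical sample and reads a probability of damage off that sample. A minimal sketch of that last step is shown below: given the damage metric produced by each bounding calculation, an empirical probability-of-damage curve is the fraction of cases meeting or exceeding each damage level. The damage values and threshold grid are invented for illustration and do not come from the paper.

```python
import numpy as np

def probability_of_damage_curve(damage_samples, thresholds):
    """Empirical P(damage >= threshold) from a set of bounding deterministic runs."""
    samples = np.asarray(damage_samples, dtype=float)
    return np.array([(samples >= t).mean() for t in thresholds])

# Hypothetical damage metric (e.g., fraction of structure failed) from bounding cases.
bounding_case_damage = [0.15, 0.32, 0.40, 0.47, 0.61, 0.78, 0.83]
thresholds = np.linspace(0.0, 1.0, 11)
curve = probability_of_damage_curve(bounding_case_damage, thresholds)
for t, p in zip(thresholds, curve):
    print(f"P(damage >= {t:.1f}) = {p:.2f}")
```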

  17. A Component Architecture for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; Elwasif, W R; Kohl, J A; Epperly, T G W

    2003-01-21

    The Common Component Architecture (CCA) provides a means for developers to manage the complexity of large-scale scientific software systems and to move toward a "plug and play" environment for high-performance computing. The CCA model allows for a direct connection between components within the same process to maintain performance on inter-component calls. It is neutral with respect to parallelism, allowing components to use whatever means they desire to communicate within their parallel "cohort." We will discuss in detail the importance of performance in the design of the CCA and will analyze the performance costs associated with features of the CCA.

  18. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    Science.gov (United States)

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

    Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously.

  19. GBU-X bounding requirements for highly flexible munitions

    Science.gov (United States)

    Bagby, Patrick T.; Shaver, Jonathan; White, Reed; Cafarelli, Sergio; Hébert, Anthony J.

    2017-04-01

    This paper will present the results of an investigation into requirements for existing software and hardware solutions for open digital communication architectures that support weapon subsystem integration. The underlying requirement of such a communication architecture is to achieve the lowest latency possible at a reasonable cost point with respect to the mission objective of the weapon. The latency requirements of the open-architecture software and hardware were derived through the use of control system and stability margin analyses. Studies were performed on the throughput and latency of different existing communication transport methods. The two architectures that were tested in this study are Data Distribution Service (DDS) and Modular Open Network Architecture (MONARCH). This paper defines what levels of latency can be achieved with current technology and how this capability may translate to future weapons. The requirements moving forward for communications solutions are also discussed.

  20. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    Science The views, opinions and/or findings contained in this report are those of the author(s) and should not be construed as an official Department of the...System for Computational and Computer Science Report Title This DoD HBC/MI Equipment/Instrumentation grant was awarded in October 2014 for the purchase...Computing (HPC) course taught in the department of computer science so as to attract more graduate students from many disciplines where their research

  1. HIGH-FIDELITY SIMULATION-DRIVEN MODEL DEVELOPMENT FOR COARSE-GRAINED COMPUTATIONAL FLUID DYNAMICS

    Energy Technology Data Exchange (ETDEWEB)

    Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.

    2016-06-01

    Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For a full-scale nuclear power plant system behavior, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase flow convective problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide us with high fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for simulation of long transient scenarios in nuclear accidents despite extraordinary advances in high performance scientific computing over the past decades. The major issue is the inability to make the transient computation parallel, which makes the number of time steps required by high-fidelity methods unaffordable for long transients. In this work, we propose to apply a high fidelity simulation-driven approach to model sub-grid scale (SGS) effects in Coarse-Grained Computational Fluid Dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of a deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and an isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as in containment mixing and passive cooling. The presented approach demonstrates how to create a correction for the CG-CFD solution by modifying the energy balance equation. A global correction to the temperature equation achieves a significant improvement in the prediction of the steady-state temperature distribution through the fluid layer.
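
    As a schematic of the simulation-driven surrogate idea described above: coarse-grid fields and the corresponding high-fidelity data are used to fit a statistical model of the missing sub-grid contribution, and that model then supplies a correction term in the coarse-grid energy (temperature) equation. The sketch below fits a simple polynomial surrogate on synthetic data; the features, data, and functional form are placeholders, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "training data": coarse-grid features vs. the sub-grid correction that
# would make the coarse solution match the high-fidelity (DNS/LES) reference.
coarse_grad_T = rng.uniform(-1.0, 1.0, 500)            # resolved temperature gradient
coarse_vel = rng.uniform(0.0, 2.0, 500)                # resolved velocity magnitude
true_correction = 0.3 * coarse_grad_T * coarse_vel - 0.1 * coarse_grad_T**2
noisy_correction = true_correction + 0.02 * rng.normal(size=500)

# Fit a quadratic surrogate correction(grad_T, vel) by least squares.
features = np.column_stack([
    np.ones_like(coarse_grad_T), coarse_grad_T, coarse_vel,
    coarse_grad_T * coarse_vel, coarse_grad_T**2, coarse_vel**2,
])
coeffs, *_ = np.linalg.lstsq(features, noisy_correction, rcond=None)

def sgs_correction(grad_T, vel):
    """Surrogate source term to be added to the coarse-grid energy equation."""
    f = np.array([1.0, grad_T, vel, grad_T * vel, grad_T**2, vel**2])
    return f @ coeffs

print("predicted correction at (grad_T=0.5, vel=1.0):", round(sgs_correction(0.5, 1.0), 4))
print("reference value:", 0.3 * 0.5 * 1.0 - 0.1 * 0.25)
```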

  2. High-Performance Special-Purpose Computers in Science

    OpenAIRE

    1998-01-01

    The next decade will be an exciting time for computational physicists. After 50 years of being forced to use standardized commercial equipment, it will finally become relatively straightforward to adapt one's computing tools to one's own needs. The breakthrough that opens this new era is the now wide-spread availability of programmable chips that allow virtually every computational scientist to design his or her own special-purpose computer.

  3. Development of a computer model to predict platform station keeping requirements in the Gulf of Mexico using remote sensing data

    Science.gov (United States)

    Barber, Bryan; Kahn, Laura; Wong, David

    1990-01-01

    Offshore operations such as oil drilling and radar monitoring require semisubmersible platforms to remain stationary at specific locations in the Gulf of Mexico. Ocean currents, wind, and waves in the Gulf of Mexico tend to move platforms away from their desired locations. A computer model was created to predict the station keeping requirements of a platform. The computer simulation uses remote sensing data from satellites and buoys as input. A background of the project, alternate approaches to the project, and the details of the simulation are presented.

  4. FY 1995 Blue Book: High Performance Computing and Communications: Technology for the National Information Infrastructure

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program was created to accelerate the development of future generations of high performance computers...

  5. Confocal 3D DNA Cytometry: Assessment of Required Coefficient of Variation by Computer Simulation

    Directory of Open Access Journals (Sweden)

    Lennert S. Ploeger

    2004-01-01

    Full Text Available Background: Confocal Laser Scanning Microscopy (CLSM) provides the opportunity to perform 3D DNA content measurements on intact cells in thick histological sections. So far, sample size has been limited by the time-consuming nature of the technology. Since the power of DNA histograms to resolve different stemlines depends on both the sample size and the coefficient of variation (CV) of histogram peaks, interpretation of 3D CLSM DNA histograms might be hampered by both a small sample size and a large CV. The aim of this study was to analyze the required CV for 3D CLSM DNA histograms given a realistic sample size. Methods: By computer simulation, virtual histograms were composed for sample sizes of 20000, 10000, 5000, 1000, and 273 cells and CVs of 30, 25, 20, 15, 10 and 5%. By visual inspection, the histogram quality with respect to resolution of G0/1 and G2/M peaks of a diploid stemline was assessed. Results: As expected, the interpretability of DNA histograms deteriorated with decreasing sample sizes and higher CVs. For CVs of 15% and lower, a clearly bimodal peak pattern with well distinguishable G0/1 and G2/M peaks was still seen at a sample size of 273 cells, which is our current average sample size with 3D CLSM DNA cytometry. Conclusions: For unambiguous interpretation of DNA histograms obtained using 3D CLSM, a CV of at most 15% is tolerable at currently achievable sample sizes. To resolve smaller near-diploid stemlines, a CV of 10% or better should be aimed at. With currently available 3D imaging technology, this CV is achievable.
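
    The simulation experiment described above is straightforward to reproduce in outline: draw a G0/1 population and a smaller G2/M population from normal distributions whose spread is set by the CV, and inspect whether the two peaks remain separable for a given sample size. The snippet below is an illustrative reimplementation of that idea, not the authors' code; the 85/15 phase split, bin count, and separability heuristic are assumptions.

```python
import numpy as np

def simulate_dna_histogram(n_cells, cv_percent, g2m_fraction=0.15, bins=128, seed=0):
    """Simulate a diploid DNA histogram: G0/1 peak at 2C, G2/M peak at 4C,
    with peak width set by the coefficient of variation."""
    rng = np.random.default_rng(seed)
    n_g2m = int(n_cells * g2m_fraction)
    n_g01 = n_cells - n_g2m
    cv = cv_percent / 100.0
    g01 = rng.normal(2.0, 2.0 * cv, n_g01)     # 2C content, sd = CV * mean
    g2m = rng.normal(4.0, 4.0 * cv, n_g2m)     # 4C content
    counts, edges = np.histogram(np.concatenate([g01, g2m]), bins=bins, range=(0, 6))
    return counts, edges

# Inspect peak separability at the paper's realistic sample size for several CVs.
for cv in (5, 10, 15, 20, 25, 30):
    counts, edges = simulate_dna_histogram(n_cells=273, cv_percent=cv)
    # Crude separability check: is there a clear dip between the 2C and 4C regions?
    dip = counts[(edges[:-1] > 2.6) & (edges[:-1] < 3.4)].min()
    peak = counts[(edges[:-1] > 1.6) & (edges[:-1] < 2.4)].max()
    print(f"CV={cv:2d}%  G0/1 peak height={peak:3d}  inter-peak dip={dip:3d}")
```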

  6. High-throughput landslide modelling using computational grids

    Science.gov (United States)

    Wallace, M.; Metson, S.; Holcombe, L.; Anderson, M.; Newbold, D.; Brook, N.

    2012-04-01

    Landslides are an increasing problem in developing countries. Multiple landslides can be triggered by heavy rainfall resulting in loss of life, homes and critical infrastructure. Through computer simulation of individual slopes it is possible to predict the causes, timing and magnitude of landslides and estimate the potential physical impact. Geographical scientists at the University of Bristol have developed software that integrates a physically-based slope hydrology and stability model (CHASM) with an econometric model (QUESTA) in order to predict landslide risk over time. These models allow multiple scenarios to be evaluated for each slope, accounting for data uncertainties, different engineering interventions, risk management approaches and rainfall patterns. Individual scenarios can be computationally intensive, however each scenario is independent and so multiple scenarios can be executed in parallel. As more simulations are carried out the overhead involved in managing input and output data becomes significant. This is a greater problem if multiple slopes are considered concurrently, as is required both for landslide research and for effective disaster planning at national levels. There are two critical factors in this context: generated data volumes can be in the order of tens of terabytes, and greater numbers of simulations result in long total runtimes. Users of such models, in both the research community and in developing countries, need to develop a means for handling the generation and submission of landside modelling experiments, and the storage and analysis of the resulting datasets. Additionally, governments in developing countries typically lack the necessary computing resources and infrastructure. Consequently, knowledge that could be gained by aggregating simulation results from many different scenarios across many different slopes remains hidden within the data. To address these data and workload management issues, University of Bristol particle
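
    Because each slope-stability scenario is independent, the workload management problem described above reduces, at its core, to farming independent simulations out to whatever computing resource is available and collecting their outputs. A minimal sketch of that pattern with Python's multiprocessing is shown below; the scenario parameters and the stand-in "simulation" are invented for illustration and do not represent CHASM or QUESTA.

```python
from multiprocessing import Pool
from itertools import product

def run_scenario(args):
    """Stand-in for one slope simulation run: returns a fake factor of safety."""
    slope_id, rainfall_mm, intervention = args
    # A real run would launch the hydrology/stability model here and write its
    # outputs to shared storage; this placeholder just derives a number from the inputs.
    factor_of_safety = 1.5 - 0.002 * rainfall_mm + (0.3 if intervention == "drainage" else 0.0)
    return {"slope": slope_id, "rainfall": rainfall_mm,
            "intervention": intervention, "fos": round(factor_of_safety, 3)}

if __name__ == "__main__":
    slopes = ["slope-A", "slope-B", "slope-C"]
    rainfalls = [50, 100, 150, 200]            # mm/day scenarios
    interventions = ["none", "drainage"]
    scenarios = list(product(slopes, rainfalls, interventions))

    with Pool() as pool:                        # scenarios are independent -> trivially parallel
        results = pool.map(run_scenario, scenarios)

    at_risk = [r for r in results if r["fos"] < 1.2]
    print(f"{len(results)} scenarios run, {len(at_risk)} below the safety threshold")
```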

  7. Contributions to Desktop Grid Computing : From High Throughput Computing to Data-Intensive Sciences on Hybrid Distributed Computing Infrastructures

    OpenAIRE

    Fedak, Gilles

    2015-01-01

    Since the mid 90’s, Desktop Grid Computing - i.e the idea of using a large number of remote PCs distributed on the Internet to execute large parallel applications - has proved to be an efficient paradigm to provide a large computational power at the fraction of the cost of a dedicated computing infrastructure.This document presents my contributions over the last decade to broaden the scope of Desktop Grid Computing. My research has followed three different directions. The first direction has ...

  8. Design of a computer software for calculation of required barrier against radiation at the diagnostic x-ray units

    Directory of Open Access Journals (Sweden)

    S.A. Rahimi

    2005-01-01

    Full Text Available Background and purpose: Installation of protective barriers against diagnostic x-rays is generally done based on the recommendations of NCRP 49. There are analytic methods for designing protective barriers; however, they lack sufficient efficiency. According to the NCRP 49 reports, designing a mechanical protective barrier against the primary x-ray radiation differs from designing one against radiation of a different quality; therefore, the protective barrier for each type of radiation is calculated separately. In this study, a computer software was designed to calculate the needed barrier with high accuracy. Materials and methods: Calculation of the required protective barrier is demanding, particularly when two or more generators are in use at a diagnostic x-ray unit, when the installed diagnostic equipment does not have proper room space, or when other changes in parameters make manual calculation time-consuming or impossible. For proper determination of the thickness of the protective barrier, relevant information such as radiation attenuation curves, dose limits, etc. should be entered. The program runs under Windows and is designed so that the operator works easily; the flexibility of the program is acceptable and its accuracy and sensitivity are high. Results: The results of this program indicate that, in most cases, x-ray units do not use the required protective barrier, while sometimes the shielding is more than what is required, which lacks technical standards and cost effectiveness. When the application index differs from zero, the thickness from the NCRP 49 calculation is about 20% less than that calculated by the method of this study. When the application index is equal to zero (that is, the only situation where the secondary barrier is considered), the thickness of the required lead barrier is about 15% less, and that of the concrete barrier calculated in this project is about 8% less, than that calculated by the McGuire method. Conclusion: In this study proper
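
    For orientation, the classical barrier calculation that such software automates follows the NCRP pattern: compute the required transmission factor from the dose limit, workload, use factor, occupancy and distance, then convert it to a material thickness via tenth-value layers. The sketch below is a generic, simplified illustration of that calculation, not the program described in the article; the workload, distance, and TVL values are placeholder numbers, and real designs must use the published attenuation data.

```python
import math

def required_transmission(P_weekly, d_m, W, U, T):
    """Required barrier transmission B = P * d^2 / (W * U * T)  (NCRP-style)."""
    return P_weekly * d_m**2 / (W * U * T)

def barrier_thickness(B, tvl_mm):
    """Thickness from the number of tenth-value layers needed: n = log10(1/B)."""
    n = max(0.0, math.log10(1.0 / B))
    return n * tvl_mm

# Placeholder design inputs for a single x-ray tube (illustrative only).
P = 0.02       # design dose limit behind the barrier, mGy/week (controlled area assumed)
d = 3.0        # distance from tube to occupied point, m
W = 1000.0     # workload, mGy/week at 1 m (assumed)
U = 0.25       # use factor for this wall
T = 1.0        # occupancy factor

B = required_transmission(P, d, W, U, T)
print(f"required transmission B = {B:.3e}")
print(f"lead thickness     ~ {barrier_thickness(B, tvl_mm=0.9):.1f} mm  (assumed TVL 0.9 mm Pb)")
print(f"concrete thickness ~ {barrier_thickness(B, tvl_mm=90.0):.0f} mm  (assumed TVL 90 mm concrete)")
```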

  9. Computer code to predict the heat of explosion of high energy materials

    Energy Technology Data Exchange (ETDEWEB)

    Muthurajan, H. [Armament Research and Development Establishment, Pashan, Pune 411021 (India)], E-mail: muthurajan_h@rediffmail.com; Sivabalan, R.; Pon Saravanan, N.; Talawar, M.B. [High Energy Materials Research Laboratory, Sutarwadi, Pune 411 021 (India)

    2009-01-30

    The computational approach to the thermochemical changes involved in the process of explosion of high energy materials (HEMs) vis-a-vis their molecular structure aids HEMs chemists/engineers in predicting important thermodynamic parameters such as the heat of explosion of the HEMs. Such computer-aided design will be useful in predicting the performance of a given HEM as well as in conceiving futuristic high energy molecules that have significant potential in the field of explosives and propellants. The software code LOTUSES, developed by the authors, predicts various characteristics of HEMs such as explosion products including balanced explosion reactions, density of HEMs, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (ΔHe) without any experimental data for different HEMs, and the results are comparable with experimental results reported in the literature. The new algorithm, which does not require any complex input parameter, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. The linear regression analysis of all data points yields the correlation coefficient R² = 0.9721 with the linear equation y = 0.9262x + 101.45. The correlation coefficient value of 0.9721 reveals that the computed values are in good agreement with experimental values and useful for rapid hazard assessment of energetic materials.
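
    The record above reports a computed-vs-experimental linear fit (y = 0.9262x + 101.45, R² = 0.9721). As a minimal illustration of how such a regression check can be carried out, the sketch below fits a line to placeholder heat-of-explosion values and reports R²; the data, units and function names are assumptions for illustration only, not values or code from LOTUSES.

```python
import numpy as np

def regression_check(computed, experimental):
    """Least-squares fit experimental = a*computed + b and report R^2.

    Mirrors the kind of computed-vs-experimental comparison reported in the
    record; the numbers used below are placeholders, not values from the paper.
    """
    computed = np.asarray(computed, dtype=float)
    experimental = np.asarray(experimental, dtype=float)
    a, b = np.polyfit(computed, experimental, 1)      # slope, intercept
    predicted = a * computed + b
    ss_res = np.sum((experimental - predicted) ** 2)
    ss_tot = np.sum((experimental - experimental.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return a, b, r2

if __name__ == "__main__":
    # Hypothetical heats of explosion in kJ/kg (computed vs. measured).
    computed = [4200.0, 4800.0, 5300.0, 5900.0, 6300.0]
    experimental = [4010.0, 4570.0, 5050.0, 5560.0, 5980.0]
    a, b, r2 = regression_check(computed, experimental)
    print(f"y = {a:.4f}x + {b:.2f}, R^2 = {r2:.4f}")
```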

  10. Computer code to predict the heat of explosion of high energy materials.

    Science.gov (United States)

    Muthurajan, H; Sivabalan, R; Pon Saravanan, N; Talawar, M B

    2009-01-30

    The computational approach to the thermochemical changes involved in the process of explosion of high energy materials (HEMs) vis-à-vis their molecular structure aids HEMs chemists/engineers in predicting important thermodynamic parameters such as the heat of explosion of the HEMs. Such computer-aided design will be useful in predicting the performance of a given HEM as well as in conceiving futuristic high energy molecules that have significant potential in the field of explosives and propellants. The software code LOTUSES, developed by the authors, predicts various characteristics of HEMs such as explosion products including balanced explosion reactions, density of HEMs, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (DeltaH(e)) without any experimental data for different HEMs, and the results are comparable with experimental results reported in the literature. The new algorithm, which does not require any complex input parameter, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. The linear regression analysis of all data points yields the correlation coefficient R(2)=0.9721 with the linear equation y=0.9262x+101.45. The correlation coefficient value of 0.9721 reveals that the computed values are in good agreement with experimental values and useful for rapid hazard assessment of energetic materials.

  11. HiFi-MBQC High Fidelity Measurement-Based Quantum Computing using Superconducting Detectors

    Science.gov (United States)

    2016-04-04

    HiFi-MBQC: High Fidelity Measurement-Based Quantum Computing using Superconducting Detectors (report AFRL-AFOSR-UK-TR-2016-0006; contract FA8655-11-1-3004; PI Philip Walther). The project exploits the conceptual framework of measurement-based quantum computation, which enables a client to delegate a computation to a quantum ...

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  14. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    Science.gov (United States)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
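
    The key idea above (project the damped normal equations onto a small Krylov subspace and reuse that subspace for every damping parameter) can be sketched in a few lines. The sketch below is an illustrative NumPy version under simplifying assumptions (dense Jacobian, fixed subspace size k not larger than the parameter count, no trust-region logic); it is not the Julia/MADS implementation referenced in the record.

```python
import numpy as np

def krylov_basis(JtJ, Jtr, k):
    """Orthonormal basis of the Krylov space span{Jtr, JtJ@Jtr, ...} (k <= n)."""
    n = Jtr.size
    Q = np.zeros((n, k))
    Q[:, 0] = Jtr / np.linalg.norm(Jtr)
    for j in range(1, k):
        w = JtJ @ Q[:, j - 1]
        w -= Q[:, :j] @ (Q[:, :j].T @ w)   # Gram-Schmidt orthogonalisation
        Q[:, j] = w / np.linalg.norm(w)
    return Q

def lm_steps_recycled(J, r, dampings, k=20):
    """Approximate Levenberg-Marquardt steps for several damping parameters,
    solving one small k x k projected system per damping value instead of the
    full n x n normal equations. Illustrative sketch only."""
    JtJ = J.T @ J
    Jtr = J.T @ r
    Q = krylov_basis(JtJ, Jtr, k)          # built once ...
    A = Q.T @ JtJ @ Q                      # ... and recycled below
    b = Q.T @ Jtr
    steps = []
    for lam in dampings:
        y = np.linalg.solve(A + lam * np.eye(k), -b)
        steps.append(Q @ y)                # lift the reduced solution back to R^n
    return steps
```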

  15. Comparison of High-Fidelity Computational Tools for Wing Design of a Distributed Electric Propulsion Aircraft

    Science.gov (United States)

    Deere, Karen A.; Viken, Sally A.; Carter, Melissa B.; Viken, Jeffrey K.; Derlaga, Joseph M.; Stoll, Alex M.

    2017-01-01

    A variety of tools, from fundamental to high order, have been used to better understand applications of distributed electric propulsion to aid the wing and propulsion system design of the Leading Edge Asynchronous Propulsion Technology (LEAPTech) project and the X-57 Maxwell airplane. Three high-fidelity, Navier-Stokes computational fluid dynamics codes used during the project with results presented here are FUN3D, STAR-CCM+, and OVERFLOW. These codes employ various turbulence models to predict fully turbulent and transitional flow. Results from these codes are compared for two distributed electric propulsion configurations: the wing tested at NASA Armstrong on the Hybrid-Electric Integrated Systems Testbed truck, and the wing designed for the X-57 Maxwell airplane. Results from these computational tools for the high-lift wing tested on the Hybrid-Electric Integrated Systems Testbed truck and the X-57 high-lift wing presented compare reasonably well. The goal of the X-57 wing and distributed electric propulsion system design achieving or exceeding the required C_L = 3.95 for stall speed was confirmed with all of the computational codes.

  16. High Performance Input/Output Systems for High Performance Computing and Four-Dimensional Data Assimilation

    Science.gov (United States)

    Fox, Geoffrey C.; Ou, Chao-Wei

    1997-01-01

    The approach of this task was to apply leading parallel computing research to a number of existing techniques for assimilation, and to extract parameters indicating where and how input/output limits computational performance. Detailed knowledge of the application problems was used for: 1. developing a parallel input/output system specifically for this application; 2. extracting the important input/output characteristics of data assimilation problems; and 3. building these characteristics as parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.

  17. COMPUTER APPROACHES TO WHEAT HIGH-THROUGHPUT PHENOTYPING

    Directory of Open Access Journals (Sweden)

    Afonnikov D.

    2012-08-01

    Full Text Available The growing need for rapid and accurate approaches to large-scale assessment of phenotypic characters in plants becomes more and more obvious in studies looking into relationships between genotype and phenotype. This need is due to the advent of high-throughput methods for the analysis of genomes. Nowadays, any genetic experiment involves data on thousands or tens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch, the ruler) are of little use on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which warrants much more rapid data acquisition, higher accuracy in the assessment of phenotypic features, measurement of new parameters of these features and the exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integrating genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.

  18. Building professional identity as computer science teachers: Supporting high school computer science teachers through reflection and community building

    Science.gov (United States)

    Ni, Lijun

    Computing education requires qualified computing teachers. The reality is that too few high schools in the U.S. have computing/computer science teachers with formal computer science (CS) training, and many schools do not have a CS teacher at all. Moreover, teacher retention rates are often low, and beginning-teacher attrition is particularly high in secondary education. Therefore, in addition to the need for preparing new CS teachers, we also need to support those teachers we have recruited and trained to become better teachers and continue to teach CS. Teacher education literature, especially teacher identity theory, suggests that a strong sense of teacher identity is a major indicator or feature of committed, qualified teachers. However, under the current educational system in the U.S., it can be challenging to establish teacher identity for high school (HS) CS teachers, e.g., due to a lack of teacher certification for CS. This thesis work centers upon understanding the sense of identity HS CS teachers hold and exploring ways of supporting their identity development through a professional development program: the Disciplinary Commons for Computing Educators (DCCE). DCCE has a major focus on promoting reflection on teaching practice and community building. With scaffolded activities such as course portfolio creation, peer review and peer observation among a group of HS CS teachers, it offers opportunities for CS teachers to explicitly reflect on and narrate their teaching, which is a central process of identity building through their participation within the community. In this thesis research, I explore the development of CS teacher identity through professional development programs. I first conducted an interview study with local HS CS teachers to understand their sense of identity and factors influencing their identity formation. I designed and enacted the professional program (DCCE) and conducted case studies with DCCE participants to understand how their

  19. Computational analysis of high-throughput flow cytometry data

    Science.gov (United States)

    Robinson, J Paul; Rajwa, Bartek; Patsekin, Valery; Davisson, Vincent Jo

    2015-01-01

    Introduction: Flow cytometry has been around for over 40 years, but only recently has the opportunity arisen to move into the high-throughput domain. The technology is now available and is highly competitive with imaging tools under the right conditions. Flow cytometry has, however, been a technology focused on its unique ability to study single cells, and appropriate analytical tools are readily available to handle this traditional role of the technology. Areas covered: Expansion of flow cytometry to a high-throughput (HT) and high-content technology requires advances in both hardware and analytical tools. The historical perspective of flow cytometry operation is discussed, as well as how the field has changed and what the key changes have been. The authors provide a background and compelling arguments for moving toward HT flow, where there are many innovative opportunities. With alternative approaches now available for flow cytometry, there will be a considerable number of new applications. These opportunities show strong capability for drug screening and functional studies with cells in suspension. Expert opinion: There is no doubt that HT flow is a rich technology awaiting acceptance by the pharmaceutical community. It can provide a powerful phenotypic analytical toolset that has the capacity to change many current approaches to HT screening. The previous restrictions on the technology, based on its reduced capacity for sample throughput, are no longer a major issue. Overcoming this barrier has transformed a mature technology into one that can focus on systems biology questions not previously considered possible. PMID:22708834

  20. Diagnostic value of high resolutional computed tomography of spine

    Energy Technology Data Exchange (ETDEWEB)

    Yang, S. M.; Im, S. K.; Sohn, M. H.; Lim, K. Y.; Kim, J. K.; Choi, K. C. [Jeonbug National University College of Medicine, Seoul (Korea, Republic of)

    1984-03-15

    Non-enhanced high resolution computed tomography provides clear visualization of soft tissue in the canal and of the bony details of the spine, particularly the lumbar spine. We reviewed 70 spine CT examinations performed with a GE CT/T 8800 scanner during the period from Dec. 1982 to Sep. 1983 at Jeonbug National University Hospital. The results were as follows: 1. The cases comprised 55 males and 15 females; ages ranged from 17 to 67 years; the sites were 11 cervical, 5 thoracic and 54 lumbosacral spines. 2. The CT diagnoses were 44 cases of lumbar disc herniation, 7 cases of degenerative disease, 3 cases of spine fracture, and one case each of cord tumor, metastatic tumor, spontaneous epidural hemorrhage, epidural abscess, spinal tuberculosis, and meningocele with diastematomyelia. 3. The sites of herniated nucleus pulposus were the L4-5 interspace in 34 cases (59.6%) and the L5-S1 interspace in 20 cases (35.1%). Thirteen cases (29.5%) of lumbar disc herniation disclosed multiple lesions. The locations of herniation were central in 28 cases (49.1%), right-central in 12 cases (21.2%), left-central in 11 cases (19.2%) and far lateral in 6 cases (10.5%). 4. CT findings of herniated nucleus pulposus were as follows: focal protrusion of the posterior disc margin and obliteration of the anterior epidural fat in all cases, dural sac indentation in 26 cases (45.6%), soft tissue mass in the epidural fat in 21 cases (36.8%), and displacement or compression of the nerve root sheath in 12 cases (21%). 5. Multiplanar reformatted images and Blink mode provided more effective evaluation of the exact level and longitudinal extent of lesions such as obscure disc herniation, spine fracture, cord tumor and epidural abscess. 6. Non-enhanced and enhanced high resolution computed tomography were useful in demonstrating compression or displacement of the spinal cord and nerve roots, and in examining congenital anomalies such as meningocele and primary or metastatic spinal lesions.

  1. Numerical Computation of High Dimensional Solitons Via Drboux Transformation

    Institute of Scientific and Technical Information of China (English)

    Zixiang Zhou

    1997-01-01

    The Darboux transformation gives explicit soliton solutions of nonlinear partial differential equations. Using numerical computation in each step of constructing the Darboux transformation, one can obtain the graphs of the solitons in practice. In n dimensions (n≥3), this method greatly increases the speed and reduces the memory usage of the computation compared to software for algebraic computation. A technical problem concerning floating-point overflow is discussed.

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  3. Single High Fidelity Geometric Data Sets for LCM - Model Requirements

    Science.gov (United States)

    2006-11-01

    ... material name (for example, an HY80 steel) plus additional material requirements (heat treatment, etc.); creation of a more detailed description of the data ... [Figure 2.22: Typical Stress-Strain Curve for Steel (adapted from Ref 59)] ... structures are steel, aluminum and composites. The structural components that make up a global FEA model drive the fidelity of the model. For example

  4. Computationally efficient method for Fourier transform of highly chirped pulses for laser and parametric amplifier modeling.

    Science.gov (United States)

    Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail

    2016-11-14

    We developed an improved approach to calculate the Fourier transform of signals with arbitrarily large quadratic phase which can be efficiently implemented in numerical simulations utilizing the fast Fourier transform. The proposed algorithm significantly reduces the computational cost of the Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform-limited pulses, thereby reducing the required grid size roughly by the pulse stretching factor. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.
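
    The motivation above is that a strongly stretched pulse cannot be gridded economically in the time domain, while its quadratic phase is known analytically. The sketch below illustrates that idea in a simplified form: the chirp is applied as an analytic quadratic spectral phase to the spectrum of a near transform-limited pulse kept on a modest grid. This is an illustration of the underlying principle only, with assumed pulse and dispersion parameters; it does not reproduce the paper's two-transform splitting algorithm.

```python
import numpy as np

# Keep the (nearly transform-limited) pulse on a modest grid and apply the
# known quadratic spectral phase analytically in the frequency domain, instead
# of sampling the fully stretched pulse on a huge time grid.

N = 4096                      # modest grid, enough for the short pulse
T = 2e-12                     # time window [s]
t = np.linspace(-T / 2, T / 2, N, endpoint=False)
dt = t[1] - t[0]

tau = 30e-15                  # 30 fs transform-limited Gaussian pulse (assumed)
field_tl = np.exp(-(t / tau) ** 2)

# Spectrum of the short pulse (cheap FFT on the small grid).
omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)
spectrum = np.fft.fft(field_tl)

# Apply a large group-delay dispersion (GDD) as a quadratic spectral phase.
gdd = 5e-24                   # [s^2]; stretches the pulse by orders of magnitude
spectrum_chirped = spectrum * np.exp(-0.5j * gdd * omega ** 2)

# spectrum_chirped now represents the stretched pulse in the frequency domain
# without ever resolving its huge time-domain extent on a fine grid.
print("spectral intensity unchanged:",
      np.allclose(np.abs(spectrum_chirped), np.abs(spectrum)))
```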

  5. An Embedded System for applying High Performance Computing in Educational Learning Activity

    OpenAIRE

    Irene Erlyn Wina Rachmawan; Nurul Fahmi; Edi Wahyu Widodo; Samsul Huda; M. Unggul Pamenang; M. Choirur Roziqin; Andri Permana W.; Stritusta Sukaridhoto; Dadet Pramadihanto

    2016-01-01

    HPC (High Performance Computing) has become more popular in the last few years. With the benefit of high computational power, HPC has an impact on industry, scientific research and educational activities. Implementing HPC in a university curriculum can consume a lot of resources, because well-known HPC systems are built from personal computers or servers; using PCs as the practical modules requires considerable resources and space. This paper presents an innovative high performance computing c...

  6. The Open Cloud Testbed: A Wide Area Testbed for Cloud Computing Utilizing High Performance Network Services

    CERN Document Server

    Grossman, Robert; Sabala, Michal; Bennet, Collin; Seidman, Jonathan; Mambratti, Joe

    2009-01-01

    Recently, a number of cloud platforms and services have been developed for data intensive computing, including Hadoop, Sector, CloudStore (formerly KFS), HBase, and Thrift. In order to benchmark the performance of these systems, to investigate their interoperability, and to experiment with new services based on flexible compute node and network provisioning capabilities, we have designed and implemented a large scale testbed called the Open Cloud Testbed (OCT). Currently the OCT has 120 nodes in four data centers: Baltimore, Chicago (two locations), and San Diego. In contrast to other cloud testbeds, which are in small geographic areas and which are based on commodity Internet services, the OCT is a wide area testbed and the four data centers are connected with a high performance 10Gb/s network, based on a foundation of dedicated lightpaths. This testbed can address the requirements of extremely large data streams that challenge other types of distributed infrastructure. We have also developed several utiliti...

  7. Computational aspects of hot-wire identification of thermal conductivity and diffusivity under high temperature

    Science.gov (United States)

    Vala, Jiří; Jarošová, Petra

    2016-07-01

    Development of advanced materials resistant to high temperature, needed namely for the design of heat storage for low-energy and passive buildings, requires simple, inexpensive and reliable methods for the identification of their temperature-sensitive thermal conductivity and diffusivity, covering both a well-advised experimental setting and the implementation of robust and effective computational algorithms. Special geometrical configurations offer the possibility of quasi-analytical evaluation of temperature development for direct problems, whereas inverse problems of simultaneous evaluation of thermal conductivity and diffusivity must be handled carefully, using some least-squares (minimum variance) arguments. This paper demonstrates the proper mathematical and computational approach to such a model problem, thanks to the radial symmetry of hot-wire measurements, including its numerical implementation.
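
    As a minimal illustration of the least-squares identification mentioned above, the sketch below fits the classical infinite line-source (hot-wire) solution ΔT(r,t) = q/(4πλ) · E1(r²/(4at)) to a synthetic temperature record to recover the conductivity λ and diffusivity a. All numerical values (heating power, probe radius, noise level) are assumptions for the example; this is not the authors' quasi-analytical scheme.

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import curve_fit

q = 20.0          # heating power per unit wire length [W/m] (assumed known)
r = 1.0e-3        # radial distance of the temperature probe [m] (assumed)

def line_source(t, lam, a):
    """Classical infinite line-source temperature rise dT(r, t)."""
    return q / (4.0 * np.pi * lam) * exp1(r ** 2 / (4.0 * a * t))

# Synthetic "measurement" generated from assumed true values plus noise.
lam_true, a_true = 0.8, 4.0e-7
t = np.linspace(5.0, 600.0, 200)
rng = np.random.default_rng(0)
dT_meas = line_source(t, lam_true, a_true) + rng.normal(0.0, 0.02, t.size)

# Nonlinear least-squares identification of conductivity and diffusivity.
(lam_fit, a_fit), _ = curve_fit(line_source, t, dT_meas, p0=(1.0, 1.0e-6))
print(f"lambda = {lam_fit:.3f} W/(m K), a = {a_fit:.2e} m^2/s")
```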

  8. Computational Design of Metal-Organic Frameworks with High Methane Deliverable Capacity

    Science.gov (United States)

    Bao, Yi; Martin, Richard; Simon, Cory; Haranczyk, Maciej; Smit, Berend; Deem, Michael; Deem Team; Haranczyk Team; Smit Team

    Metal-organic frameworks (MOFs) are a rapidly emerging class of nanoporous materials with largely tunable chemistry and diverse applications in gas storage, gas purification, catalysis, etc. Intensive efforts are being made to develop new MOFs with desirable properties both experimentally and computationally in the past decades. To guide experimental synthesis with limited throughput, we develop a computational methodology to explore MOFs with high methane deliverable capacity. This de novo design procedure applies known chemical reactions, considers synthesizability and geometric requirements of organic linkers, and evolves a population of MOFs with desirable property efficiently. We identify about 500 MOFs with higher deliverable capacity than MOF-5 in 10 networks. We also investigate the relationship between deliverable capacity and internal surface area of MOFs. This methodology can be extended to MOFs with multiple types of linkers and multiple SBUs. DE-FG02- 12ER16362.

  9. A high-performance reconfigurable computing solution for Peptide mass fingerprinting.

    Science.gov (United States)

    Coca, Daniel; Bogdan, Istvan; Beynon, Robert J

    2010-01-01

    High-throughput, MS-based proteomics studies are generating very large volumes of biologically relevant data. Given the central role of proteomics in emerging fields such as systems/synthetic biology and biomarker discovery, the amount of proteomic data is expected to grow at unprecedented rates over the next decades. At the moment, there is a pressing need for high-performance computational solutions to accelerate the analysis and interpretation of this data. Performance gains achieved by grid computing in this area are not spectacular, especially given the significant power consumption, maintenance costs and floor space required by large server farms. This paper introduces an alternative, cost-effective high-performance bioinformatics solution for peptide mass fingerprinting based on Field Programmable Gate Array (FPGA) devices. At the heart of this approach stands the concept of mapping algorithms onto custom digital hardware that can be programmed to run on FPGAs. Specifically in this case, the entire computational flow associated with peptide mass fingerprinting, namely raw mass spectra processing and database searching, has been mapped onto custom hardware processors that are programmed to run on a multi-FPGA system coupled with a conventional PC server. The system achieves an almost 2,000-fold speed-up when compared with a conventional implementation of the algorithms in software running on a 3.06 GHz Xeon PC server.

  10. Analysis of the computational requirements of a pulse-doppler radar signal processor

    CSIR Research Space (South Africa)

    Broich, R

    2012-05-01

    Full Text Available ... architectures [1]. These simplifications are often degrading to algorithmic performance and thus to the entire radar system. In this paper the different computational operations that are used in pulse-Doppler radar signal processing are explored, in order ... [Fig. 1: Radar signal processor (RSP) flow of operations] ... purpose computer architectures [3]. An abstract machine, in which only memory reads, writes, additions and multiplications are considered to be significant operations ...

  11. High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away

    Science.gov (United States)

    Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.

    2012-09-01

    By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) Calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) Calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons-learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data

  12. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Full Text Available Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community's reliance on established scientific packages. As a consequence, programmers of high-performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java-to-C Interface (JCI) tool, which provides application programmers wishing to use Java with immediate access to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed-language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool complements other ongoing projects such as IBM's High-Performance Compiler for Java (HPCJ) and IceT's metacomputing environment.

  13. High-definition three-dimensional television disparity map computation

    Science.gov (United States)

    Chammem, Afef; Mitrea, Mihai; Prêteux, Françoise

    2012-10-01

    By reconsidering some approaches inherited from two-dimensional video and adapting them to stereoscopic video content and to the peculiarities of the human visual system, a new disparity map is designed. First, the inner relation between the left and the right views is modeled by weights discriminating between the horizontal and vertical disparities. Second, the block matching operation is achieved by considering a visually related measure (normalized cross-correlation) instead of the traditional pixel differences (mean squared error or sum of absolute differences). The advanced three-dimensional video new three-step search (3DV-NTSS) disparity map is benchmarked against two state-of-the-art algorithms, namely NTSS and full-search MPEG (FS-MPEG), by successively considering two corpora. The first corpus was organized during the 3DLive French national project and comprises 20 min of stereoscopic video sequences. The second one, of similar size, is provided by the MPEG community. The experimental results demonstrate the effectiveness of 3DV-NTSS in both reconstructed image quality (average gains between 3% and 7% in both PSNR and structural similarity, with a single exception) and computational cost (the number of search operations reduced by average factors between 1.3 and 13). The 3DV-NTSS was finally validated by designing a watermarking method for high definition 3-D TV content protection.
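
    The record above relies on block matching scored by normalized cross-correlation (NCC). The sketch below shows a generic NCC-based horizontal block-matching disparity estimator in NumPy; it uses an exhaustive search for clarity (the 3DV-NTSS algorithm itself visits far fewer candidates), and the block size and disparity range are arbitrary assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def disparity_ncc(left, right, block=8, max_disp=32):
    """Horizontal block matching with NCC (full search for clarity only)."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block]
            best_d, best_score = 0, -1.0
            # Candidate blocks are shifted leftward in the right view.
            for d in range(0, min(max_disp, x) + 1):
                cand = right[y:y + block, x - d:x - d + block]
                score = ncc(ref, cand)
                if score > best_score:
                    best_score, best_d = score, d
            disp[by, bx] = best_d
    return disp
```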

  14. Pulmonary high-resolution computed tomography findings in nephropathia epidemica

    Energy Technology Data Exchange (ETDEWEB)

    Paakkala, Antti, E-mail: antti.paakkala@pshp.fi [Medical Imaging Centre, Tampere University Hospital, 33521 Tampere (Finland); Jaervenpaeae, Ritva, E-mail: ritva.jarvenpaa@pshp.fi [Medical Imaging Centre, Tampere University Hospital, 33521 Tampere (Finland); Maekelae, Satu, E-mail: satu.marjo.makela@uta.fi [Department of Internal Medicine, Tampere University Hospital, 33521 Tampere (Finland); Medical School, University of Tampere, 33521 Tampere (Finland); Huhtala, Heini, E-mail: heini.huhtala@uta.fi [School of Public Health, University of Tampere, 33521 Tampere (Finland); Mustonen, Jukka, E-mail: jukka.mustonen@uta.fi [Department of Internal Medicine, Tampere University Hospital, 33521 Tampere (Finland); Medical School, University of Tampere, 33521 Tampere (Finland)

    2012-08-15

    Purpose: To evaluate lung high-resolution computed tomography (HRCT) findings in patients with Puumala hantavirus-induced nephropathia epidemica (NE), and to determine if these findings correspond to chest radiograph findings. Materials and methods: HRCT findings and clinical course were studied in 13 hospital-treated NE patients. Chest radiograph findings were studied in 12 of them. Results: Twelve patients (92%) showed lung parenchymal abnormalities in HRCT, while only 8 had changes in their chest radiography. Atelectasis, pleural effusion, intralobular and interlobular septal thickening were the most common HRCT findings. Ground-glass opacification (GGO) was seen in 4 and hilar and mediastinal lymphadenopathy in 3 patients. Atelectasis and pleural effusion were also mostly seen in chest radiographs, other findings only in HRCT. Conclusion: Almost every NE patient showed lung parenchymal abnormalities in HRCT. The most common findings of lung involvement in NE can be defined as accumulation of pleural fluid and atelectasis and intralobular and interlobular septal thickening, most profusely in the lower parts of the lung. As a novel finding, lymphadenopathy was seen in a minority, probably related to capillary leakage and overall fluid overload. Pleural effusion is not the prominent feature in other viral pneumonias, whereas intralobular and interlobular septal thickening are characteristic of other viral pulmonary infections as well. Lung parenchymal findings in HRCT can thus be taken not to be disease-specific in NE and HRCT is useful only for scientific purposes.

  15. High Speed Computational Ghost Imaging via Spatial Sweeping

    Science.gov (United States)

    Wang, Yuwang; Liu, Yang; Suo, Jinli; Situ, Guohai; Qiao, Chang; Dai, Qionghai

    2017-01-01

    Computational ghost imaging (CGI) achieves single-pixel imaging by using a spatial light modulator (SLM) to generate structured illuminations for spatially resolved information encoding. The imaging speed of CGI is limited by the modulation frequency of available SLMs, which holds back its practical applications. This paper proposes to bypass this limitation by trading off the SLM's redundant spatial resolution for multiplication of the modulation frequency. Specifically, a pair of galvanic mirrors sweeping across the high resolution SLM multiplies the modulation frequency within the spatial-resolution gap between the SLM and the final reconstruction. A proof-of-principle setup with two mid-range galvanic mirrors achieves ghost imaging as fast as 42 Hz at 80 × 80-pixel resolution, 5 times faster than the state of the art, and holds potential for a further order-of-magnitude multiplication through hardware upgrades. Our approach brings a significant improvement in the imaging speed of ghost imaging and pushes ghost imaging towards practical applications. PMID:28358010
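
    For readers unfamiliar with how a CGI image is recovered from single-pixel data, the sketch below shows the standard correlation reconstruction, G(x,y) = ⟨(B − ⟨B⟩)·P(x,y)⟩, using random binary patterns as a stand-in for the structured illumination. The object, pattern count and resolution are illustrative assumptions, not the 42 Hz galvanic-mirror setup of the record.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32                                   # reconstruction resolution (n x n)
m = 8000                                 # number of illumination patterns

obj = np.zeros((n, n))
obj[8:24, 12:20] = 1.0                   # simple synthetic object

patterns = rng.integers(0, 2, size=(m, n, n)).astype(float)
bucket = (patterns * obj).sum(axis=(1, 2))          # single-pixel measurements

# Correlation reconstruction: G(x, y) = <(B - <B>) * P(x, y)>
recon = ((bucket - bucket.mean())[:, None, None] * patterns).mean(axis=0)
print("peak inside object region:", recon[8:24, 12:20].mean() > recon.mean())
```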

  16. High Performance Embedded Computing Software Initiative (HPEC-SI) Program Facilitation of VSIPL++ Standardization

    Science.gov (United States)

    2008-04-01

    ... parallel VSIPL++, and other parallel computing systems. The cluster is a fifty-five-node Beowulf-style cluster with 116 compute processors of varying types ... consoles, which GTRI inserted into the parallel software testbed. A computer that is used as a compute node in a Beowulf-style cluster requires a ... GTRI also participated in technical advisory planning for the HPEC-SI program.

  17. Using a Computer Animation to Teach High School Molecular Biology

    Science.gov (United States)

    Rotbain, Yosi; Marbach-Ad, Gili; Stavy, Ruth

    2008-01-01

    We present an active way to use a computer animation in secondary molecular genetics class. For this purpose we developed an activity booklet that helps students to work interactively with a computer animation which deals with abstract concepts and processes in molecular biology. The achievements of the experimental group were compared with those…

  18. Commodity CPU-GPU System for Low-Cost , High-Performance Computing

    Science.gov (United States)

    Wang, S.; Zhang, S.; Weiss, R. M.; Barnett, G. A.; Yuen, D. A.

    2009-12-01

    We have put together a desktop computer system for under 2.5 K dollars from commodity components that consists of one quad-core CPU (Intel Core 2 Quad Q6600 Kentsfield 2.4GHz) and two high end GPUs (nVidia's GeForce GTX 295 and Tesla C1060). A 1200 watt power supply is required. On this commodity system, we have constructed an easy-to-use hybrid computing environment, in which Message Passing Interface (MPI) is used for managing the workloads, for transferring data among different GPU devices, and for minimizing the need for CPU memory. Test runs using the MAGMA (Matrix Algebra on GPU and Multicore Architectures) library show that the speed-ups for double precision calculations can be greater than 10 (GPU vs. CPU) and are bigger (> 20) for single precision calculations. In addition we have enabled the combination of Matlab with CUDA for interactive visualization through MPI, i.e., two GPU devices are used for simulation and one GPU device is used for visualizing the computing results as the simulation proceeds. Our experience with this commodity system has shown that running multiple applications on one GPU device or running one application across multiple GPU devices can be done as conveniently as on CPUs. With NVIDIA CEO Jen-Hsun Huang's claim that over the next 6 years GPU processing power will increase by 570x compared to 3x for CPUs, future low-cost commodity computers such as ours may be a remedy for the long wait queues of the world's supercomputers, especially for small- and mid-scale computation. Our goal here is to explore the limits and capabilities of this emerging technology and to get ourselves ready to run large-scale simulations on the next generation of computing environment, which we believe will hybridize CPU and GPU architectures.
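
    A minimal sketch of the kind of MPI-managed multi-GPU layout described above is given below: each MPI rank is pinned to one GPU and processes its own slice of the workload, with results gathered on rank 0. The GPUS_PER_NODE value and the environment-variable device selection are assumptions for the sketch, and the per-rank computation is a NumPy placeholder so the example runs even without CUDA; it is not the authors' MAGMA/Matlab setup.

```python
import os
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

GPUS_PER_NODE = 2                                   # assumption for this sketch
os.environ["CUDA_VISIBLE_DEVICES"] = str(rank % GPUS_PER_NODE)

# Split a global problem (here: rows of a matrix) across ranks.
n = 4096
rows = np.array_split(np.arange(n), size)[rank]
local = np.random.default_rng(rank).random((rows.size, n))

local_result = local.sum(axis=1)                    # stand-in for a GPU kernel
gathered = comm.gather(local_result, root=0)

if rank == 0:
    total = np.concatenate(gathered)
    print("assembled result of shape", total.shape)

# Run with, e.g.:  mpirun -np 4 python hybrid_sketch.py
```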

  19. A configurable distributed high-performance computing framework for satellite's TDI-CCD imaging simulation

    Science.gov (United States)

    Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang

    2010-11-01

    This paper presents a configurable distributed high performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to adapt multiple algorithms, and thus helps to decrease the simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite contains four processes: 1) degradation due to the atmosphere, 2) degradation due to the optical system, 3) degradation due to the electronic system of the TDI-CCD and the re-sampling process, 4) data integration. Processes 1) to 3) utilize diverse data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require powerful CPUs. Even using an Intel Xeon X5550 processor, the regular serial processing method takes more than 30 hours for a simulation whose result image size is 1500 * 1462. From a literature study, there is no mature distributed HPC framework in this field. Here we developed a distributed computing framework for TDI-CCD imaging simulation, which is based on WCF[1], uses a Client/Server (C/S) layer and invokes the free CPU resources in the LAN. The server pushes the tasks of processes 1) to 3) to that free computing capacity. Ultimately we achieved HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced simulation time by about 74%. Adding more asymmetric nodes to the computing network decreased the time accordingly. In conclusion, this framework can provide essentially unlimited computation capacity provided that the network and the task management server are affordable, and it is a new HPC solution for TDI-CCD imaging simulation and similar applications.

  20. A ground-up approach to High Throughput Cloud Computing in High-Energy Physics

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00245123; Ganis, Gerardo; Bagnasco, Stefano

    The thesis explores various practical approaches to making existing High Throughput Computing applications common in High Energy Physics work on cloud-provided resources, as well as opening the possibility of running new applications. The work is divided into two parts: firstly we describe the work done at the computing facility hosted by INFN Torino to entirely convert former Grid resources into cloud ones, eventually running Grid use cases on top along with many others in a more flexible way. Integration and conversion problems are duly described. The second part covers the development of solutions for automating the orchestration of cloud workers based on the load of a batch queue and the development of HEP applications based on ROOT's PROOF that can adapt at runtime to a changing number of workers.

  1. Path Not Found: Disparities in Access to Computer Science Courses in California High Schools

    Science.gov (United States)

    Martin, Alexis; McAlear, Frieda; Scott, Allison

    2015-01-01

    "Path Not Found: Disparities in Access to Computer Science Courses in California High Schools" exposes one of the foundational causes of underrepresentation in computing: disparities in access to computer science courses in California's public high schools. This report provides new, detailed data on these disparities by student body…

  2. The computer simulation of automobile use patterns for defining battery requirements for electric cars

    Science.gov (United States)

    Schwartz, H. J.

    1976-01-01

    A Monte Carlo simulation process was used to develop the U.S. daily range requirements for an electric vehicle from probability distributions of trip lengths and frequencies and average annual mileage data. The analysis shows that a car in the U.S. with a practical daily range of 82 miles (132 km) can meet the needs of the owner on 95% of the days of the year, or at all times other than his long vacation trips. Increasing the range of the vehicle beyond this point will not make it more useful to the owner because it will still not provide intercity transportation. A daily range of 82 miles can be provided by an intermediate battery technology level characterized by an energy density of 30 to 50 watt-hours per pound (66 to 110 W-hr/kg). Candidate batteries in this class are nickel-zinc, nickel-iron, and iron-air. The implication of these results for the research goals of far-term battery systems suggests a shift in emphasis toward lower cost and greater life and away from high energy density.
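
    A minimal sketch of the Monte Carlo idea described above is shown below: sample the number of trips per day and each trip's length from assumed distributions, accumulate daily mileage, and read off the range that covers a target fraction of days. Both distributions and all parameter values are placeholders, not the trip-survey data used in the original study.

```python
import numpy as np

rng = np.random.default_rng(42)
n_days = 100_000

# Assumed trip-frequency and trip-length distributions (placeholders).
trips_per_day = rng.poisson(lam=3.0, size=n_days)
daily_miles = np.array([
    rng.lognormal(mean=1.8, sigma=0.9, size=k).sum() if k else 0.0
    for k in trips_per_day
])

coverage = 0.95
range_needed = np.quantile(daily_miles, coverage)
print(f"daily range covering {coverage:.0%} of days: {range_needed:.0f} miles")
print(f"fraction of days within 82 miles: {(daily_miles <= 82).mean():.1%}")
```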

  3. X-ray beam-shaping via deformable mirrors: analytical computation of the required mirror profile

    CERN Document Server

    Spiga, Daniele; Svetina, Cristian; Zangrando, Marco; 10.1016/j.nima.2012.10.117

    2013-01-01

    X-ray mirrors with high focusing performances are in use in both mirror modules for X-ray telescopes and in synchrotron and FEL (Free Electron Laser) beamlines. A degradation of the focus sharpness arises in general from geometrical deformations and surface roughness, the former usually described by geometrical optics and the latter by physical optics. In general, technological developments are aimed at a very tight focusing, which requires the mirror profile to comply with the nominal shape as much as possible and to keep the roughness at a negligible level. However, a deliberate deformation of the mirror can be made to endow the focus with a desired size and distribution, via piezo actuators as done at the EIS-TIMEX beamline of FERMI@Elettra. The resulting profile can be characterized with a Long Trace Profilometer and correlated with the expected optical quality via a wavefront propagation code. However, if the roughness contribution can be neglected, the computation can be performed via a ray-tracin...

  4. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  5. Speeding up ecological and evolutionary computations in R; essentials of high performance computing for biologists.

    Science.gov (United States)

    Visser, Marco D; McMahon, Sean M; Merow, Cory; Dixon, Philip M; Record, Sydne; Jongejans, Eelke

    2015-03-01

    Computation has become a critical component of research in biology. A risk has emerged that computational and programming challenges may limit research scope, depth, and quality. We review various solutions to common computational efficiency problems in ecological and evolutionary research. Our review pulls together material that is currently scattered across many sources and emphasizes those techniques that are especially effective for typical ecological and environmental problems. We demonstrate how straightforward it can be to write efficient code and implement techniques such as profiling or parallel computing. We supply a newly developed R package (aprof) that helps to identify computational bottlenecks in R code and determine whether optimization can be effective. Our review is complemented by a practical set of examples and detailed Supporting Information material (S1-S3 Texts) that demonstrate large improvements in computational speed (ranging from 10.5 times to 14,000 times faster). By improving computational efficiency, biologists can feasibly solve more complex tasks, ask more ambitious questions, and include more sophisticated analyses in their research.

  6. Design requirements, challenges, and solutions for high-temperature falling particle receivers

    Science.gov (United States)

    Christian, Joshua; Ho, Clifford

    2016-05-01

    Falling particle receivers (FPR) utilize small particles as a heat collecting medium within a cavity receiver structure. Previous analyses of FPR systems include computational fluid dynamics (CFD), analytical evaluations, and experiments to determine the feasibility and achievability of this CSP technology. Sandia National Laboratories has fabricated and tested a 1 MWth FPR that consists of a cavity receiver, top hopper, bottom hopper, support structure, particle elevator, flux target, and instrumentation. Design requirements and inherent challenges were addressed to enable continuous operation of flowing particles under high-flux conditions and particle temperatures over 700 °C. Challenges include being able to withstand extremely high temperatures (up to 1200 °C on the walls of the cavity), maintaining particle flow and conveyance, measuring temperatures and mass flow rates, filtering out debris, protecting components from direct flux spillage, and measuring irradiance in the cavity. Each of the major components of the system is separated into design requirements, associated challenges and corresponding solutions. The intent is to provide industry and researchers with lessons learned to avoid pitfalls and technical problems encountered during the development of Sandia's prototype particle receiver system at the National Solar Thermal Test Facility (NSTTF).

  7. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    Science.gov (United States)

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
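
    The cost being attacked above is the point-cloud CGH kernel, in which every hologram pixel accumulates a Fresnel-approximation contribution from every object point. The sketch below shows that kernel in plain NumPy at a deliberately small size; the wavelength, pixel pitch and object geometry are illustrative assumptions, and the per-point loop is exactly the work that the cited multi-GPU cluster distributes.

```python
import numpy as np

wavelength = 532e-9          # [m] (assumed)
pitch = 8e-6                 # hologram pixel pitch [m] (assumed)
H, W = 512, 512              # small hologram for the sketch (paper: 6,400 x 3,072)

rng = np.random.default_rng(0)
n_points = 256
xs = (rng.random(n_points) - 0.5) * W * pitch        # object point positions
ys = (rng.random(n_points) - 0.5) * H * pitch
zs = 0.2 + 0.05 * rng.random(n_points)               # distances from hologram
amps = rng.random(n_points)

xh = (np.arange(W) - W / 2) * pitch
yh = (np.arange(H) - H / 2) * pitch
XH, YH = np.meshgrid(xh, yh)

# Fresnel point-source accumulation; the loop over points is what a GPU
# cluster would parallelize across devices and nodes.
hologram = np.zeros((H, W))
for a, x0, y0, z0 in zip(amps, xs, ys, zs):
    phase = np.pi / (wavelength * z0) * ((XH - x0) ** 2 + (YH - y0) ** 2)
    hologram += a * np.cos(phase)

print("hologram computed:", hologram.shape)
```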

  8. Computational design of high efficiency release targets for use at ISOL facilities

    CERN Document Server

    Liu, Y

    1999-01-01

    This report describes efforts made at the Oak Ridge National Laboratory to design high-efficiency-release targets that simultaneously incorporate the short diffusion lengths, high permeabilities, controllable temperatures, and heat-removal properties required for the generation of useful radioactive ion beam (RIB) intensities for nuclear physics and astrophysics research using the isotope separation on-line (ISOL) technique. Short diffusion lengths are achieved either by using thin fibrous target materials or by coating thin layers of selected target material onto low-density carbon fibers such as reticulated-vitreous-carbon fiber (RVCF) or carbon-bonded-carbon fiber (CBCF) to form highly permeable composite target matrices. Computational studies that simulate the generation and removal of primary beam deposited heat from target materials have been conducted to optimize the design of target/heat-sink systems for generating RIBs. The results derived from diffusion release-rate simulation studies for selected t...

  9. FY 1992 Blue Book: Grand Challenges: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  10. FY 1993 Blue Book: Grand Challenges 1993: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  11. Computed MISTR Requirement Changes and Parts Support - Analysis of a Mismatch.

    Science.gov (United States)

    1980-06-09

    the increased requirement of any item in the sample. Fourth, the D062 transaction register used by the ALCs EOQ IMSs to control their assets will be ... checked and telephone calls to the appropriate Defense Logistics Agency (DLA) IMSs will be made to see if the MISTR item requirement changes (pen or ... EOQ parts shortage. Those items that reflect pen or pencil reduction in requirement will be searched for EOQ parts surpluses through the EOQ IMSs. It

  12. Superconductor Requirements and Characterization for High Field Accelerator Magnets

    Energy Technology Data Exchange (ETDEWEB)

    Barzi, E.; Zlobin, A. V.

    2015-05-01

    The 2014 Particle Physics Project Prioritization Panel (P5) strategic plan for U.S. High Energy Physics (HEP) endorses a continued world leadership role in superconducting magnet technology for future Energy Frontier programs. This includes 10 to 15 T Nb3Sn accelerator magnets for LHC upgrades and a future 100 TeV scale pp collider, with the ultimate goal of developing magnet technologies above 20 T based on both High Temperature Superconductors (HTS) and Low Temperature Superconductors (LTS) for accelerator magnets. To achieve these objectives, a sound conductor development and characterization program is needed and is herein described. This program is intended to be conducted in close collaboration with U.S. and international labs, universities and industry.

  13. Shear Reinforcement Requirements for High-Strength Concrete Bridge Girders

    OpenAIRE

    Ramirez, J. A.; Aguilar, Gerardo

    2005-01-01

    A research program was conducted on the shear strength of high-strength concrete members. The objective was to evaluate the shear behavior and strength of concrete bridge members with compressive strengths in the range of 10 000 to 15 000 psi. The goal was to determine if the current minimum amount of shear reinforcement together with maximum spacing limits in the 2004 AASHTO LRFD Specifications, and the upper limit on the nominal shear strength were applicable to concrete compressive strengt...

  14. High efficiency of collisional Penrose process requires heavy particle production

    CERN Document Server

    Ogasawara, Kota; Miyamoto, Umpei

    2015-01-01

    The center-of-mass energy of two particles can become arbitrarily large if they collide near the event horizon of an extremal Kerr black hole, which is called the Bañados-Silk-West (BSW) effect. We consider such a high-energy collision of two particles which started from infinity and follow geodesics in the equatorial plane and investigate the energy extraction from such a high-energy particle collision and the production of particles in the equatorial plane. We analytically show that, on the one hand, if the produced particles are as massive as the colliding particles, the energy-extraction efficiency is bounded by $2.19$ approximately. On the other hand, if a very massive particle is to be produced as a result of the high-energy collision, which has negative energy and necessarily falls into the black hole, the upper limit of the energy-extraction efficiency is increased to $(2+\sqrt{3})^2 \simeq 13.9$. Thus, higher efficiency of the energy extraction, which is typically as large as 10, provide...

  15. Hybrid Computational Model for High-Altitude Aeroassist Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — A hybrid continuum/noncontinuum computational model will be developed for analyzing the aerodynamics and heating on aeroassist vehicles. Unique features of this...

  16. Distributed metadata in a high performance computing environment

    Energy Technology Data Exchange (ETDEWEB)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
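    The claimed method is, at its core, a key-value lookup routed to whichever burst buffer owns a given block's metadata. The minimal Python sketch below illustrates one plausible routing scheme, hashing the block identifier to pick an owner; the class and function names (BurstBuffer, owner_of) are hypothetical and the hashing step is an assumption, not the patented implementation.

        import hashlib

        class BurstBuffer:
            """Hypothetical burst buffer holding one shard of a distributed key-value store."""
            def __init__(self, name):
                self.name = name
                self.kv = {}  # local portion of the distributed key-value store

            def put_meta(self, block_id, metadata):
                self.kv[block_id] = metadata

            def get_meta(self, block_id):
                return self.kv.get(block_id)

        def owner_of(block_id, buffers):
            """Deterministically map a block id to the buffer storing its metadata (assumed scheme)."""
            digest = int(hashlib.sha1(block_id.encode()).hexdigest(), 16)
            return buffers[digest % len(buffers)]

        # Store metadata for a block, then service a metadata request for it.
        buffers = [BurstBuffer(f"bb{i}") for i in range(4)]
        owner_of("block-42", buffers).put_meta("block-42", {"size": 4096, "offset": 0})
        print(owner_of("block-42", buffers).get_meta("block-42"))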

  17. High-speed packet switching network to link computers

    CERN Document Server

    Gerard, F M

    1980-01-01

    Virtually all of the experiments conducted at CERN use minicomputers today; some simply acquire data and store results on magnetic tape while others actually control experiments and help to process the resulting data. Currently there are more than two hundred minicomputers being used in the laboratory. In order to provide the minicomputer users with access to facilities available on mainframes and also to provide intercommunication between various experimental minicomputers, CERN opted for a packet switching network back in 1975. It was decided to use Modcomp II computers as switching nodes. The only software to be taken was a communications-oriented operating system called Maxcom. Today eight Modcomp II 16-bit computers plus six newer Classic minicomputers from Modular Computer Services have been purchased for the CERNET data communications networks. The current configuration comprises 11 nodes connecting more than 40 user machines to one another and to the laboratory's central computing facility. (0 refs).

  18. Role of high-performance computing in science education

    Energy Technology Data Exchange (ETDEWEB)

    Sabelli, N.H. (National Center for Supercomputing Applications, Champaign, IL (US))

    1991-01-01

    This article is a report on the continuing activities of a group committed to enhancing the development and use of computational science techniques in education. Interested readers are encouraged to contact members of the Steering Committee or the project coordinator.

  19. Hybrid Computational Model for High-Altitude Aeroassist Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed effort addresses a need for accurate computational models to support aeroassist and entry vehicle system design over a broad range of flight conditions...

  20. High Interactivity Visualization Software for Large Computational Data Sets Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a collection of computer tools and libraries called SciViz that enable researchers to visualize large scale data sets on HPC resources remotely...

  1. High-performance computing at NERSC: Present and future

    Energy Technology Data Exchange (ETDEWEB)

    Koniges, A.E.

    1995-07-01

    The author describes the new T3D parallel computer at NERSC. The adaptive mesh ICF3D code is one of the current applications being ported and developed for use on the T3D. It has been stressed in other papers in these proceedings that the development environment and tools available on the parallel computer are similar to those planned for future systems, including networks of workstations.

  2. Providing a computing environment for a high energy physics workshop

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, C.; Butler, J.; Carter, T.; DeMar, P.; Fagan, D.; Gibbons, R.; Grigaliunas, V.; Haibeck, M.; Haring, P.; Horvath, C.; Hughart, N.; Johnstad, H.; Jones, S.; Kreymer, A.; LeBrun, P.; Lego, A.; Leninger, M.; Loebel, L.; McNamara, S.; Nguyen, T.; Nicholls, J.; O' Reilly, C.; Pabrai, U.; Pfister, J.; Ritchie, D.; Roberts, L.; Sazama, C.; Wohlt, D. (Fermi National Accelerator Lab., Batavia, IL (USA)); Carven, R. (Wiscons

    1989-12-01

    Although computing facilities have been provided at conferences and workshops remote from the host institution for some years, the equipment provided has rarely been capable of providing for much more than simple editing and electronic mail. This report documents the effort involved in providing a local computing facility with world-wide networking capability for a physics workshop so that we and others can benefit from the knowledge gained through the experience.

  3. High Performance Computing Innovation Service Portal Study (HPC-ISP)

    Science.gov (United States)

    2009-04-01

    based electronic commerce interface for the goods and services available through the brokerage service. This infrastructure will also support the... electronic commerce backend functionality for third parties that want to sell custom computing services. • Tailored Industry Portals are web portals for...broker shown in Figure 8 is essentially a web server that provides remote access to computing and software resources through an electronic commerce

  4. Design requirements and potential target users for brain-computer interfaces – recommendations from rehabilitation professionals

    NARCIS (Netherlands)

    Nijboer, F.; Plass-Oude Bos, D.; Blokland, Y.M.; Wijk, R. van; Farquhar, J.D.R.

    2014-01-01

    It is an implicit assumption in the field of brain-computer interfacing (BCI) that BCIs can be satisfactorily used to access augmentative and alternative communication (AAC) methods by people with severe physical disabilities. A one-day workshop and focus group interview was held to investigate this

  5. Computer-Based Instruction: A Background Paper on its Status, Cost/Effectiveness and Telecommunications Requirements.

    Science.gov (United States)

    Singh, Jai P.; Morgan, Robert P.

    In the slightly over twelve years since its inception, computer-based instruction (CBI) has shown the promise of being more cost-effective than traditional instruction for certain educational applications. Pilot experiments are underway to evaluate various CBI systems. Should these tests prove successful, a major problem confronting advocates of…

  6. Scheduling real-time indivisible loads with special resource allocation requirements on cluster computing

    Directory of Open Access Journals (Sweden)

    Abeer Hamdy

    2010-10-01

    Full Text Available The paper presents a heuristic algorithm to schedule real-time indivisible loads, represented as a directed sequential task graph, on a computing cluster. One of the cluster nodes has some special resources (denoted the special node) that may be needed by one of the indivisible loads

  7. Issues in undergraduate education in computational science and high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Marchioro, T.L. II; Martin, D. [Ames Lab., IA (United States)]

    1994-12-31

    The ever increasing need for mathematical and computational literacy within their society and among members of the work force has generated enormous pressure to revise and improve the teaching of related subjects throughout the curriculum, particularly at the undergraduate level. The Calculus Reform movement is perhaps the best known example of an organized initiative in this regard. The UCES (Undergraduate Computational Engineering and Science) project, an effort funded by the Department of Energy and administered through the Ames Laboratory, is sponsoring an informal and open discussion of the salient issues confronting efforts to improve and expand the teaching of computational science as a problem oriented, interdisciplinary approach to scientific investigation. Although the format is open, the authors hope to consider pertinent questions such as: (1) How can faculty and research scientists obtain the recognition necessary to further excellence in teaching the mathematical and computational sciences? (2) What sort of educational resources--both hardware and software--are needed to teach computational science at the undergraduate level? Are traditional procedural languages sufficient? Are PCs enough? Are massively parallel platforms needed? (3) How can electronic educational materials be distributed in an efficient way? Can they be made interactive in nature? How should such materials be tied to the World Wide Web and the growing "Information Superhighway"?

  8. Surgical accuracy in high tibial osteotomy: coronal equivalence of computer navigation and gap measurement.

    Science.gov (United States)

    Schröter, S; Ihle, C; Elson, D W; Döbele, S; Stöckle, U; Ateschrang, A

    2016-11-01

    Medial opening wedge high tibial osteotomy (MOW HTO) is now a successful operation with a range of indications, requiring an individualised approach to the choice of intended correction. This manuscript introduces the concept of surgical accuracy as the absolute deviation of the achieved correction from the intended correction, where small values represent greater accuracy. Surgical accuracy is compared in a randomised controlled trial (RCT) between gap measurement and computer navigation groups. This was a prospective RCT conducted over 3 years of 120 consecutive patients with varus malalignment and medial compartment osteoarthritis, who underwent MOW HTO. All procedures were planned with digital software. Patients were randomly assigned into gap measurement or computer navigation groups. Coronal plane alignment was judged using the mechanical tibiofemoral angle (mTFA), before and after surgery. Absolute (positive) values were calculated for surgical accuracy in each individual case. There was no significant difference in the mean intended correction between groups. The achieved mTFA revealed a small under-correction in both groups. This was attributed to a failure to account for saw blade thickness (gap measurement) and over-compensation for weight bearing (computer navigation). Surgical accuracy was 1.7° ± 1.2° (gap measurement) compared to 2.1° ± 1.4° (computer navigation) without statistical significance. The difference in tibial slope increases of 2.7° ± 3.9° (gap measurement) and 2.1° ± 3.9° (computer navigation) had statistical significance (P osteotomy for individual cases. This work is clinically relevant because coronal surgical accuracy was not superior in either group. Therefore, the increased expense and surgical time associated with navigated MOW HTO is not supported, because meticulously conducted gap measurement yields equivalent surgical accuracy. I.
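    Restating the definition given above (not a formula quoted from the paper), surgical accuracy for each case can be written in terms of the coronal correction measured by mTFA, so smaller values mean the achieved correction is closer to the plan:

        $\mathrm{accuracy} = \left| \Delta\mathrm{mTFA}_{\mathrm{achieved}} - \Delta\mathrm{mTFA}_{\mathrm{intended}} \right|$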

  9. Sympathetic Tone Induced by High Acoustic Tempo Requires Fast Respiration.

    Directory of Open Access Journals (Sweden)

    Ken Watanabe

    Full Text Available Many studies have revealed the influences of music, and particularly its tempo, on the autonomic nervous system (ANS) and respiration patterns. Since there is an interaction between the ANS and the respiratory system, namely sympatho-respiratory coupling, it is possible that the effect of musical tempo on the ANS is modulated by the respiratory system. Therefore, we investigated the effects of the relationship between musical tempo and respiratory rate on the ANS. Fifty-two healthy people aged 18-35 years participated in this study. Their respiratory rates were controlled by using a silent electronic metronome and they listened to simple drum sounds with a constant tempo. We varied the respiratory rate-acoustic tempo combination. The respiratory rate was controlled at 15 or 20 cycles per minute (CPM), the acoustic tempo was 60 or 80 beats per minute (BPM), or the environment was silent. Electrocardiograms and an elastic chest band were used to measure the heart rate and respiratory rate, respectively. The mean heart rate and heart rate variability (HRV) were regarded as indices of ANS activity. We observed a significant increase in the mean heart rate and the low-frequency (0.04-0.15 Hz) to high-frequency (0.15-0.40 Hz) ratio of HRV only when the respiratory rate was controlled at 20 CPM and the acoustic tempo was 80 BPM. We suggest that the effect of acoustic tempo on the sympathetic tone is modulated by the respiratory system.

  10. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
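    The patent describes fault detection by comparing chaotic-map trajectories generated on different computing components. Purely as an illustration of why chaotic maps are useful for this (and not as the patented method), the sketch below uses the logistic map: healthy components started from the same seed produce identical trajectories, while even a tiny corruption is exponentially amplified and is easy to detect by comparison.

        def logistic_trajectory(x0, steps, r=3.99, inject_fault=False):
            """Iterate the chaotic logistic map; optionally corrupt the state once mid-run."""
            x, traj = x0, []
            for i in range(steps):
                x = r * x * (1.0 - x)
                if inject_fault and i == steps // 2:
                    x += 1e-12  # simulated fault, e.g. a flipped low-order bit
                traj.append(x)
            return traj

        def diverged(traj_a, traj_b, tol=1e-6):
            """Chaotic sensitivity amplifies any corruption, so a plain comparison suffices."""
            return any(abs(a - b) > tol for a, b in zip(traj_a, traj_b))

        reference = logistic_trajectory(0.123456, 200)
        healthy = logistic_trajectory(0.123456, 200)
        faulty = logistic_trajectory(0.123456, 200, inject_fault=True)
        print(diverged(reference, healthy))  # False: trajectories match exactly
        print(diverged(reference, faulty))   # True: the injected fault has been amplified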

  11. High performance computing software package for multitemporal Remote-Sensing computations

    Directory of Open Access Journals (Sweden)

    Asaad Chahboun

    2010-10-01

    Full Text Available With the huge volume of satellite data now stored, multitemporal remote-sensing study is one of the most challenging fields of computer science. Multicore hardware support and multithreading can play an important role in speeding up algorithm computations. In the present paper, a software package, the Multitemporal Software Package for Satellite Remote Sensing data (MSPSRS), has been developed for the multitemporal treatment of satellite remote sensing images in a standard format. For portability, the interface was developed using the Qt application framework and the core was developed as integrated C++ classes. MSPSRS can run under different operating systems (e.g., Linux, Mac OS X, Windows, Embedded Linux, Windows CE). Final benchmark results, using multiple remote sensing biophysical indices, show a speedup of up to 6X on a quad-core i7 personal computer.
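    MSPSRS itself is a C++/Qt package; the hypothetical Python sketch below only illustrates the multithreading idea behind the reported speedup, computing a common biophysical index (NDVI) over image tiles with a thread pool. The tile split, worker count and synthetic data are assumptions for the example.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def ndvi(red, nir):
            """Normalized Difference Vegetation Index for one tile of the scene."""
            return (nir - red) / (nir + red + 1e-9)

        def tiles(band, n):
            return np.array_split(band, n, axis=0)

        # Synthetic red and near-infrared bands standing in for a satellite scene.
        red = np.random.rand(4000, 4000).astype(np.float32)
        nir = np.random.rand(4000, 4000).astype(np.float32)

        # NumPy releases the GIL for large array operations, so threads give real overlap here.
        with ThreadPoolExecutor(max_workers=4) as pool:
            parts = pool.map(ndvi, tiles(red, 8), tiles(nir, 8))
        result = np.vstack(list(parts))
        print(result.shape)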

  12. High resolution weather data for urban hydrological modelling and impact assessment, ICT requirements and future challenges

    Science.gov (United States)

    ten Veldhuis, Marie-claire; van Riemsdijk, Birna

    2013-04-01

    Hydrological analysis of urban catchments requires high resolution rainfall and catchment information because of the small size of these catchments, high spatial variability of the urban fabric, fast runoff processes and related short response times. Rainfall information available from traditional radar and rain gauge networks does not meet the relevant scales of urban hydrology. A new type of weather radar, based on X-band frequency and equipped with Doppler and dual polarimetry capabilities, promises to provide more accurate rainfall estimates at the spatial and temporal scales that are required for urban hydrological analysis. Recently, the RAINGAIN project was started to analyse the applicability of this new type of radar in the context of urban hydrological modelling. In this project, meteorologists and hydrologists work closely together in several stages of urban hydrological analysis: from the acquisition procedure of novel and high-end radar products to data acquisition and processing, rainfall data retrieval, hydrological event analysis and forecasting. The project comprises four pilot locations with various characteristics of weather radar equipment, ground stations, urban hydrological systems, modelling approaches and requirements. Access to data processing and modelling software is handled in different ways in the pilots, depending on ownership and user context. Sharing of data and software among pilots and with the outside world is an ongoing topic of discussion. The availability of high resolution weather data augments requirements with respect to the resolution of hydrological models and input data. This has led to the development of fully distributed hydrological models, the implementation of which remains limited by the unavailability of hydrological input data. On the other hand, if models are to be used in flood forecasting, hydrological models need to be computationally efficient to enable fast responses to extreme event conditions. This

  13. A Computer Based Decision Support System for Tailoring Logistics Support Analysis Record (LSAR) Requirements

    Science.gov (United States)

    1989-09-01

    L-7190, Preliminary Maintenance Allocation Chart: The Preliminary Maintenance Allocation Chart (PMAC) is a list of all items, down to the lowest level...operations, and remarks required to explain the maintenance operations. The PMAC includes additional data (over and above the required MAC data) and may be...used to develop the MAC for the organizational technical manual. Use LSAR Input data records C, Dl, H, H1 to arrange the data in PMAC format and

  14. Computer program for calculating flow parameters and power requirements for cryogenic wind tunnels

    Science.gov (United States)

    Dress, D. A.

    1985-01-01

    A computer program has been written that performs the flow parameter calculations for cryogenic wind tunnels which use nitrogen as a test gas. The flow parameters calculated include static pressure, static temperature, compressibility factor, ratio of specific heats, dynamic viscosity, total and static density, velocity, dynamic pressure, mass-flow rate, and Reynolds number. Simplifying assumptions have been made so that the calculations of Reynolds number, as well as the other flow parameters can be made on relatively small desktop digital computers. The program, which also includes various power calculations, has been developed to the point where it has become a very useful tool for the users and possible future designers of fan-driven continuous-flow cryogenic wind tunnels.
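    The NASA program computes real-gas nitrogen properties; as a rough orientation only, the sketch below evaluates the same kinds of quantities (static conditions, dynamic pressure, Reynolds number) with ideal-gas isentropic relations and a Sutherland viscosity fit for nitrogen. The numeric inputs are made up, and real-gas corrections matter at cryogenic conditions, so this is a stand-in rather than the published method.

        import math

        def flow_parameters(p_total_pa, t_total_k, mach, ref_length_m,
                            gamma=1.4, r_gas=296.8):
            """Ideal-gas isentropic flow parameters for nitrogen (r_gas in J/(kg*K))."""
            t_static = t_total_k / (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)
            p_static = p_total_pa * (t_static / t_total_k) ** (gamma / (gamma - 1.0))
            rho = p_static / (r_gas * t_static)
            velocity = mach * math.sqrt(gamma * r_gas * t_static)
            q = 0.5 * rho * velocity ** 2  # dynamic pressure
            # Approximate Sutherland viscosity correlation for nitrogen (reference values assumed).
            mu = 1.781e-5 * (t_static / 300.55) ** 1.5 * (300.55 + 111.0) / (t_static + 111.0)
            reynolds = rho * velocity * ref_length_m / mu
            return {"T_static_K": t_static, "p_static_Pa": p_static, "q_Pa": q, "Re": reynolds}

        print(flow_parameters(p_total_pa=1.2e5, t_total_k=120.0, mach=0.8, ref_length_m=0.25))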

  15. A Highly Efficient Parallel Algorithm for Computing the Fiedler Vector

    CERN Document Server

    Manguoglu, Murat

    2010-01-01

    The eigenvector corresponding to the second smallest eigenvalue of the Laplacian of a graph, known as the Fiedler vector, has a number of applications in areas that include matrix reordering, graph partitioning, protein analysis, data mining, machine learning, and web search. The computation of the Fiedler vector has been regarded as an expensive process as it involves solving a large eigenvalue problem. We present a novel and efficient parallel algorithm for computing the Fiedler vector of large graphs based on the Trace Minimization algorithm (Sameh et al.). We compare the parallel performance of our method with a multilevel scheme, designed specifically for computing the Fiedler vector, which is implemented in routine MC73_Fiedler of the Harwell Subroutine Library (HSL). In addition, we compare the quality of the Fiedler vector for the application of weighted matrix reordering and provide a metric for measuring the quality of reordering.
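    The paper's contribution is a scalable parallel Trace Minimization solver; for orientation only, the sketch below computes the Fiedler vector of a small graph with a dense eigendecomposition in NumPy, which is the brute-force reference rather than the algorithm described above.

        import numpy as np

        def fiedler_vector(adjacency):
            """Eigenvector of the graph Laplacian for the second-smallest eigenvalue."""
            a = np.asarray(adjacency, dtype=float)
            laplacian = np.diag(a.sum(axis=1)) - a
            eigvals, eigvecs = np.linalg.eigh(laplacian)  # eigenvalues in ascending order
            return eigvecs[:, 1]  # index 0 belongs to the constant eigenvector

        # A path graph 0-1-2-3: the sign pattern of the Fiedler vector splits it into two halves.
        adj = np.array([[0, 1, 0, 0],
                        [1, 0, 1, 0],
                        [0, 1, 0, 1],
                        [0, 0, 1, 0]])
        print(fiedler_vector(adj))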

  16. High-pressure fluid phase equilibria phenomenology and computation

    CERN Document Server

    Deiters, Ulrich K

    2012-01-01

    The book begins with an overview of the phase diagrams of fluid mixtures (fluid = liquid, gas, or supercritical state), which can show an astonishing variety when elevated pressures are taken into account; phenomena like retrograde condensation (single and double) and azeotropy (normal and double) are discussed. It then gives an introduction to the relevant thermodynamic equations for fluid mixtures, including some that are rarely found in modern textbooks, and shows how they can be used to compute phase diagrams and related properties. This chapter gives a consistent and axiomatic approach to fluid thermodynamics; it avoids using activity coefficients. Further chapters are dedicated to solid-fluid phase equilibria and global phase diagrams (systematic search for phase diagram classes). The appendix contains numerical algorithms needed for the computations. The book thus enables the reader to create or improve computer programs for the calculation of fluid phase diagrams. introduces phase diagram class...

  17. A PROFICIENT MODEL FOR HIGH END SECURITY IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    R. Bala Chandar

    2014-01-01

    Full Text Available Cloud computing is an inspiring technology due to its abilities, such as ensuring scalable services and reducing the burden of local hardware and software management, while increasing flexibility and scalability. A key trait of cloud services is remote processing of data. Even though this technology offers many services, there are a few concerns, such as misbehavior of server-side stored data, loss of the data owner's control over their data, and lack of access control over outsourced data as desired by the data owner. To handle these issues, we propose a new model that ensures data correctness for assurance of stored data, distributed accountability for authentication, and efficient access control of outsourced data for authorization. This model strengthens the correctness of data, helps to achieve cloud data integrity, supports the data owner in keeping control of their own data through tracking, and improves the access control of outsourced data.

  18. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    Science.gov (United States)

    Li, Xiangrui; Lu, Zhong-Lin

    2012-02-29

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high resolution (14 or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect
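    The VideoSwitcher combines the red and blue analog outputs with unequal weights so that one channel provides coarse steps and the other fine sub-steps. The toy sketch below assumes a blue-to-red weight of 1/128 to show how a single high-resolution luminance value could be split into two 8-bit channel values; the actual hardware weight and its calibration differ and must be measured.

        def split_luminance(level, weight=1.0 / 128.0, levels_per_channel=256):
            """Split a high-resolution luminance level into a coarse (red) and fine (blue) part.

            The combined output is red + weight * blue, so the blue channel refines each
            red step into roughly 1/weight sub-steps (about 7 extra bits for weight = 1/128).
            """
            red = min(int(level), levels_per_channel - 1)
            blue = int(round((level - red) / weight))
            blue = min(max(blue, 0), levels_per_channel - 1)
            return red, blue

        def combined(red, blue, weight=1.0 / 128.0):
            return red + weight * blue

        r, b = split_luminance(100.5703125)  # a luminance between two 8-bit steps
        print(r, b, combined(r, b))          # reconstructs approximately 100.57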

  19. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H., III

    2011-08-01

    Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.
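    Given per-node voltage and current samples of the kind collected here, energy is just the time integral of power, and the savings figure is a ratio of two such integrals. The sketch below shows that bookkeeping with made-up sample values; the 39% quoted above comes from the report's measurements, not from this toy data.

        def energy_joules(voltage_samples, current_samples, dt_seconds):
            """Approximate energy as the sum of instantaneous power times the sample interval."""
            return sum(v * i for v, i in zip(voltage_samples, current_samples)) * dt_seconds

        # Hypothetical 1 Hz samples for one node, before and after CPU/network tuning.
        baseline = energy_joules([12.0] * 600, [18.0] * 600, dt_seconds=1.0)
        tuned = energy_joules([12.0] * 640, [10.5] * 640, dt_seconds=1.0)  # a bit slower, much lower current
        savings_pct = 100.0 * (baseline - tuned) / baseline
        print(f"baseline={baseline:.0f} J, tuned={tuned:.0f} J, savings={savings_pct:.1f}%")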

  20. Application of High Performance Computing for Simulations of N-Dodecane Jet Spray with Evaporation

    Science.gov (United States)

    2016-11-01

    ARL-TR-7873 (NOV 2016), US Army Research Laboratory: Application of High Performance Computing for Simulations of N-Dodecane Jet Spray with Evaporation, by Luis...

  1. Analog computation through high-dimensional physical chaotic neuro-dynamics

    Science.gov (United States)

    Horio, Yoshihiko; Aihara, Kazuyuki

    2008-07-01

    Conventional von Neumann computers have difficulty in solving complex and ill-posed real-world problems. However, living organisms often face such problems in real life, and must quickly obtain suitable solutions through physical, dynamical, and collective computations involving vast assemblies of neurons. These highly parallel computations through high-dimensional dynamics (computation through dynamics) are completely different from the numerical computations on von Neumann computers (computation through algorithms). In this paper, we explore a novel computational mechanism with high-dimensional physical chaotic neuro-dynamics. We physically constructed two hardware prototypes using analog chaotic-neuron integrated circuits. These systems combine analog computations with chaotic neuro-dynamics and digital computation through algorithms. We used quadratic assignment problems (QAPs) as benchmarks. The first prototype utilizes an analog chaotic neural network with 800-dimensional dynamics. An external algorithm constructs a solution for a QAP using the internal dynamics of the network. In the second system, 300-dimensional analog chaotic neuro-dynamics drive a tabu-search algorithm. We demonstrate experimentally that both systems efficiently solve QAPs through physical chaotic dynamics. We also qualitatively analyze the underlying mechanism of the highly parallel and collective analog computations by observing global and local dynamics. Furthermore, we introduce spatial and temporal mutual information to quantitatively evaluate the system dynamics. The experimental results confirm the validity and efficiency of the proposed computational paradigm with the physical analog chaotic neuro-dynamics.

  2. Computer program for high pressure real gas effects

    Science.gov (United States)

    Johnson, R. C.

    1969-01-01

    Computer program obtains the real-gas isentropic flow functions and thermodynamic properties of gases for which the equation of state is known. The program uses FORTRAN 4 subroutines which were designed for calculations of nitrogen and helium. These subroutines are easily modified for calculations of other gases.

  3. From needs to requirements for computer systems: the added value of ergonomics in needs analysis.

    Science.gov (United States)

    Couix, Stanislas; Darses, Françoise; De-La-Garza, Cecilia

    2012-01-01

    It is widely recognised that ergonomists must contribute during needs analysis. However, few studies have investigated the specific contributions of ergonomists at this stage of the design process. In this study, this contribution is studied through the requirement document produced by the design team. For each requirement, the source (i.e. who formulated the requirement), justification (why the requirement is needed), type (functional, interaction, operational, physical, organizational), and scope (entire system or part thereof) were analysed. Results indicate that the various actors are complementary and work collectively to define the various dimensions of the system. With end-users, the ergonomist worked on the global aspects of the system: function, conditions of use and organizational dimension. Alone, he defined the global interaction of the system. The various functions derived from the global function were defined in collaboration with engineers. However, while engineers contributed to defining how these functions would work, as well as their technical conditions of use, the ergonomist focused on their purpose, and, with end-users, on their organizational aspects. Finally, results suggest that neither the ergonomist's specific knowledge in ergonomics, nor work analysis were sufficient to derive his requirements; both are mandatory.

  4. Cloud CPFP: a shotgun proteomics data analysis pipeline using cloud and high performance computing.

    Science.gov (United States)

    Trudgian, David C; Mirzaei, Hamid

    2012-12-07

    We have extended the functionality of the Central Proteomics Facilities Pipeline (CPFP) to allow use of remote cloud and high performance computing (HPC) resources for shotgun proteomics data processing. CPFP has been modified to include modular local and remote scheduling for data processing jobs. The pipeline can now be run on a single PC or server, a local cluster, a remote HPC cluster, and/or the Amazon Web Services (AWS) cloud. We provide public images that allow easy deployment of CPFP in its entirety in the AWS cloud. This significantly reduces the effort necessary to use the software, and allows proteomics laboratories to pay for compute time ad hoc, rather than obtaining and maintaining expensive local server clusters. Alternatively the Amazon cloud can be used to increase the throughput of a local installation of CPFP as necessary. We demonstrate that cloud CPFP allows users to process data at higher speed than local installations but with similar cost and lower staff requirements. In addition to the computational improvements, the web interface to CPFP is simplified, and other functionalities are enhanced. The software is under active development at two leading institutions and continues to be released under an open-source license at http://cpfp.sourceforge.net.

  5. Building highly available control system applications with Advanced Telecom Computing Architecture and open standards

    Science.gov (United States)

    Kazakov, Artem; Furukawa, Kazuro

    2010-11-01

    Requirements for modern and future control systems for large projects like the International Linear Collider demand high availability for control system components. Recently the telecom industry produced an open hardware specification, the Advanced Telecom Computing Architecture (ATCA), aimed at better reliability, availability and serviceability. Since its first market appearance in 2004, the ATCA platform has shown tremendous growth and proved to be stable and well represented by a number of vendors. ATCA is an industry standard for highly available systems. In parallel, the Service Availability Forum (SAF), a consortium of leading communications and computing companies, describes the interaction between hardware and software. SAF defines a set of specifications such as the Hardware Platform Interface and the Application Interface Specification. SAF specifications provide an extensive description of highly available systems, services and their interfaces. Originally aimed at telecom applications, these specifications can be used for accelerator controls software as well. This study describes the benefits of using these specifications and their possible adoption in accelerator control systems. It is demonstrated how the EPICS Redundant IOC was extended using the Hardware Platform Interface specification, which made it possible to utilize the benefits of the ATCA platform.

  6. A survey on resource allocation in high performance distributed computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Hussain, Hameed; Malik, Saif Ur Rehman; Hameed, Abdul; Khan, Samee Ullah; Bickler, Gage; Min-Allah, Nasro; Qureshi, Muhammad Bilal; Zhang, Limin; Yongji, Wang; Ghani, Nasir; Kolodziej, Joanna; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal; Li, Hongxiang; Wang, Lizhe; Chen, Dan; Rayes, Ammar

    2013-11-01

    Efficient resource allocation is a fundamental requirement in high performance computing (HPC) systems. Many projects dedicated to large-scale distributed computing systems have designed and developed resource allocation mechanisms with a variety of architectures and services. In this study, a comprehensive survey describing resource allocation in various HPC systems is reported. The aim of the work is to aggregate, under a joint framework, the existing solutions for HPC and to provide a thorough analysis of the characteristics of the resource management and allocation strategies. Resource allocation mechanisms and strategies play a vital role in the performance improvement of all HPC classifications. Therefore, a comprehensive discussion of widely used resource allocation strategies deployed in HPC environments is required, which is one of the motivations of this survey. Moreover, we have classified the HPC systems into three broad categories, namely (a) cluster, (b) grid, and (c) cloud systems, and define the characteristics of each class by extracting sets of common attributes. All of the aforementioned systems are cataloged into pure software and hybrid/hardware solutions. The system classification is used to identify the approaches followed by the implementation of existing resource allocation strategies that are widely presented in the literature.

  7. Ftklipse - Design and Implementation of an Extendable Computer Forensics Environment: Software Requirements Specification Document

    CERN Document Server

    Laverdière, Marc-André; Tsapa, Suhasini; Benredjem, Djamel

    2009-01-01

    The purpose of this article is to describe the features of Ftklipse, an extendable platform for computer forensics. This document is designed to provide a detailed specification for the developers of Ftklipse. Ftklipse is a thick-client solution for forensics investigation. It is designed to collect and preserve evidence, to analyze it and to report on it. It supports chain of custody management, access control policies, and batch operation of its included tools in order to facilitate and accelerate the investigation. The environment itself and its tools are configurable as well, and the platform is based on Eclipse.

  8. Central Issues in the Use of Computer-Based Materials for High Volume Entrepreneurship Education

    Science.gov (United States)

    Cooper, Billy

    2007-01-01

    This article discusses issues relating to the use of computer-based learning (CBL) materials for entrepreneurship education at university level. It considers CBL as a means of addressing the increased volume and range of provision required in the current context. The issues raised in this article have importance for all forms of computer-based…

  9. A heterogeneous and parallel computing framework for high-resolution hydrodynamic simulations

    Science.gov (United States)

    Smith, Luke; Liang, Qiuhua

    2015-04-01

    Shock-capturing hydrodynamic models are now widely applied in the context of flood risk assessment and forecasting, accurately capturing the behaviour of surface water over ground and within rivers. Such models are generally explicit in their numerical basis, and can be computationally expensive; this has prohibited full use of high-resolution topographic data for complex urban environments, now easily obtainable through airborne altimetric surveys (LiDAR). As processor clock speed advances have stagnated in recent years, further computational performance gains are largely dependent on the use of parallel processing. Heterogeneous computing architectures (e.g. graphics processing units or compute accelerator cards) provide a cost-effective means of achieving high throughput in cases where the same calculation is performed with a large input dataset. In recent years this technique has been applied successfully for flood risk mapping, such as within the national surface water flood risk assessment for the United Kingdom. We present a flexible software framework for hydrodynamic simulations across multiple processors of different architectures, within multiple computer systems, enabled using OpenCL and Message Passing Interface (MPI) libraries. A finite-volume Godunov-type scheme is implemented using the HLLC approach to solving the Riemann problem, with optional extension to second-order accuracy in space and time using the MUSCL-Hancock approach. The framework is successfully applied on personal computers and a small cluster to provide considerable improvements in performance. The most significant performance gains were achieved across two servers, each containing four NVIDIA GPUs, with a mix of K20, M2075 and C2050 devices. Advantages are found with respect to decreased parametric sensitivity, and thus in reducing uncertainty, for a major fluvial flood within a large catchment during 2005 in Carlisle, England. Simulations for the three-day event could be performed
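    The framework above implements a finite-volume Godunov-type scheme with an HLLC Riemann solver, optional MUSCL-Hancock reconstruction, and OpenCL/MPI parallelism. As a much-reduced illustration of the underlying update structure only, the sketch below advances the 1D shallow-water equations by one first-order step using the simpler HLL flux; it omits the contact-wave treatment, higher-order reconstruction and GPU execution of the actual code.

        import numpy as np

        G = 9.81  # gravitational acceleration

        def hll_flux(hL, huL, hR, huR):
            """HLL approximate Riemann flux for the 1D shallow-water equations."""
            uL, uR = huL / hL, huR / hR
            cL, cR = np.sqrt(G * hL), np.sqrt(G * hR)
            sL = min(uL - cL, uR - cR)  # simple left/right wave-speed estimates
            sR = max(uL + cL, uR + cR)
            fL = np.array([huL, huL * uL + 0.5 * G * hL ** 2])
            fR = np.array([huR, huR * uR + 0.5 * G * hR ** 2])
            if sL >= 0.0:
                return fL
            if sR <= 0.0:
                return fR
            dU = np.array([hR - hL, huR - huL])
            return (sR * fL - sL * fR + sL * sR * dU) / (sR - sL)

        def step(h, hu, dx, dt):
            """One first-order Godunov update; edge cells are left fixed (crude boundaries)."""
            flux = [hll_flux(h[i], hu[i], h[i + 1], hu[i + 1]) for i in range(len(h) - 1)]
            for i in range(1, len(h) - 1):
                dF = flux[i] - flux[i - 1]
                h[i] -= dt / dx * dF[0]
                hu[i] -= dt / dx * dF[1]
            return h, hu

        # Dam-break initial condition on a short 1D domain.
        h = np.where(np.arange(100) < 50, 2.0, 1.0).astype(float)
        hu = np.zeros(100)
        h, hu = step(h, hu, dx=1.0, dt=0.1)
        print(h[48:53])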

  10. Increased insulin requirements during exercise at very high altitude in type 1 diabetes

    NARCIS (Netherlands)

    de Mol, Pieter; de Vries, Suzanna T.; de Koning, Eelco J. P.; Gans, Rijk O. B.; Tack, Cees J.; Bilo, Henk J. G.

    2011-01-01

    OBJECTIVE-Safe, very high altitude trekking in subjects with type 1 diabetes requires understanding of glucose regulation at high altitude. We investigated insulin requirements, energy expenditure, and glucose levels at very high altitude in relation to acute mountain sickness (AMS) symptoms in indi

  11. Increased insulin requirements during exercise at very high altitude in type 1 diabetes

    NARCIS (Netherlands)

    Mol, P. De; Vries, S.T. de; Koning, E.J. de; Gans, R.O.; Tack, C.J.J.; Bilo, H.J.

    2011-01-01

    OBJECTIVE: Safe, very high altitude trekking in subjects with type 1 diabetes requires understanding of glucose regulation at high altitude. We investigated insulin requirements, energy expenditure, and glucose levels at very high altitude in relation to acute mountain sickness (AMS) symptoms in ind

  13. Increased insulin requirements during exercise at very high altitude in type 1 diabetes

    NARCIS (Netherlands)

    de Mol, Pieter; de Vries, Suzanna T.; de Koning, Eelco J. P.; Gans, Rijk O. B.; Tack, Cees J.; Bilo, Henk J. G.

    OBJECTIVE-Safe, very high altitude trekking in subjects with type 1 diabetes requires understanding of glucose regulation at high altitude. We investigated insulin requirements, energy expenditure, and glucose levels at very high altitude in relation to acute mountain sickness (AMS) symptoms in

  14. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Device Status Data

    Science.gov (United States)

    2015-09-01

    5.1.1 Basic Components: The Hydra data processing framework provides an object-oriented hierarchy for organizing data processing within an HPC... (ARL-CR-0780, SEP 2015, US Army Research Laboratory).

  15. Allocating Tactical High-Performance Computer (HPC) Resources to Offloaded Computation in Battlefield Scenarios

    Science.gov (United States)

    2013-12-01

    devices. Offloading solutions such as Cuckoo (12), MAUI (13), COMET (14), and ThinkAir (15) offload applications via Wi-Fi or 3G networks to servers or...

  16. Computational study of developing high-quality decision trees

    Science.gov (United States)

    Fu, Zhiwei

    2002-03-01

    Recently, decision tree algorithms have been widely used in dealing with data mining problems to find out valuable rules and patterns. However, scalability, accuracy and efficiency are significant concerns regarding how to effectively deal with large and complex data sets in the implementation. In this paper, we propose an innovative machine learning approach (we call our approach GAIT), combining genetic algorithm, statistical sampling, and decision tree, to develop intelligent decision trees that can alleviate some of these problems. We design our computational experiments and run GAIT on three different data sets (namely Socio-Olympic data, Westinghouse data, and FAA data) to test its performance against a standard decision tree algorithm, a neural network classifier, and a statistical discriminant technique, respectively. The computational results show that our approach outperforms the standard decision tree algorithm substantially at lower sampling levels, and achieves significantly better results with less effort than both neural network and discriminant classifiers.

  17. A High Performance SOAP Engine for Grid Computing

    Science.gov (United States)

    Wang, Ning; Welzl, Michael; Zhang, Liang

    Web Service technology still has many defects that make its usage for Grid computing problematic, most notably the low performance of the SOAP engine. In this paper, we develop a novel SOAP engine called SOAPExpress, which adopts two key techniques for improving processing performance: SCTP data transport and dynamic early binding based data mapping. Experimental results show a significant and consistent performance improvement of SOAPExpress over Apache Axis.

  18. Parallel-META 2.0: enhanced metagenomic data analysis with functional annotation, high performance computing and advanced visualization.

    Directory of Open Access Journals (Sweden)

    Xiaoquan Su

    Full Text Available The metagenomic method directly sequences and analyses genome information from microbial communities. The main computational tasks for metagenomic analyses include taxonomical and functional structure analysis for all genomes in a microbial community (also referred to as a metagenomic sample). With the advancement of Next Generation Sequencing (NGS) techniques, the number of metagenomic samples and the data size for each sample are increasing rapidly. Current metagenomic analysis is both data- and computation-intensive, especially when there are many species in a metagenomic sample, and each has a large number of sequences. As such, metagenomic analyses require extensive computational power. The increasing analytical requirements further augment the challenges for computation analysis. In this work, we have proposed Parallel-META 2.0, a metagenomic analysis software package, to cope with such needs for efficient and fast analyses of taxonomical and functional structures for microbial communities. Parallel-META 2.0 is an extended and improved version of Parallel-META 1.0, which enhances the taxonomical analysis using multiple databases, improves computation efficiency by optimized parallel computing, and supports interactive visualization of results in multiple views. Furthermore, it enables functional analysis for metagenomic samples including short-reads assembly, gene prediction and functional annotation. Therefore, it could provide accurate taxonomical and functional analyses of the metagenomic samples in high-throughput manner and on large scale.

  19. Parallel-META 2.0: enhanced metagenomic data analysis with functional annotation, high performance computing and advanced visualization.

    Science.gov (United States)

    Su, Xiaoquan; Pan, Weihua; Song, Baoxing; Xu, Jian; Ning, Kang

    2014-01-01

    The metagenomic method directly sequences and analyses genome information from microbial communities. The main computational tasks for metagenomic analyses include taxonomical and functional structure analysis for all genomes in a microbial community (also referred to as a metagenomic sample). With the advancement of Next Generation Sequencing (NGS) techniques, the number of metagenomic samples and the data size for each sample are increasing rapidly. Current metagenomic analysis is both data- and computation- intensive, especially when there are many species in a metagenomic sample, and each has a large number of sequences. As such, metagenomic analyses require extensive computational power. The increasing analytical requirements further augment the challenges for computation analysis. In this work, we have proposed Parallel-META 2.0, a metagenomic analysis software package, to cope with such needs for efficient and fast analyses of taxonomical and functional structures for microbial communities. Parallel-META 2.0 is an extended and improved version of Parallel-META 1.0, which enhances the taxonomical analysis using multiple databases, improves computation efficiency by optimized parallel computing, and supports interactive visualization of results in multiple views. Furthermore, it enables functional analysis for metagenomic samples including short-reads assembly, gene prediction and functional annotation. Therefore, it could provide accurate taxonomical and functional analyses of the metagenomic samples in high-throughput manner and on large scale.

  20. Simulation of cardiac electrophysiology on next-generation high-performance computers.

    Science.gov (United States)

    Bordas, Rafel; Carpentieri, Bruno; Fotia, Giorgio; Maggio, Fabio; Nobes, Ross; Pitt-Francis, Joe; Southern, James

    2009-05-28

    Models of cardiac electrophysiology consist of a system of partial differential equations (PDEs) coupled with a system of ordinary differential equations representing cell membrane dynamics. Current software to solve such models does not provide the required computational speed for practical applications. One reason for this is that little use is made of recent developments in adaptive numerical algorithms for solving systems of PDEs. Studies have suggested that a speedup of up to two orders of magnitude is possible by using adaptive methods. The challenge lies in the efficient implementation of adaptive algorithms on massively parallel computers. The finite-element (FE) method is often used in heart simulators as it can encapsulate the complex geometry and small-scale details of the human heart. An alternative is the spectral element (SE) method, a high-order technique that provides the flexibility and accuracy of FE, but with a reduced number of degrees of freedom. The feasibility of implementing a parallel SE algorithm based on fully unstructured all-hexahedra meshes is discussed. A major computational task is solution of the large algebraic system resulting from FE or SE discretization. Choice of linear solver and preconditioner has a substantial effect on efficiency. A fully parallel implementation based on dynamic partitioning that accounts for load balance, communication and data movement costs is required. Each of these methods must be implemented on next-generation supercomputers in order to realize the necessary speedup. The problems that this may cause, and some of the techniques that are beginning to be developed to overcome these issues, are described.
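    Models of this kind couple a diffusion PDE for the transmembrane voltage to ODEs for the membrane state. As a hedged illustration of that coupling only (simple finite differences and operator splitting, not the adaptive FE/SE and parallel solver techniques discussed above), the sketch below propagates an excitation wave along a 1D fibre with FitzHugh-Nagumo kinetics; all parameter values are illustrative.

        import numpy as np

        def fhn_step(v, w, dt, a=0.1, eps=0.01, beta=0.5, gamma=1.0):
            """Explicit update of FitzHugh-Nagumo membrane kinetics (the ODE part)."""
            dv = v * (1.0 - v) * (v - a) - w
            dw = eps * (beta * v - gamma * w)
            return v + dt * dv, w + dt * dw

        def diffuse(v, dt, dx, d=1.0e-3):
            """Explicit finite-difference diffusion step (the PDE part), crude no-flux boundaries."""
            lap = np.zeros_like(v)
            lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx ** 2
            lap[0], lap[-1] = lap[1], lap[-2]
            return v + dt * d * lap

        # 1D fibre with a stimulated region on the left; the wavefront travels rightward.
        n, dx, dt = 200, 0.01, 0.01
        v, w = np.zeros(n), np.zeros(n)
        v[:10] = 1.0
        for _ in range(2000):
            v, w = fhn_step(v, w, dt)  # reaction half of the operator split
            v = diffuse(v, dt, dx)     # diffusion half of the operator split
        print(round(float(v.max()), 3), int(v.argmax()))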

  1. Improving the Air Force’s Computation of Spares Requirements: The Effects of Engines.

    Science.gov (United States)

    1986-12-01

    In addition to quantifying the effects of engines, this report suggests specific ways in which engines can be included in the availability-based...consideration is accurate portrayal of the indenture structure for engine components. Despite their physical attachment to the engine, some - notably fuel ...prorated fractions (based on usage factors) of the demands and assets applicable to the weapon systems of interest. Nondemand-based requirements, such as

  2. The computer simulation of automobile use patterns for defining battery requirements for electric cars

    Science.gov (United States)

    Schwartz, H.-J.

    1976-01-01

    The modeling process of a complex system, based on the calculation and optimization of the system parameters, is complicated in that some parameters can be expressed only as probability distributions. In the present paper, a Monte Carlo technique was used to determine the daily range requirements of an electric road vehicle in the United States from probability distributions of trip lengths, frequencies, and average annual mileage data. The analysis shows that a daily range of 82 miles meets 95% of the car-owner requirements at all times, with the exception of long vacation trips. Further, it is shown that the requirement of a daily range of 82 miles can be met by an (intermediate-level) battery technology characterized by an energy density of 30 to 50 Watt-hours per pound. Candidate batteries in this class are nickel-zinc, nickel-iron, and iron-air. These results imply that long-term research goals for battery systems should be focused on lower cost and longer service life, rather than on higher energy densities.
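    The Monte Carlo idea is to draw each simulated day's trips from distributions of trip frequency and trip length, and then read off the daily mileage that covers a chosen fraction of days. The sketch below uses made-up Poisson/lognormal distributions purely to illustrate the procedure; the 82-mile figure above comes from the paper's U.S. usage data, not from this toy model.

        import random

        def poisson(rng, lam):
            """Knuth's method for sampling a Poisson variate with small lambda."""
            limit, k, p = pow(2.718281828459045, -lam), 0, 1.0
            while True:
                p *= rng.random()
                if p <= limit:
                    return k
                k += 1

        def simulate_daily_miles(days=100_000, mean_trips_per_day=3.2, seed=1):
            """Monte Carlo sample of total miles driven per day (illustrative distributions)."""
            rng = random.Random(seed)
            totals = []
            for _ in range(days):
                n_trips = poisson(rng, mean_trips_per_day)
                totals.append(sum(rng.lognormvariate(1.6, 0.9) for _ in range(n_trips)))
            return totals

        def percentile(values, q):
            ordered = sorted(values)
            return ordered[int(q * (len(ordered) - 1))]

        daily = simulate_daily_miles()
        print("daily range covering 95% of days:", round(percentile(daily, 0.95), 1), "miles")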

  3. Study of application technology of ultra-high speed computer to the elucidation of complex phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Sekiguchi, Tomotsugu [Electrotechnical Lab., Tsukuba, Ibaraki (Japan)]

    1996-06-01

    As a first step toward applying ultra-high-speed computers to the elucidation of complex phenomena, the basic design of a numerical information library for decentralized computer networks is described. The system makes it possible to construct an efficient application environment for ultra-high-speed computers that is scalable across different computing systems. We named the system Ninf (Network Information Library for High Performance Computing). The library technology is summarized as follows: use of the library in a distributed environment, numeric constants, retrieval of values, a library of special functions, a computing library, the Ninf library interface, the Ninf remote library, and registration. With this system, users can draw on programs that concentrate numerical-analysis expertise, with high precision, reliability, and speed. (S.Y.)

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  5. Adachi-like chaotic neural networks requiring linear-time computations by enforcing a tree-shaped topology.

    Science.gov (United States)

    Qin, Ke; Oommen, B John

    2009-11-01

    The Adachi neural network (AdNN) is a fascinating neural network (NN) which has been shown to possess chaotic properties, and to also demonstrate associative memory (AM) and pattern recognition (PR) characteristics. Variants of the AdNN have also been used to obtain other PR phenomena, and even blurring. An unsurmountable problem associated with the AdNN and the variants referred to above is that all of them require a quadratic number of computations. This is essentially because the NNs in each case are completely connected graphs. In this paper, we consider how the computations can be significantly reduced by merely using a linear number of computations. To achieve this, we extract from the original completely connected graph one of its spanning trees. We then address the problem of computing the weights for this spanning tree. This is done in such a manner that the modified tree-based NN has approximately the same input-output characteristics, and thus the new weights are themselves calculated using a gradient-based algorithm. By a detailed experimental analysis, we show that the new linear-time AdNN-like network possesses chaotic and PR properties for different settings. As far as we know, such a tree-based AdNN has not been reported, and the results given here are novel.
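    The key structural step described above is replacing the completely connected network with one of its spanning trees, which cuts the number of connections from n(n-1)/2 to n-1. The sketch below extracts a maximum-weight spanning tree from a random symmetric weight matrix with Prim's algorithm; choosing the tree by weight is an assumption for illustration, and the paper's gradient-based re-fitting of the tree weights is not shown.

        import random

        def max_spanning_tree(weights):
            """Prim's algorithm for a maximum-weight spanning tree of a complete weighted graph."""
            n = len(weights)
            in_tree, edges = {0}, []
            while len(in_tree) < n:
                u, v = max(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                           key=lambda e: weights[e[0]][e[1]])
                in_tree.add(v)
                edges.append((u, v, weights[u][v]))
            return edges

        # A random symmetric matrix standing in for trained all-to-all connection weights.
        random.seed(0)
        n = 6
        w = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                w[i][j] = w[j][i] = random.random()

        tree = max_spanning_tree(w)
        print(len(tree), "edges instead of", n * (n - 1) // 2)  # 5 instead of 15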

  6. Evaluation of Computational Method of High Reynolds Number Slurry Flow for Caverns Backfilling

    Energy Technology Data Exchange (ETDEWEB)

    Bettin, Giorgia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2015-05-01

    The abandonment of salt caverns used for brining or product storage poses a significant environmental and economic risk. Risk mitigation can in part be addressed by the process of backfilling, which can improve the cavern geomechanical stability and reduce the risk of fluid loss to the environment. This study evaluates a currently available computational tool, Barracuda, to simulate such processes as slurry flow at high Reynolds number with high particle loading. Using Barracuda software, a parametric sequence of simulations evaluated slurry flow at Reynolds numbers up to 15000 and loading up to 25%. Limitations arise from the long times required to run these simulations, due in particular to the mesh size requirement at the jet nozzle. This study has found that slurry-jet width and centerline velocities are functions of Reynolds number and volume fraction. The solid phase was found to spread less than the water phase, with a spreading rate smaller than 1, dependent on the volume fraction. Particle size distribution does seem to have a large influence on the jet flow development. This study constitutes a first step to understand the behavior of highly loaded slurries and their ultimate application to cavern backfilling.

  7. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
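
    A hedged sketch of the kind of cost/benefit comparison the abstract describes; the actual formulae are given in the paper, and the job counts, node count, staging time and hourly price below are invented purely for illustration.

import math

def local_serial_hours(n_jobs, hours_per_job):
    """Wall-clock time when every job runs one after another on a lab machine."""
    return n_jobs * hours_per_job

def cloud_hours(n_jobs, hours_per_job, n_nodes, staging_hours):
    """Wall-clock time for embarrassingly parallel jobs spread over n_nodes,
    plus the time to stage data in and out of cloud storage."""
    return math.ceil(n_jobs / n_nodes) * hours_per_job + staging_hours

def cloud_cost_usd(wall_hours, n_nodes, price_per_node_hour):
    return wall_hours * n_nodes * price_per_node_hour

local = local_serial_hours(n_jobs=200, hours_per_job=1.5)                            # 300 h
cloud = cloud_hours(n_jobs=200, hours_per_job=1.5, n_nodes=20, staging_hours=2.0)    # 17 h
print(local, cloud, cloud_cost_usd(cloud, n_nodes=20, price_per_node_hour=0.10))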

  8. Performance management of high performance computing for medical image processing in Amazon Web Services

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical- Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for- use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  9. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward Steven,

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks, such as memory latencies, occur that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  10. Report of the Snowmass T7 working group on high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    K. Ko; R. Ryne; P. Spentzouris

    2002-12-05

    The T7 Working Group on High Performance Computing (HPC) had more than 30 participants. During the three weeks at Snowmass there were about 30 presentations. This working group also had joint sessions with a number of other working groups, including E1 (Neutrino Factories and Muon Colliders), M1 (Muon Based Systems), M6 (High Intensity Proton Sources), T4 (Particle sources), T5 (Beam dynamics), and T8 (Advanced Accelerators). The topics that were discussed fall naturally into three areas: (1) HPC requirements for next-generation accelerator design, (2) state-of-the-art in HPC simulation of accelerator systems, and (3) applied mathematics and computer science activities related to the development of HPC tools that will be of use to the accelerator community (as well as other communities). This document summarizes the material mentioned above and includes recommendations for future HPC activities in the accelerator community. The relationship of those activities to the HENP/SciDAC project on 21st century accelerator simulation is also discussed.

  11. REPORT OF THE SNOWMASS T7 WORKING GROUP ON HIGH PERFORMANCE COMPUTING.

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Kwok

    2002-08-30

    The T7 Working Group on High Performance Computing (HPC) had more than 30 participants (listed in Section 6). During the three weeks at Snowmass there were about 30 presentations (listed in Section 7). This working group also had joint sessions with a number of other working groups, including E1 (Neutrino Factories and Muon Colliders), M1 (Muon Based Systems), M6 (High Intensity Proton Sources), T4 (Particle Sources), T5 (Beam dynamics), and T8 (Advanced Accelerators). The topics that were discussed fall naturally into three areas: (1) HPC requirements for next-generation accelerator design, (2) state-of-the-art in HPC simulation of accelerator systems, and (3) applied mathematics and computer science activities related to the development of HPC tools that will be of use to the accelerator community (as well as other communities). This document summarizes the material mentioned above and includes recommendations for future HPC activities in the accelerator community. The relationship of those activities to the HENP/SciDAC project on 21st century accelerator simulation is also discussed.

  12. Experimental Evaluation and Workload Characterization for High-Performance Computer Architectures

    Science.gov (United States)

    El-Ghazawi, Tarek A.

    1995-01-01

    This research was conducted in the context of the Joint NSF/NASA Initiative on Evaluation (JNNIE). JNNIE is an inter-agency research program that goes beyond typical benchmarking to provide in-depth evaluations and an understanding of the factors that limit the scalability of high-performance computing systems. Many NSF and NASA centers have participated in the effort. Our research effort was an integral part of implementing JNNIE in the NASA ESS grand challenge applications context. Our research work under this program was composed of three distinct, but related, activities: the evaluation of NASA ESS high-performance computing testbeds using the wavelet decomposition application; the evaluation of NASA ESS testbeds using astrophysical simulation applications; and the development of an experimental model for workload characterization for understanding workload requirements. In this report, we provide a summary of findings that covers all three parts, a list of the publications that resulted from this effort, and three appendices with the details of each of the studies using a key publication developed under the respective work.

  13. High-resolution x-ray computed tomography to understand ruminant phylogeny

    Science.gov (United States)

    Costeur, Loic; Schulz, Georg; Müller, Bert

    2014-09-01

    High-resolution X-ray computed tomography has become a vital technique for studying fossils down to the true micrometer level. Paleontological research requires the non-destructive analysis of internal structures of fossil specimens. We show how X-ray computed tomography enables us to visualize the inner ear of extinct and extant ruminants without destroying the skull. The inner ear, a sensory organ for hearing and balance, has a rather complex three-dimensional morphology and thus provides relevant phylogenetic information, something that to date has been shown essentially in primates. We made visible the inner ears of a set of living and fossil ruminants using the phoenix x-ray nanotom® m (GE Sensing and Inspection Technologies GmbH). Because the objects are highly absorbing, a tungsten target was used and the experiments were performed with a maximum accelerating voltage of 180 kV and a beam current of 30 μA. Possible stem ruminants of the living families are known in the fossil record, but extreme morphological convergence in external structures such as teeth strongly limits our understanding of the evolutionary history of this economically important group of animals. We thus investigate the inner ear to assess its phylogenetic potential for ruminants, and our first results show strong family-level morphological differences.

  14. Implementing Simulation Design of Experiments and Remote Execution on a High Performance Computing Cluster

    Science.gov (United States)

    2007-09-01

    Thesis: Implementing Simulation Design of Experiments and Remote Execution on a High Performance Computing Cluster, by Adam J. Peters, September 2007. Cited references include Kennard, R. W. & Stone, L. A. (1969), Computer Aided Design of Experiments, Technometrics, 11(1), 137-148, and Kleijnen, J. P. (2003), A user's guide to the...

  15. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these we report the 300 in this review that are consistent with guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  16. High-performance computational solutions in protein bioinformatics

    CERN Document Server

    Mrozek, Dariusz

    2014-01-01

    Recent developments in computer science enable algorithms previously perceived as too time-consuming to now be efficiently used for applications in bioinformatics and life sciences. This work focuses on proteins and their structures, protein structure similarity searching at main representation levels and various techniques that can be used to accelerate similarity searches. Divided into four parts, the first part provides a formal model of 3D protein structures for functional genomics, comparative bioinformatics and molecular modeling. The second part focuses on the use of multithreading for

  17. Using High Performance Computing to Support Water Resource Planning

    Energy Technology Data Exchange (ETDEWEB)

    Groves, David G. [RAND Corporation, Santa Monica, CA (United States); Lembert, Robert J. [RAND Corporation, Santa Monica, CA (United States); May, Deborah W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leek, James R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Syme, James [RAND Corporation, Santa Monica, CA (United States)

    2015-10-22

    In recent years, decision support modeling has embraced deliberation-with-analysis, an iterative process in which decisionmakers come together with experts to evaluate a complex problem and alternative solutions in a scientifically rigorous and transparent manner. Simulation modeling supports decisionmaking throughout this process; visualizations enable decisionmakers to assess how proposed strategies stand up over time in uncertain conditions. But running these simulation models on standard computers can be slow. This, in turn, can slow the entire decisionmaking process, interrupting valuable interaction between decisionmakers and analytics.

  18. High-performance computational condensed-matter physics in the cloud

    Science.gov (United States)

    Rehr, J. J.; Svec, L.; Gardner, J. P.; Prange, M. P.

    2009-03-01

    We demonstrate the feasibility of high performance scientific computation in condensed-matter physics using cloud computers as an alternative to traditional computational tools. The availability of these large, virtualized pools of compute resources raises the possibility of a new compute paradigm for scientific research with many advantages. For research groups, cloud computing provides convenient access to reliable, high performance clusters and storage, without the need to purchase and maintain sophisticated hardware. For developers, virtualization allows scientific codes to be pre-installed on machine images, facilitating control over the computational environment. Detailed tests are presented for the parallelized versions of the electronic structure code SIESTA [J. Soler et al., J. Phys.: Condens. Matter 14, 2745 (2002)] and for the x-ray spectroscopy code FEFF [A. Ankudinov et al., Phys. Rev. B 65, 104107 (2002)], including CPU, network, and I/O performance, using the Amazon EC2 Elastic Cloud.

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  20. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  1. Hyperspectral Aquatic Radiative Transfer Modeling Using a High-Performance Cluster Computing Based Approach

    Energy Technology Data Exchange (ETDEWEB)

    Fillippi, Anthony [Texas A&M University; Bhaduri, Budhendra L [ORNL; Naughton, III, Thomas J [ORNL; King, Amy L [ORNL; Scott, Stephen L [ORNL; Guneralp, Inci [Texas A&M University

    2012-01-01

    For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, sequential architecture required ~100 hours until termination, whereas a parallel approach required only ~2.5 hours (42 compute nodes) - a 40x speed-up. Tools developed for this parallel execution are discussed.
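
    The speed-up quoted above, recomputed directly; the parallel-efficiency figure is an added, assumed metric that does not appear in the abstract.

t_serial, t_parallel, nodes = 100.0, 2.5, 42   # hours, hours, compute nodes
speedup = t_serial / t_parallel                # 40x, as reported
efficiency = speedup / nodes                   # roughly 0.95
print(f"speed-up = {speedup:.0f}x, parallel efficiency = {efficiency:.0%}")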

  2. Hyperspectral Aquatic Radiative Transfer Modeling Using a High-Performance Cluster Computing-Based Approach

    Energy Technology Data Exchange (ETDEWEB)

    Filippi, Anthony M [ORNL; Bhaduri, Budhendra L [ORNL; Naughton, III, Thomas J [ORNL; King, Amy L [ORNL; Scott, Stephen L [ORNL; Guneralp, Inci [Texas A&M University

    2012-01-01

    For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, sequential architecture required ~100 hours until termination, whereas a parallel approach required only ~2.5 hours (42 compute nodes), a 40x speed-up. Tools developed for this parallel execution are discussed.

  3. Standardized Procedure Content And Data Structure Based On Human Factors Requirements For Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L

    2015-02-01

    Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that provides further enhancement of safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs to have the ability to perform logical operations in order to adjust to the inputs received from either users or real time data from plant status databases. Without the ability for logical operations the procedure is just an electronic copy of the paper-based procedure. In order to provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver all data and instructions to create the steps. The procedure should be broken down into basic elements and formatted in a standard method for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, which utilizes basic elements to build each step in the smart procedure. The attributes of each step will determine the type of functionality that the system will generate for that step. The CBPS will provide the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the need for the procedure writer to reprogram the CBPS. The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers as well as the
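
    A hypothetical illustration of the idea (the element and attribute names below are invented and are not the INL schema): a single procedure step encoded as XML, with a type attribute that a CBPS could use to decide whether to present information, request a decision, or accept input.

import xml.etree.ElementTree as ET

# Invented step markup for illustration; not the schema described in the record.
STEP_XML = """
<step id="3.1" type="decision">
  <instruction>Verify pump discharge pressure is within limits.</instruction>
  <reference doc="P-105" section="4.2"/>
  <options>
    <option value="yes" next="3.2"/>
    <option value="no"  next="3.4"/>
  </options>
</step>
"""

step = ET.fromstring(STEP_XML)
print(step.get("id"), step.get("type"))
print(step.findtext("instruction").strip())
for opt in step.iter("option"):
    print("option", opt.get("value"), "->", opt.get("next"))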

  4. High School Computer Science Education Paves the Way for Higher Education: The Israeli Case

    Science.gov (United States)

    Armoni, Michal; Gal-Ezer, Judith

    2014-01-01

    The gap between enrollments in higher education computing programs and the high-tech industry's demands is widely reported, and is especially prominent for women. Increasing the availability of computer science education in high school is one of the strategies suggested in order to address this gap. We look at the connection between exposure to…

  5. High School Computer Science Education Paves the Way for Higher Education: The Israeli Case

    Science.gov (United States)

    Armoni, Michal; Gal-Ezer, Judith

    2014-01-01

    The gap between enrollments in higher education computing programs and the high-tech industry's demands is widely reported, and is especially prominent for women. Increasing the availability of computer science education in high school is one of the strategies suggested in order to address this gap. We look at the connection between exposure…

  6. High School Computer Science Education Paves the Way for Higher Education: The Israeli Case

    Science.gov (United States)

    Armoni, Michal; Gal-Ezer, Judith

    2014-01-01

    The gap between enrollments in higher education computing programs and the high-tech industry's demands is widely reported, and is especially prominent for women. Increasing the availability of computer science education in high school is one of the strategies suggested in order to address this gap. We look at the connection between exposure to…

  7. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    Science.gov (United States)

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  8. The Relationship between Utilization of Computer Games and Spatial Abilities among High School Students

    Science.gov (United States)

    Motamedi, Vahid; Yaghoubi, Razeyah Mohagheghyan

    2015-01-01

    This study aimed at investigating the relationship between computer game use and spatial abilities among high school students. The sample consisted of 300 male high school students selected through multi-stage cluster sampling. Data gathering tools consisted of a researcher-made questionnaire (to collect information on computer game usage) and the…

  9. Computer Self-Efficacy among Senior High School Teachers in Ghana and the Functionality of Demographic Variables on Their Computer Self-Efficacy

    Science.gov (United States)

    Sarfo, Frederick Kwaku; Amankwah, Francis; Konin, Daniel

    2017-01-01

    The study is aimed at investigating 1) the level of computer self-efficacy among public senior high school (SHS) teachers in Ghana and 2) the functionality of teachers' age, gender, and computer experiences on their computer self-efficacy. Four hundred and seven (407) SHS teachers were used for the study. The "Computer Self-Efficacy"…

  10. theoretical basis for slurry computation and compounding in highly ...

    African Journals Online (AJOL)

    2012-07-02

    There is a non-linear relationship between cross-sectional area, volume ratio and ... increases compared to an equivalent volume required for an equivalent true vertical depth of a vertical well. ... Under static equilibrium, free water occurrence and ... water, on occurrence, would tend to contact a large area.

  11. THE FAILURE OF TCP IN HIGH-PERFORMANCE COMPUTATIONAL GRIDS

    Energy Technology Data Exchange (ETDEWEB)

    W. FENG; ET AL

    2000-08-01

    Distributed computational grids depend on TCP to ensure reliable end-to-end communication between nodes across the wide-area network (WAN). Unfortunately, TCP performance can be abysmal even when buffers on the end hosts are manually optimized. Recent studies blame the self-similar nature of aggregate network traffic for TCP's poor performance because such traffic is not readily amenable to statistical multiplexing in the Internet, and hence computational grids. In this paper we identify a source of self-similarity previously ignored, a source that is readily controllable--TCP. Via an experimental study, we examine the effects of the TCP stack on network traffic using different implementations of TCP. We show that even when aggregate application traffic ought to smooth out as more applications' traffic is multiplexed, TCP induces burstiness into the aggregate traffic load, thus adversely impacting network performance. Furthermore, our results indicate that TCP performance will worsen as WAN speeds continue to increase.

  12. Requirements for accurate estimation of anisotropic material parameters by magnetic resonance elastography: A computational study.

    Science.gov (United States)

    Tweten, D J; Okamoto, R J; Bayly, P V

    2017-01-17

    To establish the essential requirements for characterization of a transversely isotropic material by magnetic resonance elastography (MRE). Three methods for characterizing nearly incompressible, transversely isotropic (ITI) materials were used to analyze data from closed-form expressions for traveling waves, finite-element (FE) simulations of waves in homogeneous ITI material, and FE simulations of waves in heterogeneous material. Key properties are the complex shear modulus μ2, the shear anisotropy ϕ = μ1/μ2 - 1, and the tensile anisotropy ζ = E1/E2 - 1. Each method provided good estimates of ITI parameters when both slow and fast shear waves with multiple propagation directions were present. No method gave accurate estimates when the displacement field contained only slow shear waves, only fast shear waves, or waves with only a single propagation direction. Methods based on directional filtering are robust to noise and include explicit checks of propagation and polarization. Curl-based methods led to more accurate estimates in low noise conditions. Parameter estimation in heterogeneous materials is challenging for all methods. Multiple shear waves, both slow and fast, with different propagation directions, must be present in the displacement field for accurate parameter estimates in ITI materials. Experimental design and data analysis can ensure that these requirements are met. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
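
    The two anisotropy measures defined above, written out directly; the example moduli are placeholders rather than values from the study.

def shear_anisotropy(mu1, mu2):
    """phi = mu1/mu2 - 1 for a nearly incompressible transversely isotropic material."""
    return mu1 / mu2 - 1.0

def tensile_anisotropy(E1, E2):
    """zeta = E1/E2 - 1."""
    return E1 / E2 - 1.0

print(shear_anisotropy(mu1=3.3e3, mu2=2.2e3))    # 0.5
print(tensile_anisotropy(E1=9.0e3, E2=6.0e3))    # 0.5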

  13. C2DF: High Rate DDOS filtering method in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Pourya Shamsolmoali

    2014-08-01

    Full Text Available Distributed Denial of Service (DDOS) attacks have become one of the main threats in the cloud environment. A DDOS attack can do large-scale damage to resources and to genuine cloud users' access to those resources. Traditional defense systems cannot easily be applied in cloud computing because of their relatively low efficiency and large storage requirements. In this paper we offer a data mining and neural network technique, trained to detect and filter DDOS attacks. For the simulation experiments we used the KDD Cup dataset and our lab datasets. Our proposed model requires little storage and is capable of fast detection. The obtained results indicate that our model can detect and filter most types of TCP attacks. Detection accuracy was the metric used to evaluate the performance of our proposed model. The simulation results show that our algorithms achieve high detection accuracy (97%) with few false alarms.
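
    Detection accuracy, the evaluation metric named above, computed from a confusion matrix; the counts are invented for illustration and are not the paper's data.

def detection_accuracy(tp, tn, fp, fn):
    """Fraction of correctly classified flows, both attack and benign."""
    return (tp + tn) / (tp + tn + fp + fn)

print(f"{detection_accuracy(tp=960, tn=980, fp=20, fn=40):.2%}")   # 97.00%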

  14. Opendda: a Novel High-Performance Computational Framework for the Discrete Dipole Approximation

    CERN Document Server

    Donald, James Mc; Jennings, S Gerard

    2009-01-01

    This work presents a highly optimized computational framework for the Discrete Dipole Approximation, a numerical method for calculating the optical properties associated with a target of arbitrary geometry that is widely used in atmospheric, astrophysical and industrial simulations. Core optimizations include the bit-fielding of integer data and iterative methods that complement a new Discrete Fourier Transform (DFT) kernel, which efficiently calculates the matrix vector products required by these iterative solution schemes. The new kernel performs the requisite 3-D DFTs as ensembles of 1-D transforms, and by doing so, is able to reduce the number of constituent 1-D transforms by 60% and the memory by over 80%. The optimizations also facilitate the use of parallel techniques to further enhance the performance. Complete OpenMP-based shared-memory and MPI-based distributed-memory implementations have been created to take full advantage of the various architectures. Several benchmarks of the new framework indica...
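
    An illustration of the decomposition the abstract refers to (not the OpenDDA implementation): a 3-D DFT can be computed as successive 1-D FFTs along each axis; the 60% reduction in constituent transforms comes from kernel-specific optimizations that are not shown here.

import numpy as np

a = np.random.default_rng(1).standard_normal((8, 8, 8))
full = np.fft.fftn(a)                                                    # direct 3-D DFT
by_axes = np.fft.fft(np.fft.fft(np.fft.fft(a, axis=0), axis=1), axis=2)  # ensemble of 1-D FFTs
print(np.allclose(full, by_axes))                                        # True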

  15. Computational approaches and metrics required for formulating biologically realistic nanomaterial pharmacokinetic models

    Science.gov (United States)

    Riviere, Jim E.; Scoglio, Caterina; Sahneh, Faryad D.; Monteiro-Riviere, Nancy A.

    2013-01-01

    The field of nanomaterial pharmacokinetics is in its infancy, with major advances largely restricted by a lack of biologically relevant metrics, fundamental differences between particles and small molecules of organic chemicals and drugs relative to biological processes involved in disposition, a scarcity of sufficiently rich and characterized in vivo data and a lack of computational approaches to integrating nanomaterial properties to biological endpoints. A central concept that links nanomaterial properties to biological disposition, in addition to their colloidal properties, is the tendency to form a biocorona which modulates biological interactions including cellular uptake and biodistribution. Pharmacokinetic models must take this crucial process into consideration to accurately predict in vivo disposition, especially when extrapolating from laboratory animals to humans since allometric principles may not be applicable. The dynamics of corona formation, which modulates biological interactions including cellular uptake and biodistribution, is thereby a crucial process involved in the rate and extent of biodisposition. The challenge will be to develop a quantitative metric that characterizes a nanoparticle's surface adsorption forces that are important for predicting biocorona dynamics. These types of integrative quantitative approaches discussed in this paper for the dynamics of corona formation must be developed before realistic engineered nanomaterial risk assessment can be accomplished.

  16. XSTREAM: A Highly Efficient High Speed Real-time Satellite Data Acquisition and Processing System using Heterogeneous Computing

    Science.gov (United States)

    Pramod Kumar, K.; Mahendra, P.; Ramakrishna Reddy, V.; Tirupathi, T.; Akilan, A.; Usha Devi, R.; Anuradha, R.; Ravi, N.; Solanki, S. S.; Achary, K. K.; Satish, A. L.; Anshu, C.

    2014-11-01

    In the last decade, the remote sensing community has observed a significant growth in the number of satellites, sensors and their resolutions, thereby increasing the volume of data to be processed each day. Satellite data processing is a complex and time consuming activity. It consists of various tasks, such as decode, decrypt, decompress, radiometric normalization, stagger corrections, ephemeris data processing for geometric corrections etc., and finally writing of the product in the form of an image file. Each task in the processing chain is sequential in nature and has different computing needs. Conventionally the processes are cascaded in a well organized workflow to produce the data products, which are executed on general purpose high-end servers / workstations in an offline mode. Hence, these systems are considered to be ineffective for real-time applications that require quick response and just-in-time decision making, such as disaster management, homeland security and so on. This paper discusses a novel approach to process the data online (as the data is being acquired) using a heterogeneous computing platform, namely XSTREAM, which has COTS hardware of CPUs, GPUs and FPGA. This paper focuses on the process architecture, re-engineering aspects and mapping of tasks to the right computing device within the XSTREAM system, which makes it an ideal cost-effective platform for acquiring and processing satellite payload data in real time and displaying the products in original resolution for quick response. The system has been tested for the IRS CARTOSAT and RESOURCESAT series of satellites, which have a maximum data downlink speed of 210 Mbps.

  17. A C++11 implementation of arbitrary-rank tensors for high-performance computing

    Science.gov (United States)

    Aragón, Alejandro M.

    2014-11-01

    This article discusses an efficient implementation of tensors of arbitrary rank by using some of the idioms introduced by the recently published C++ ISO Standard (C++11). With the aim of providing a basic building block for high-performance computing, a single Array class template is carefully crafted, from which vectors, matrices, and even higher-order tensors can be created. An expression template facility is also built around the array class template to provide convenient mathematical syntax. As a result, by using templates, an extra high-level layer is added to the C++ language when dealing with algebraic objects and their operations, without compromising performance. The implementation is tested running on both CPU and GPU. Catalogue identifier: AESA_v1_1. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AESA_v1_1.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland. Licensing provisions: GNU Lesser General Public License, version 3. No. of lines in distributed program, including test data, etc.: 12 376. No. of bytes in distributed program, including test data, etc.: 81 669. Distribution format: tar.gz. Programming language: C++. Computer: All modern architectures. Operating system: Linux/Unix/Mac OS. RAM: Problem dependent. Classification: 5. External routines: GNU CMake build system and BLAS implementation. NVIDIA CUBLAS for GPU computing. Does the new version supersede the previous version?: Yes. Catalogue identifier of previous version: AESA_v1_0. Journal reference of previous version: Comput. Phys. Comm. 185 (2014) 1681. Nature of problem: Tensors are a basic building block for any program in scientific computing. Yet, tensors are not a built-in component of the C++ programming language. Solution method: An arbitrary-rank tensor class template is crafted by using the new features introduced by the C++11 set of requirements. In addition, an entire expression template facility is built on top, to provide mathematical

  18. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    Science.gov (United States)

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450

  19. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    Science.gov (United States)

    Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.

  20. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    Directory of Open Access Journals (Sweden)

    David K Brown

    Full Text Available Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.