WorldWideScience

Sample records for high consequence computer

  1. Ensuring critical event sequences in high consequence computer based systems as inspired by path expressions

    Energy Technology Data Exchange (ETDEWEB)

    Kidd, M.E.C.

    1997-02-01

    The goal of our work is to provide a high level of confidence that critical software-driven event sequences are maintained in the face of hardware failures, malevolent attacks, and harsh or unstable operating environments. This will be accomplished by providing dynamic fault management measures directly to the software developer and to their varied development environments. The methodology employed here is inspired by previous work in path expressions. This paper discusses the perceived problems, gives a brief overview of path expressions, presents the proposed methods, and discusses the differences between the proposed methods and traditional path expression usage and implementation.
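
    The record above describes runtime enforcement of critical event sequences inspired by path expressions. As a purely illustrative sketch of the general idea (not the authors' methodology), the following fragment shows a minimal monitor that accepts events only in a prescribed order; the event names are hypothetical.

```python
# Illustrative sketch only -- not the methodology from the record above.
# A tiny runtime monitor that enforces a required event order, in the
# spirit of a path expression such as "arm; target; fire".

class SequenceError(RuntimeError):
    """Raised when an event arrives out of the required order."""

class SequenceMonitor:
    def __init__(self, required_order):
        self.required_order = list(required_order)
        self.position = 0

    def observe(self, event):
        # Reject any event that is not the next one in the critical sequence.
        if self.position >= len(self.required_order) or event != self.required_order[self.position]:
            raise SequenceError(f"unexpected event {event!r} at step {self.position}")
        self.position += 1

    @property
    def complete(self):
        return self.position == len(self.required_order)

if __name__ == "__main__":
    monitor = SequenceMonitor(["arm", "target", "fire"])
    for event in ["arm", "target", "fire"]:
        monitor.observe(event)
    print("sequence completed:", monitor.complete)
```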

  2. The consequences of quantum computing

    OpenAIRE

    Malenko, Kokan

    2017-01-01

    Quantum computing is a new promising field that might bring great improvements to present day technology. But it might also break some currently used cryptography algorithms. Usable and stable quantum computers do not exist yet, but their potential power and usefulness have spurred great interest. In this work, we explain the basic properties of a quantum computer, which uses the following quantum properties: superposition, interference and entanglement. We talk about qubits,...

  3. An examination of the consequences in high consequence operations

    Energy Technology Data Exchange (ETDEWEB)

    Spray, S.D.; Cooper, J.A.

    1996-06-01

    Traditional definitions of risk partition concern into the probability of occurrence and the consequence of the event. Most safety analyses focus on probabilistic assessment of an occurrence and the amount of some measurable result of the event, but the real meaning of the "consequence" partition is usually afforded less attention. In particular, acceptable social consequence (consequence accepted by the public) frequently differs significantly from the metrics commonly proposed by risk analysts. This paper addresses some of the important system development issues associated with consequences, focusing on "high consequence operations safety."
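
    As a hedged aside on the risk partition discussed above, the toy calculation below (hypothetical numbers, not from the paper) shows why two scenarios with identical probability-times-consequence products can still be perceived very differently by the public.

```python
# Illustrative sketch only: the familiar partition risk = probability x consequence,
# and why equal "expected" risk can hide very different consequence profiles.

scenarios = {
    "frequent, low consequence": {"probability_per_year": 1e-2, "consequence": 1.0},
    "rare, high consequence":    {"probability_per_year": 1e-6, "consequence": 1e4},
}

for name, s in scenarios.items():
    expected = s["probability_per_year"] * s["consequence"]
    print(f"{name}: expected annual consequence = {expected:.4f}")
# Both expected values are 0.01, yet public acceptance of the two scenarios typically differs.
```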

  4. Consequence Prioritization Process for Potential High Consequence Events (HCE)

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Sarah G. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-10-31

    This document describes the process for Consequence Prioritization, the first phase of the Consequence-Driven Cyber-Informed Engineering (CCE) framework. The primary goal of Consequence Prioritization is to identify potential disruptive events that would significantly inhibit an organization's ability to provide the critical services and functions deemed fundamental to its business mission. These disruptive events, defined as High Consequence Events (HCE), include both events that have already occurred and events that could be realized through an attack on critical infrastructure owner assets. While other efforts have been initiated to identify and mitigate disruptive events at the national security level, such as Presidential Policy Directive 41 (PPD-41), this process is intended to be used by individual organizations to evaluate events that fall below the national security threshold. Described another way, Consequence Prioritization considers threats greater than those addressable by standard cyber hygiene and includes consideration of events that go beyond a traditional continuity of operations (COOP) perspective. Finally, Consequence Prioritization is most successful when organizations adopt a multi-disciplinary approach, engaging both cyber security and engineering expertise, as in-depth engineering perspectives are required to recognize, characterize, and mitigate HCEs. Figure 1 provides a high-level overview of the prioritization process.
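
    The following sketch is purely illustrative: it shows one generic way an organization might rank candidate events by a weighted consequence score. The event names, criteria, and weights are hypothetical and are not part of the CCE Consequence Prioritization process itself.

```python
# Illustrative sketch only -- candidate events, criteria, and weights are hypothetical.
# Ranks candidate disruptive events by a simple weighted consequence score.

candidate_events = {
    "loss of primary control system":   {"safety": 5, "mission": 5, "recovery_days": 30},
    "corruption of billing database":   {"safety": 1, "mission": 3, "recovery_days": 7},
    "spoofed sensor feed to operators": {"safety": 4, "mission": 4, "recovery_days": 14},
}

weights = {"safety": 0.5, "mission": 0.3, "recovery_days": 0.2}

def consequence_score(attrs):
    # Normalize recovery time to a rough 1-5 scale before weighting.
    recovery_scale = min(5, attrs["recovery_days"] / 7)
    return (weights["safety"] * attrs["safety"]
            + weights["mission"] * attrs["mission"]
            + weights["recovery_days"] * recovery_scale)

for name, attrs in sorted(candidate_events.items(),
                          key=lambda kv: consequence_score(kv[1]), reverse=True):
    print(f"{consequence_score(attrs):.2f}  {name}")
```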

  5. Low-Incidence, High-Consequence Pathogens

    Centers for Disease Control (CDC) Podcasts

    2014-02-21

    Dr. Stephan Monroe, a deputy director at CDC, discusses the impact of low-incidence, high-consequence pathogens globally.  Created: 2/21/2014 by National Center for Emerging and Zoonotic Infectious Diseases (NCEZID).   Date Released: 2/26/2014.

  6. Assuring quality in high-consequence engineering

    Energy Technology Data Exchange (ETDEWEB)

    Hoover, Marcey L.; Kolb, Rachel R.

    2014-03-01

    In high-consequence engineering organizations, such as Sandia, quality assurance may be heavily dependent on staff competency. Competency-dependent quality assurance models are at risk when the environment changes, as it has with increasing attrition rates, budget and schedule cuts, and competing program priorities. Risks in Sandia's competency-dependent culture can be mitigated through changes to hiring, training, and customer engagement approaches to manage people, partners, and products. Sandia's technical quality engineering organization has been able to mitigate corporate-level risks by driving changes that benefit all departments, and in doing so has assured Sandia's commitment to excellence in high-consequence engineering and national service.

  7. Achieving strategic surety for high consequence software

    Energy Technology Data Exchange (ETDEWEB)

    Pollock, G.M.

    1996-09-01

    A strategic surety roadmap for high consequence software systems under the High Integrity Software (HIS) Program at Sandia National Laboratories guides research in identifying methodologies to improve software surety. Selected research tracks within this roadmap are identified and described, detailing current technology and outlining the advancements to be pursued over the coming decade to reach HIS goals. The tracks discussed herein focus on Correctness by Design and System Immunology(TM). Specific projects are discussed, with greater detail given on projects involving Correct Specification via Visualization, Synthesis, & Analysis; Visualization of Abstract Objects; and Correct Implementation of Components.

  8. Economic consequences of high throughput maskless lithography

    Science.gov (United States)

    Hartley, John G.; Govindaraju, Lakshmi

    2005-11-01

    Many people in the semiconductor industry bemoan the high costs of masks and view mask cost as one of the significant barriers to bringing new chip designs to market. All that is needed is a viable maskless technology and the problem will go away. Numerous sites around the world are working on maskless lithography but inevitably, the question asked is "Wouldn't a one wafer per hour maskless tool make a really good mask writer?" Of course, the answer is yes; the hesitation you hear in the answer isn't based on technology concerns, it's financial. The industry needs maskless lithography because mask costs are too high. Mask costs are too high because mask pattern generators (PGs) are slow and expensive. If mask PGs become much faster, mask costs go down, the maskless market goes away and the PG supplier is faced with an even smaller tool demand from the mask shops. Technical success becomes financial suicide - or does it? In this paper we will present the results of a model that examines some of the consequences of introducing high throughput maskless pattern generation. Specific features in the model include tool throughput for masks and wafers, market segmentation by node for masks and wafers, and mask cost as an entry barrier to new chip designs. How does the availability of low cost masks and maskless tools affect the industry's tool makeup, and what is the ultimate potential market for high throughput maskless pattern generators?
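
    To make the economic argument concrete, the toy model below (all figures hypothetical, not taken from the paper's model) compares per-design lithography cost with a mask set versus a maskless tool, showing why low-volume designs are the ones most sensitive to mask cost.

```python
# Illustrative toy model only -- all numbers are hypothetical, not from the record above.
# Compares per-design lithography cost with a mask set versus a maskless tool.

def cost_with_masks(wafers, mask_set_cost=1.5e6, cost_per_wafer=50.0):
    # One-time mask set cost plus a low per-wafer exposure cost.
    return mask_set_cost + wafers * cost_per_wafer

def cost_maskless(wafers, cost_per_wafer=500.0):
    # No mask set, but a much higher per-wafer write cost.
    return wafers * cost_per_wafer

for wafers in (100, 1_000, 10_000):
    print(f"{wafers:6d} wafers: with masks ${cost_with_masks(wafers):,.0f}, "
          f"maskless ${cost_maskless(wafers):,.0f}")
# At low wafer volumes the mask set dominates cost; at high volumes masks win.
```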

  9. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded the dreams of its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  10. High-End Scientific Computing

    Science.gov (United States)

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  11. High assurance services computing

    CERN Document Server

    2009-01-01

    Covers service-oriented technologies in different domains, including high assurance systems. Assists software engineers from industry and government laboratories who develop mission-critical software, and simultaneously provides academia with a practitioner's outlook on the problems of high-assurance software development.

  12. Mathematical Modelling of Some Consequences of Hurricanes: The Proposal of Research Project, Mathematical and Computational Methods

    OpenAIRE

    J. Nedoma

    2012-01-01

    The chapter "Mathematical Modelling of Some Consequences of Hurricanes: The Proposal of Research Project, Mathematical and Computational Methods. " is the part of the book "Eddies and Hurricanes: Formation, Triggers and Impact. Hauppauge : Nova Science Publishers, 2012 - (Tarasov, A.; Demidov, M.)" and it deals with mechanisms of hurricanes and their consequences. The proposal of research project and then mathematical models concerning some consequences of hurricanes are presented and discuss...

  13. The High Performance Computing Initiative

    Science.gov (United States)

    Holcomb, Lee B.; Smith, Paul H.; Macdonald, Michael J.

    1991-01-01

    The paper discusses NASA High Performance Computing Initiative (HPCI), an essential component of the Federal High Performance Computing Program. The HPCI program is designed to provide a thousandfold increase in computing performance, and apply the technologies to NASA 'Grand Challenges'. The Grand Challenges chosen include integrated multidisciplinary simulations and design optimizations of aerospace vehicles throughout the mission profiles; the multidisciplinary modeling and data analysis of the earth and space science physical phenomena; and the spaceborne control of automated systems, handling, and analysis of sensor data and real-time response to sensor stimuli.

  14. Consequences and Limitations of Conventional Computers and their Solutions through Quantum Computers

    OpenAIRE

    Nilesh BARDE; Thakur, Deepak; Pranav BARDAPURKAR; Sanjaykumar DALVI

    2012-01-01

    Quantum computing is a current topic of research in the field of computational science that uses the principles of quantum mechanics. Quantum computers will be much more powerful than classical computers due to their enormous computational speed. Recent developments in quantum computers, which are based on the laws of quantum mechanics, show different ways of performing efficient calculations, along with various results that are not possible on classical computers in an efficient peri...

  15. Montana's High School Dropouts: Examining the Fiscal Consequences. State Research

    Science.gov (United States)

    Stuit, David A.; Springer, Jeffrey A.

    2010-01-01

    This report analyzes the economic and social costs of the high school dropout problem in Montana from the perspective of a state taxpayer. The majority of the authors' analysis considers the consequences of this problem in terms of labor market, tax revenue, and public service costs. In quantifying these costs, the authors seek to inform public…

  16. California's High School Dropouts: Examining the Fiscal Consequences

    Science.gov (United States)

    Stuit, David A.; Springer, Jeffrey A.

    2010-01-01

    This report analyzes the economic and social costs of the high school dropout problem in California from the perspective of a state taxpayer. The authors' analysis considers the consequences of this problem in terms of labor market, tax revenue, public health, and incarceration costs. The authors' quantification of these costs reveals the sizeable…

  17. Consequences and Limitations of Conventional Computers and their Solutions through Quantum Computers

    Directory of Open Access Journals (Sweden)

    Nilesh BARDE

    2012-08-01

    Quantum computing is a current topic of research in the field of computational science that uses the principles of quantum mechanics. Quantum computers will be much more powerful than classical computers due to their enormous computational speed. Recent developments in quantum computers, which are based on the laws of quantum mechanics, show different ways of performing efficient calculations, along with various results that are not possible on classical computers in an efficient period of time. One of the most striking results obtained on quantum computers is the prime factorization of large integers in polynomial time. The idea of using quantum mechanics for computation is outlined briefly in the present work, reflecting the importance and advantages of quantum computers, the next generation of 21st-century computers, in terms of both the cost and the time required for computation. The paper also presents a quantum computer simulator that demonstrates the limitations of classical computers with respect to computing time and the number of digits of the composite integer whose prime factors are calculated.
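
    As a small classical illustration of the scaling argument above (not the authors' simulator), the sketch below times naive trial-division factorization of semiprimes of increasing size; the rapid growth in effort is what a polynomial-time quantum algorithm such as Shor's avoids.

```python
# Illustrative sketch only: a classical trial-division factorizer, timed on
# semiprimes of increasing size, to show the growth in classical effort that
# the record above contrasts with polynomial-time quantum factoring.

import time

def trial_division(n):
    # Return the prime factors of n by checking divisors up to sqrt(n).
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

if __name__ == "__main__":
    semiprimes = [101 * 103, 10007 * 10009, 1299709 * 1299721]
    for n in semiprimes:
        t0 = time.perf_counter()
        f = trial_division(n)
        print(f"{n}: factors {f} in {time.perf_counter() - t0:.4f} s")
```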

  18. High Performance Computing at NASA

    Science.gov (United States)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.

  19. DOE research in utilization of high-performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure.

  20. Proceedings of the High Consequence Operations Safety Symposium

    Energy Technology Data Exchange (ETDEWEB)

    1994-12-01

    Many organizations face high consequence safety situations where unwanted stimuli due to accidents, catastrophes, or inadvertent human actions can cause disasters. In order to improve interaction among such organizations and to build on each other's experience, preventive approaches, and assessment techniques, the High Consequence Operations Safety Symposium was held July 12-14, 1994 at Sandia National Laboratories, Albuquerque, New Mexico. The symposium was conceived by Dick Schwoebel, Director of the SNL Surety Assessment Center. Stan Spray, Manager of the SNL System Studies Department, planned strategy and made many of the decisions necessary to bring the concept to fruition on a short time scale. Angela Campos and about 60 people worked on the nearly limitless implementation and administrative details. The initial symposium (future symposia are planned) was structured around 21 plenary presentations in five methodology-oriented sessions, along with a welcome address, a keynote address, and a banquet address. Poster papers addressing the individual session themes were available before and after the plenary sessions and during breaks.

  1. Physiological consequences of military high-speed boat transits.

    Science.gov (United States)

    Myers, Stephen D; Dobbins, Trevor D; King, Stuart; Hall, Benjamin; Ayling, Ruth M; Holmes, Sharon R; Gunston, Tom; Dyson, Rosemary

    2011-09-01

    The purpose of this study was to investigate the consequences of a high-speed boat transit on physical performance. Twenty-four Royal Marines were randomly assigned to a control (CON) or transit (TRAN) group. The CON group sat onshore for 3 h whilst the TRAN group completed a 3-h transit in open boats running side-by-side, at 40 knots in moderate-to-rough seas, with boat deck and seat-pan acceleration recorded. Performance tests (exhaustive shuttle-run, handgrip, vertical-jump, push-up) were completed pre- and immediately post-transit/sit, with peak heart rate (HRpeak) and rating of perceived exertion (RPE) recorded. Serial blood samples (pre, 24, 36, 48, 72 h) were analyzed for creatine kinase (CK) activity. The transit was typified by frequent high shock impacts but moderate mean heart rates ...

  2. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. By using virtual computing clusters, a runtime environment for high performance computing can also be implemented efficiently in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  3. Integration of human reliability analysis into the high consequence process

    Energy Technology Data Exchange (ETDEWEB)

    Houghton, F.K.; Morzinski, J.

    1998-12-01

    When performing a hazards analysis (HA) for a high consequence process, human error often plays a significant role. In order to integrate human error into the hazards analysis, a human reliability analysis (HRA) is performed. Human reliability is the probability that a person will correctly perform a system-required activity in a required time period and will perform no extraneous activity that will affect the correct performance. Even though human error is a very complex subject that can only approximately be addressed in risk assessment, an attempt must be made to estimate the effect of human errors. The HRA provides data that can be incorporated in the hazards analysis event. This paper will discuss the integration of HRA into a HA for the disassembly of a high explosive component. The process was designed to use a retaining fixture to hold the high explosive in place during a rotation of the component. This tool was designed as a redundant safety feature to help prevent a drop of the explosive. This paper will use the retaining fixture to demonstrate the phases of the HRA methodology. The first phase is to perform a task analysis. The second phase is the identification of the potential human functions, both cognitive and psychomotor, performed by the worker. During the last phase the human errors are quantified. In reality, the HRA process is an iterative process in which the stages overlap and information gathered in one stage may be used to refine a previous stage. The rationale for the decision to use or not use the retaining fixture and the role the HRA played in the decision will be discussed.
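
    The sketch below is illustrative only: the step names and error probabilities are hypothetical, not values from this hazards analysis. It shows the simplest way HRA-style per-step probabilities roll up into a task-level human error probability.

```python
# Illustrative arithmetic only; step names and probabilities are hypothetical,
# not values from the hazards analysis described above. Rolls per-step human
# error probabilities up into a task-level probability of at least one error.

steps = {
    "install retaining fixture": 1e-3,
    "verify fixture engagement": 5e-3,
    "rotate component":          1e-4,
}

p_no_error = 1.0
for step, p_err in steps.items():
    p_no_error *= (1.0 - p_err)

print(f"probability of at least one human error in the task: {1 - p_no_error:.2e}")
```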

  4. High performance computing and communications program

    Science.gov (United States)

    Holcomb, Lee

    1992-01-01

    A review of the High Performance Computing and Communications (HPCC) program is provided in vugraph format. The goals and objectives of this federal program are as follows: extend U.S. leadership in high performance computing and computer communications; disseminate the technologies to speed innovation and to serve national goals; and spur gains in industrial competitiveness by making high performance computing integral to design and production.

  5. High performance computing at Sandia National Labs

    Energy Technology Data Exchange (ETDEWEB)

    Cahoon, R.M.; Noe, J.P.; Vandevender, W.H.

    1995-10-01

    Sandia's High Performance Computing Environment requires a hierarchy of resources ranging from desktop, to department, to centralized, and finally to very high-end corporate resources capable of teraflop performance linked via high-capacity Asynchronous Transfer Mode (ATM) networks. The mission of the Scientific Computing Systems Department is to provide the support infrastructure for an integrated corporate scientific computing environment that will meet Sandia's needs in high-performance and midrange computing, network storage, operational support tools, and systems management. This paper describes current efforts at SNL/NM to expand and modernize centralized computing resources in support of this mission.

  6. Computing support for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Avery, P.; Yelton, J. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-01

    This computing proposal (Task S) is submitted separately but in support of the High Energy Experiment (CLEO, Fermilab, CMS) and Theory tasks. The authors have built a very strong computing base at Florida over the past 8 years. In fact, computing has been one of the main contributions to their experimental collaborations, involving not just computing capacity for running Monte Carlos and data reduction, but participation in many computing initiatives, industrial partnerships, computing committees and collaborations. These facts justify the submission of a separate computing proposal.

  7. Vecpar'08: High Performance Computing for Computational Science

    OpenAIRE

    Invernizzi, Alice

    2008-01-01

    The 8th International Meeting of High Performance Computing for Computational Science was held between 24 and 27 June 2008 in Toulouse at ENSEEIHT, the École Nationale Supérieure d'Électrotechnique, d'Électronique, d'Informatique, d'Hydraulique et des Télécommunications of the University of Toulouse. The conference was an opportunity for the gathering of an enlarged scientific community made up of mathematicians, physicists and engineers resorting to computer simulations for the analysis of complex syst...

  8. Statistical surrogate models for prediction of high-consequence climate change.

    Energy Technology Data Exchange (ETDEWEB)

    Constantine, Paul; Field, Richard V., Jr.; Boslough, Mark Bruce Elrick

    2011-09-01

    In safety engineering, performance metrics are defined using probabilistic risk assessments focused on the low-probability, high-consequence tail of the distribution of possible events, as opposed to best estimates based on central tendencies. We frame the climate change problem and its associated risks in a similar manner. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We therefore propose the use of specialized statistical surrogate models (SSMs) for the purpose of exploring the probability law of various climate variables of interest. An SSM differs from a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field. The SSM can be calibrated to available spatial and temporal data from existing climate databases, e.g., the Program for Climate Model Diagnosis and Intercomparison (PCMDI), or to a collection of outputs from a General Circulation Model (GCM), e.g., the Community Earth System Model (CESM) and its predecessors. Because of its reduced size and complexity, the realization of a large number of independent model outputs from an SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework is developed to provide quantitative measures of confidence, via Bayesian credible intervals, in the use of the proposed approach to assess these risks.
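
    As a hedged illustration of the surrogate idea (a toy Gaussian stand-in, not the paper's calibrated space/time random field), the sketch below draws many cheap samples and estimates a low-probability tail exceedance together with a simple uncertainty band.

```python
# Illustrative sketch only: a toy Gaussian "surrogate" standing in for a climate
# variable. Many cheap samples make a low-probability, high-consequence tail
# exceedance estimable; the distribution and threshold are hypothetical.

import math
import random

random.seed(0)
N = 200_000
threshold = 3.5          # hypothetical "high consequence" exceedance level
samples = (random.gauss(0.0, 1.0) for _ in range(N))
exceed = sum(1 for x in samples if x > threshold)

p_hat = exceed / N
stderr = math.sqrt(p_hat * (1 - p_hat) / N)
print(f"estimated tail probability: {p_hat:.2e} +/- {1.96 * stderr:.2e} (95% band)")
```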

  9. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relative low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g., computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  10. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described.Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  11. Performance tuning for high performance computing systems

    OpenAIRE

    Pahuja, Himanshu

    2017-01-01

    A Distributed System is composed by integration between loosely coupled software components and the underlying hardware resources that can be distributed over the standard internet framework. High Performance Computing used to involve utilization of supercomputers which could churn a lot of computing power to process massively complex computational tasks, but is now evolving across distributed systems, thereby having the ability to utilize geographically distributed computing resources. We...

  12. Platform for High-Assurance Cloud Computing

    Science.gov (United States)

    2016-06-01

    Platform for High-Assurance Cloud Computing. Cornell University, June 2016. Final technical report (dates covered: September 2011 - December 2015); approved for public release. Abstract: Cornell's MRC effort undertook to help the military leverage cloud computing (elasticity, better data

  13. Computer proficiency questionnaire: assessing low and high computer proficient seniors.

    Science.gov (United States)

    Boot, Walter R; Charness, Neil; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-06-01

    Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. The CPQ demonstrated excellent reliability (Cronbach's α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults.
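
    For readers unfamiliar with the reliability statistic cited above, the sketch below computes Cronbach's alpha for a small made-up item-response matrix; the data are hypothetical, not the CPQ sample.

```python
# Illustrative sketch only: Cronbach's alpha for a small, made-up item-response
# matrix (rows = respondents, columns = questionnaire items). Standard formula,
# not the CPQ data from the record above.

from statistics import variance

responses = [
    [4, 5, 4, 3],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 4, 3, 3],
    [1, 2, 2, 1],
]

k = len(responses[0])                       # number of items
items = list(zip(*responses))               # column-wise view of the matrix
item_vars = [variance(col) for col in items]
total_var = variance([sum(row) for row in responses])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```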

  14. NASA High-End Computing Program Website

    Science.gov (United States)

    Cohen, Jarrett S.

    2008-01-01

    If you are a NASA-sponsored scientist or engineer, computing time is available to you at the High-End Computing (HEC) Program's NASA Advanced Supercomputing (NAS) Facility and NASA Center for Computational Sciences (NCCS). The Science Mission Directorate will select awards from requests submitted to the e-Books online system, beginning on May 1. Current projects set to expire on April 30 must have a request in e-Books to be considered for renewal.

  15. Tennessee's High School Dropouts: Examining the Fiscal Consequences

    Science.gov (United States)

    D'Andrea, Christian

    2010-01-01

    High school dropouts adversely impact the state of Tennessee each year--financially and socially. Dropouts' lower incomes, high unemployment rates, increased need for medical care, and higher propensity for incarceration create a virtual vortex that consumes Tennesseans' tax dollars at a vicious rate. Hundreds of millions of dollars are spent on…

  16. Prevalence and consequences of substance use among high school ...

    African Journals Online (AJOL)

    This paper is an overview of mind-altering substance use among high school and college students in Ethiopia in the past two decades. Alcohol, khat and cigarettes were commonly used by both high school and college students in urban as well as rural areas. While the use patterns of the substances were related to the ...

  17. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
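
    As an illustration of the grouping idea in this patent abstract (not the patented implementation), the sketch below groups threads by their gathered call addresses so that threads stuck in an unusual place stand out as small groups; the addresses and thread data are hypothetical.

```python
# Illustrative sketch only (not the patented method's implementation): group
# threads by the list of calling-instruction addresses gathered from each one,
# so that threads stuck in an unusual place stand out as small groups.

from collections import defaultdict

# Hypothetical gathered data: thread id -> tuple of return addresses.
thread_stacks = {
    0: (0x4005D0, 0x400A10, 0x400F80),
    1: (0x4005D0, 0x400A10, 0x400F80),
    2: (0x4005D0, 0x400A10, 0x400F80),
    3: (0x4005D0, 0x400B44, 0x401230),   # the odd one out
}

groups = defaultdict(list)
for tid, stack in thread_stacks.items():
    groups[stack].append(tid)

for stack, tids in sorted(groups.items(), key=lambda kv: len(kv[1])):
    print(f"{len(tids):3d} thread(s) at {[hex(a) for a in stack]}: {tids}")
# The smallest groups are the first places to look for defective threads.
```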

  18. High-performance computing reveals missing genes

    OpenAIRE

    Whyte, Barry James

    2010-01-01

    Scientists at the Virginia Bioinformatics Institute and the Department of Computer Science at Virginia Tech have used high-performance computing to locate small genes that have been missed by scientists in their quest to define the microbial DNA sequences of life.

  19. Federal High End Computing (HEC) Information Portal

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This portal provides information about opportunities to engage in U.S. Federal government high performance computing activities, including supercomputer use,...

  20. A Primer on High-Throughput Computing for Genomic Selection

    Directory of Open Access Journals (Sweden)

    Xiao-Lin Wu

    2011-02-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general purpose computation on a graphics processing unit (GPU) provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin – Madison, which can be leveraged for genomic selection, in terms of central processing unit (CPU) capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of
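
    The sketch below illustrates the embarrassingly parallel "one model fit per trait" pattern described above using a simple process pool; the evaluate_trait placeholder and trait names are hypothetical, not part of the paper's pipelines.

```python
# Illustrative sketch only: the "many traits evaluated independently" pattern
# from the record above, expressed as a batch over a pool of worker processes.
# The evaluate_trait function and trait list are hypothetical placeholders.

from concurrent.futures import ProcessPoolExecutor

def evaluate_trait(trait):
    # Placeholder for fitting a genomic-prediction model for one trait.
    return trait, sum((len(trait) * i) % 97 for i in range(100_000))

if __name__ == "__main__":
    traits = ["milk_yield", "fertility", "longevity", "feed_efficiency"]
    with ProcessPoolExecutor(max_workers=4) as pool:
        # Each trait is dispatched to its own worker, raising total throughput.
        for trait, score in pool.map(evaluate_trait, traits):
            print(trait, score)
```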

  1. High School Dropouts: Causes, Consequences, and Cure. Fastback 242.

    Science.gov (United States)

    Grossnickle, Donald R.

    This booklet addresses the issue of high school dropouts. The dropout problem is briefly reviewed and dropout statistics are presented. A section on identifying the dropout lists early warning signs of potential dropouts and examines reasons for dropping out. Seven profiles of dropouts are included which provide personal insights, describe…

  2. Limiting the Unintended Consequences of High-Stakes Testing.

    Directory of Open Access Journals (Sweden)

    Stuart S. Yeh

    2005-10-01

    Interviews with 61 teachers and administrators in four Minnesota school districts suggest that, in their judgment, Minnesota's state-mandated tests were well-aligned with curricular priorities and teachers' instructional goals, emphasizing critical thinking as well as competencies needed to pass the Basic Standards exit exam, and avoiding the type of recall item that would require drill and memorization. This result, in combination with a survey showing that 85 percent of Minnesota teachers support the exit exam, suggests that Minnesota has been unusually successful in designing a high stakes testing system that has garnered teacher support. The success of Minnesota's model suggests that unintended narrowing of the curriculum due to high stakes testing may be avoided if pressure on teachers to narrow the curriculum is reduced through well-designed, well-aligned exams.

  3. High plasma uric acid concentration: causes and consequences

    Directory of Open Access Journals (Sweden)

    de Oliveira Erick

    2012-04-01

    High plasma uric acid (UA) is a precipitating factor for gout and renal calculi as well as a strong risk factor for Metabolic Syndrome and cardiovascular disease. The main causes for higher plasma UA are either lower excretion, higher synthesis, or both. Higher waist circumference and BMI are associated with higher insulin resistance and leptin production, and both reduce uric acid excretion. The synthesis of fatty acids (triglycerides) in the liver is associated with the de novo synthesis of purine, accelerating UA production. The role played by diet on hyperuricemia has not yet been fully clarified, but high intake of fructose-rich industrialized food and high alcohol intake (particularly beer) seem to influence uricemia. It is not known whether UA would be a causal factor or an antioxidant protective response. Most authors do not consider UA a risk factor, but rather as having an antioxidant function. UA contributes more than 50% of the antioxidant capacity of the blood. There is still no consensus on whether UA is a protective or a risk factor; however, it seems that acute elevation is protective, whereas chronic elevation is a risk for disease.

  4. Consequences of high-frequency operation on EUV source efficiency

    Science.gov (United States)

    Sizyuk, Tatyana

    2017-08-01

    A potential problem of future extreme ultraviolet (EUV) sources, required for high volume manufacturing regimes, can be related to the contamination of the chamber environment by products of preceding laser pulse/droplet interactions. Operating EUV sources at high repetition rates (100 kHz and above), with Sn droplets ignited by laser pulses, can cause a high accumulation of tin in the chamber in the form of vapor, fine mist, or fragmented clusters. In this work, the effects of residual tin accumulation in the EUV chamber were studied as a function of laser parameters and mitigation system efficiency. The effect of various tin vapor pressures on CO2 and Nd:YAG laser beam propagation and on the size, intensity, and resulting efficiency of the EUV sources was analyzed. The HEIGHTS 3D package was used for this analysis to study the effect of residual background pressure and spatial distribution on EUV photon emission and collection. It was found that background pressure in the range of 1-5 Pa does not significantly influence the EUV source produced by CO2 lasers; a larger volume at this pressure, however, can reduce the efficiency of the source, whereas an optimized volume of the mixture with the proper density could increase the efficiency of sources produced by CO2 lasers.

  5. A probabilistic consequence assessment for a very high temperature reactor

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joeun; Kim, Jintae; Jae, Moosung [Hanyang Univ., Seoul (Korea, Republic of). Dept. of Nuclear Engineering

    2017-02-15

    Fossil fuels are running out globally; if current trends continue, crude oil will be depleted in 20 years and natural gas in 40 years. In addition, the use of fossil resources has increased emissions of greenhouse gases such as carbon dioxide. There has therefore been a strong demand in recent years for producing large amounts of hydrogen as an alternative energy [1]. Generating hydrogen energy requires very high temperatures, above 900 C, a level that is not easy to reach. Because a Very High Temperature Reactor (VHTR), one of the next-generation reactors, can provide such temperatures, it is regarded as a solution to this problem. The VHTR also offers excellent safety in comparison with existing and other next-generation reactors. In particular, a passive system, the Reactor Cavity Cooling System (RCCS), is adopted to remove radiant heat in case of accidents. To satisfy the varied requirements of newly designed reactors, however, new methodologies and definitions different from existing methods need to be developed. At the same time, the application of probabilistic safety assessment (PSA) has been proposed to ensure the safety of next-generation NPPs. For this, risk-informed designs of structures have to be developed and verified; in particular, the reliability of the passive system must be evaluated. The objective of this study is to improve the safety of the VHTR by constructing a risk profile.

  6. Excessive computer game playing among Norwegian adults: self-reported consequences of playing and association with mental health problems.

    Science.gov (United States)

    Wenzel, H G; Bakken, I J; Johansson, A; Götestam, K G; Øren, Anita

    2009-12-01

    Computer games are the most advanced form of gaming. For most people, the playing is an uncomplicated leisure activity; however, for a minority the gaming becomes excessive and is associated with negative consequences. The aim of the present study was to investigate computer game-playing behaviour in the general adult Norwegian population, and to explore mental health problems and self-reported consequences of playing. The survey includes 3,405 adults 16 to 74 years old (Norway 2007, response rate 35.3%). Overall, 65.5% of the respondents reported having ever played computer games (16-29 years, 93.9%; 30-39 years, 85.0%; 40-59 years, 56.2%; 60-74 years, 25.7%). Among 2,170 players, 89.8% reported playing less than 1 hr. as a daily average over the last month, 5.0% played 1-2 hr. daily, 3.1% played 2-4 hr. daily, and 2.2% reported playing > 4 hr. daily. The strongest risk factor for playing > 4 hr. daily was being an online player, followed by male gender, and single marital status. Reported negative consequences of computer game playing increased strongly with average daily playing time. Furthermore, prevalence of self-reported sleeping problems, depression, suicide ideations, anxiety, obsessions/ compulsions, and alcohol/substance abuse increased with increasing playing time. This study showed that adult populations should also be included in research on computer game-playing behaviour and its consequences.

  7. Financial system loss as an example of high consequence, high frequency events

    Energy Technology Data Exchange (ETDEWEB)

    McGovern, D.E.

    1996-07-01

    Much work has been devoted to high consequence events with low frequency of occurrence. Characteristic of these events are bridge failure (such as that of the Tacoma Narrows), building failure (such as the collapse of a walkway at a Kansas City hotel), or compromise of a major chemical containment system (such as at Bhopal, India). Such events, although rare, have an extreme personal, societal, and financial impact. An interesting variation is demonstrated by financial losses due to fraud and abuse in the money management system. The impact can be huge, entailing very high aggregate costs, but these are a result of the contribution of many small attacks and not the result of a single (or few) massive events. Public awareness is raised through publicized events such as the junk bond fraud perpetrated by Milken or gross mismanagement in the failure of the Barings Bank through unsupervised trading activities by Leeson in Singapore. These events, although seemingly large (financial losses may be on the order of several billion dollars), are but small contributors to the estimated $114 billion lost to all types of financial fraud in 1993. This paper explores the magnitude of financial system losses and identifies new areas for analysis of high consequence events including the potential effect of malevolent intent.
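
    A back-of-the-envelope illustration of the paper's central contrast, with hypothetical numbers only: the aggregate of many small, frequent losses can exceed a single headline-grabbing event.

```python
# Illustrative arithmetic only (hypothetical numbers, not figures from the paper):
# the aggregate of many small, frequent losses can dwarf one publicized event.

headline_event = 2e9                 # one large, well-publicized loss
small_events_per_year = 5_000_000    # many small frauds per year
average_small_loss = 2_000           # average loss per small fraud

aggregate_small = small_events_per_year * average_small_loss
print(f"single large event:        ${headline_event:,.0f}")
print(f"aggregate of small events: ${aggregate_small:,.0f}")
```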

  8. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  9. Dimensioning storage and computing clusters for efficient High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Scientific experiments are producing huge amounts of data, and they continue increasing the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with PetaByte scale storage to delivering quality data processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centres is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful s...
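
    As a hedged illustration of the dimensioning exercise described above (all figures hypothetical), the sketch below checks whether storage bandwidth can keep a compute cluster busy.

```python
# Illustrative back-of-the-envelope sketch only; all figures are hypothetical and
# are not taken from the presentation summarized above. It shows the kind of
# dimensioning check that keeps storage bandwidth from bottlenecking the CPUs.

cores = 4_000
cpu_seconds_per_event = 10.0
bytes_read_per_event = 2_000_000        # 2 MB of input per event

events_per_second = cores / cpu_seconds_per_event
required_read_bandwidth = events_per_second * bytes_read_per_event

print(f"cluster can process {events_per_second:.0f} events/s")
print(f"storage must sustain {required_read_bandwidth / 1e9:.1f} GB/s of reads")
```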

  10. Calculations of reactor-accident consequences, Version 2. CRAC2: computer code user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Ritchie, L.T.; Johnson, J.D.; Blond, R.M.

    1983-02-01

    The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.

  11. High-performance computing for airborne applications

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Manuzzato, Andrea [Los Alamos National Laboratory; Fairbanks, Tom [Los Alamos National Laboratory; Dallmann, Nicholas [Los Alamos National Laboratory; Desgeorges, Rose [Los Alamos National Laboratory

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  12. Identifying controlling variables for math computation fluency through experimental analysis: the interaction of stimulus control and reinforcing consequences.

    Science.gov (United States)

    Hofstadter-Duke, Kristi L; Daly, Edward J

    2015-03-01

    This study investigated a method for conducting experimental analyses of academic responding. In the experimental analyses, academic responding (math computation), rather than problem behavior, was reinforced across conditions. Two separate experimental analyses (one with fluent math computation problems and one with non-fluent math computation problems) were conducted with three elementary school children using identical contingencies while math computation rate was measured. Results indicate that the experimental analysis with non-fluent problems produced undifferentiated responding across participants; however, differentiated responding was achieved for all participants in the experimental analysis with fluent problems. A subsequent comparison of the single-most effective condition from the experimental analyses replicated the findings with novel computation problems. Results are discussed in terms of the critical role of stimulus control in identifying controlling consequences for academic deficits, and recommendations for future research refining and extending experimental analysis to academic responding are made. © The Author(s) 2014.

  13. Grid Computing in High Energy Physics

    Science.gov (United States)

    Avery, Paul

    2004-09-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software resources, regardless of location); (4) collaboration (providing tools that allow members full and fair access to all collaboration resources and enable distributed teams to work effectively, irrespective of location); and (5) education, training and outreach (providing resources and mechanisms for training students and for communicating important information to the public). It is believed that computing infrastructures based on Data Grids and optical networks can meet these challenges and can offer data intensive enterprises in high energy physics and elsewhere a comprehensive, scalable framework for collaboration and resource sharing. A number of Data Grid projects have been underway since 1999. Interestingly, the most exciting and far ranging of these projects are led by collaborations of high energy physicists, computer scientists and scientists from other disciplines in support of experiments with massive, near-term data needs. I review progress in this

  14. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  15. High Performance Computing and Communications Panel Report.

    Science.gov (United States)

    President's Council of Advisors on Science and Technology, Washington, DC.

    This report offers advice on the strengths and weaknesses of the High Performance Computing and Communications (HPCC) initiative, one of five presidential initiatives launched in 1992 and coordinated by the Federal Coordinating Council for Science, Engineering, and Technology. The HPCC program has the following objectives: (1) to extend U.S.…

  16. High-Degree Neurons Feed Cortical Computations.

    Directory of Open Access Journals (Sweden)

    Nicholas M Timme

    2016-05-01

    Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to
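
    For readers new to the measure, the sketch below gives a plug-in estimate of transfer entropy between two binary spike trains with history length one; the toy data and estimator details are illustrative, not the study's methods.

```python
# Illustrative sketch only: a plug-in estimate of transfer entropy TE(X -> Y)
# for binary spike trains, the quantity the record above uses to detect
# directed interactions between neurons. Toy data, not the study's estimator.

import math
from collections import Counter

def transfer_entropy(x, y):
    """TE(X -> Y) in bits, with history length 1, via plug-in probabilities."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))      # (y_next, y_prev, x_prev)
    n = len(triples)
    p_xyz = Counter(triples)
    p_yz = Counter((yn, yp) for yn, yp, _ in triples)
    p_z = Counter(yp for _, yp, _ in triples)
    p_zx = Counter((yp, xp) for _, yp, xp in triples)

    te = 0.0
    for (yn, yp, xp), c in p_xyz.items():
        joint = c / n
        cond_full = c / p_zx[(yp, xp)]              # p(y_next | y_prev, x_prev)
        cond_reduced = p_yz[(yn, yp)] / p_z[yp]     # p(y_next | y_prev)
        te += joint * math.log2(cond_full / cond_reduced)
    return te

# Toy example: y copies x with a one-step delay, so TE(x -> y) is large.
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
y = [0] + x[:-1]
print(f"TE(x -> y) = {transfer_entropy(x, y):.3f} bits")
print(f"TE(y -> x) = {transfer_entropy(y, x):.3f} bits")
```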

  17. Department of Energy research in utilization of high-performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure.

  18. High-Precision Computation and Mathematical Physics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2008-11-03

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
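    As a concrete, hedged illustration of what going beyond 64-bit precision buys, the snippet below uses the mpmath Python package (chosen here as a representative arbitrary-precision library; the record itself does not name a specific tool) on an expression that loses essentially all significance in ordinary double precision through catastrophic cancellation.

    ```python
    import math
    from mpmath import mp, mpf, cos as mpcos

    x = 1e-8

    # IEEE double precision: 1 - cos(x) cancels catastrophically for tiny x
    double_result = (1.0 - math.cos(x)) / x**2

    # 50-digit arithmetic recovers the true limit (1 - cos x) / x^2 -> 1/2
    mp.dps = 50
    xm = mpf("1e-8")
    mp_result = (1 - mpcos(xm)) / xm**2

    print(double_result)   # typically 0.0 -- all significant digits lost
    print(mp_result)       # approximately 0.5 (the exact value is 1/2 - x**2/24 + ...)
    ```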

  19. High Functioning Autism Spectrum Disorders in Adults: Consequences for Primary Caregivers Compared to Schizophrenia and Depression.

    Science.gov (United States)

    Grootscholten, Inge A C; van Wijngaarden, Bob; Kan, Cornelis C

    2018-01-08

    Primary caregivers experience consequences from being in close contact to a person with autism spectrum disorder (ASD). This study used the Involvement Evaluation Questionnaire to explore the level of consequences of 104 caregivers involved with adults with High Functioning ASD (HF-ASD) and compared these with the consequences reported by caregivers of patients suffering from depression and schizophrenia. Caregivers involved with adults with an HF-ASD experience overall consequences comparable to those involved with patients with depression or schizophrenia. Worrying was the most reported consequence. More tension was experienced by the caregivers of ASD patients, especially by spouses. More care and attention for spouses of adults with an HF-ASD appears to be needed.

  20. PREFACE: High Performance Computing Symposium 2011

    Science.gov (United States)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  1. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  2. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  3. A Primer on High-Throughput Computing for Genomic Selection

    Science.gov (United States)

    Wu, Xiao-Lin; Beissinger, Timothy M.; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J. M.; Weigel, Kent A.; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high-throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long, and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized
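    To make the pipelining idea concrete, here is a minimal sketch (in Python rather than the shell/Perl/R scripting mentioned in the record) that farms out per-trait model fitting to a pool of worker processes so that traits are evaluated concurrently instead of sequentially. The trait names, the toy ridge-regression fit and the use of multiprocessing are illustrative assumptions, not the authors' pipeline.

    ```python
    from multiprocessing import Pool
    import numpy as np

    def fit_trait(args):
        """Toy stand-in for training one genomic-prediction model:
        a ridge-regression solve of (X'X + lambda*I) b = X'y for a single trait."""
        trait, X, y, lam = args
        XtX = X.T @ X + lam * np.eye(X.shape[1])
        beta = np.linalg.solve(XtX, X.T @ y)
        return trait, beta

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        n_animals, n_markers = 500, 2000
        X = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)  # SNP genotypes 0/1/2
        traits = {t: X @ rng.normal(size=n_markers) * 0.01 + rng.normal(size=n_animals)
                  for t in ["milk_yield", "fertility", "longevity", "protein"]}

        jobs = [(t, X, y, 100.0) for t, y in traits.items()]
        with Pool(processes=4) as pool:                  # traits fitted in parallel
            for trait, beta in pool.imap_unordered(fit_trait, jobs):
                print(trait, "fitted; first marker effects:", np.round(beta[:3], 4))
    ```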

  4. A History of High-Performance Computing

    Science.gov (United States)

    2006-01-01

    Faster than most speedy computers. More powerful than its NASA data-processing predecessors. Able to leap large, mission-related computational problems in a single bound. Clearly, it's neither a bird nor a plane, nor does it need to don a red cape, because it's super in its own way. It's Columbia, NASA's newest supercomputer and one of the world's most powerful production/processing units. Named Columbia to honor the STS-107 Space Shuttle Columbia crewmembers, the new supercomputer is making it possible for NASA to achieve breakthroughs in science and engineering, fulfilling the Agency's missions, and, ultimately, the Vision for Space Exploration. Shortly after being built in 2004, Columbia achieved a benchmark rating of 51.9 teraflop/s on 10,240 processors, making it the world's fastest operational computer at the time of completion. Putting this speed into perspective, 20 years ago, the most powerful computer at NASA's Ames Research Center, home of the NASA Advanced Supercomputing Division (NAS), ran at a speed of about 1 gigaflop (one billion calculations per second). The Columbia supercomputer is 50,000 times faster than this computer and offers a tenfold increase in capacity over the prior system housed at Ames. What's more, Columbia is considered the world's largest Linux-based, shared-memory system. The system is offering immeasurable benefits to society and is the zenith of years of NASA/private industry collaboration that has spawned new generations of commercial, high-speed computing systems.

  5. High Performance Computing Operations Review Report

    Energy Technology Data Exchange (ETDEWEB)

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  6. Routine Bone Marrow Biopsy Has Little or No Therapeutic Consequence for Positron Emission Tomography/Computed Tomography-Staged Treatment-Naive Patients With Hodgkin Lymphoma

    DEFF Research Database (Denmark)

    El-Galaly, Tarec Christoffer; d'Amore, Francesco; Juul Mylam, Karen

    2012-01-01

    Routine Bone Marrow Biopsy Has Little or No Therapeutic Consequence for Positron Emission Tomography/Computed Tomography-Staged Treatment-Naive Patients With Hodgkin Lymphoma.

  7. THE IMPORTANCE OF AFFECT TO BUILD CONSUMER TRUST IN HIGH-CONSEQUENCES EXCHANGES

    Directory of Open Access Journals (Sweden)

    Mellina da Silva Terres

    2012-12-01

    The present article investigates the importance of affect displayed by a service provider in building consumer trust in high-consequence exchanges. High-consequence exchanges are difficult situations in which the choices present a dilemma that can cause stress and severe emotional reactions (KAHN; LUCE, 2003). In this specific case, trust based on affect seems to become important, mainly because consumers may not have the ability to evaluate the cognitive aspects of the situation; moreover, a medical services failure can be highly problematic or even fatal (LEISEN; HYMAN, 2004). In low-consequence choices, on the other hand, we predict that cognition will be more important than affect in building trust. In this kind of situation, patients are more self-confident, less sensitive, and do not perceive a high probability of loss (KUNREUTHER et al., 2002), and therefore focus more on rational outcomes.

  8. Awareness of Consequence of High School Students on Loss of Bio-Diversity

    Science.gov (United States)

    Kasot, Nazim; Özbas, Serap

    2015-01-01

    The aim of this study is to assess high school students' egoistic, altruistic and biospheric awareness of the consequences of the loss of bio-diversity, and then to compare the results on the basis of some independent variables (gender, class and family income). The research data were collected from 884 ninth and tenth grade high school…

  9. Computational identification of micro-structural variations and their proteogenomic consequences in cancer.

    Science.gov (United States)

    Lin, Yen-Yi; Gawronski, Alexander; Hach, Faraz; Li, Sujun; Numanagic, Ibrahim; Sarrafi, Iman; Mishra, Swati; McPherson, Andrew; Collins, Colin; Radovich, Milan; Tang, Haixu; Sahinalp, S Cenk

    2017-12-18

    Rapid advancement in high throughput genome and transcriptome sequencing (HTS) and mass spectrometry (MS) technologies has enabled the acquisition of the genomic, transcriptomic and proteomic data from the same tissue sample. We introduce a computational framework, MiStrVar, to integratively analyze all three types of omics data for a complete molecular profile of a tissue sample. Our framework features MiStrVar, a novel algorithmic method to identify micro structural variants (microSVs) on genomic HTS data. Coupled with deFuse, a popular gene fusion detection method we developed earlier, MiStrVar can accurately profile structurally aberrant transcripts in tumors. Given the breakpoints obtained by MiStrVar and deFuse, our framework can then identify all relevant peptides that span the breakpoint junctions and match them with unique proteomic signatures. Observing structural aberrations in all three types of omics data validates their presence in the tumor samples. We have applied our framework to all The Cancer Genome Atlas (TCGA) breast cancer Whole Genome Sequencing (WGS) and/or RNA-Seq data sets, spanning all four major subtypes, for which proteomics data from Clinical Proteomic Tumor Analysis Consortium (CPTAC) have been released. A recent study on this dataset focusing on SNVs has reported many that lead to novel peptides (Mertins et al., 2016). Complementing and significantly broadening this study, we detected 244 novel peptides from 432 candidate genomic or transcriptomic sequence aberrations. Many of the fusions and microSVs we discovered have not been reported in the literature. Interestingly, the vast majority of these translated aberrations, fusions in particular, were private, demonstrating the extensive inter-genomic heterogeneity present in breast cancer. Many of these aberrations also have matching out-of-frame downstream peptides, potentially indicating novel protein sequence and structure. MiStrVar is available for download at https

  10. A review of research and methods for producing high-consequence software

    Energy Technology Data Exchange (ETDEWEB)

    Collins, E.; Dalton, L.; Peercy, D.; Pollock, G.; Sicking, C.

    1994-12-31

    The development of software for use in high-consequence systems mandates rigorous (formal) processes, methods, and techniques to improve the safety characteristics of those systems. This paper provides a brief overview of current research and practices in high-consequence software, including applied design methods. Some of the practices that are discussed include: fault tree analysis, failure mode effects analysis, Petri nets, both hardware and software interlocks, n-version programming, Independent Vulnerability Analyses, and watchdogs. Techniques that offer improvement in the dependability of software in high-consequence systems applications are identified and discussed. Limitations of these techniques are also explored. Research in formal methods, the cleanroom process, and reliability models is reviewed. In addition, current work by several leading researchers as well as approaches being used by leading practitioners are examined.
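    Several of the practices listed, software watchdogs in particular, are easy to show in miniature. The sketch below is a generic, illustrative watchdog written for this overview (it is not drawn from the paper): a monitored task must "kick" the watchdog before a deadline, and a missed deadline drives the system toward a safe state. The class name, timeout and safe-state action are assumptions.

    ```python
    import threading
    import time

    class Watchdog:
        """Software watchdog: if kick() is not called within `timeout` seconds,
        the on_expire handler fires (e.g., to halt an actuator or abort a sequence)."""
        def __init__(self, timeout, on_expire):
            self.timeout = timeout
            self.on_expire = on_expire
            self._timer = None

        def kick(self):
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self.timeout, self.on_expire)
            self._timer.daemon = True
            self._timer.start()

        def stop(self):
            if self._timer is not None:
                self._timer.cancel()

    def enter_safe_state():
        print("WATCHDOG EXPIRED: driving outputs to a safe state")

    wd = Watchdog(timeout=0.5, on_expire=enter_safe_state)
    wd.kick()
    for step in range(5):
        time.sleep(0.1)      # healthy critical loop: each iteration beats the deadline
        wd.kick()
    time.sleep(1.0)          # simulated hang: no kicks, so the safe-state handler fires
    wd.stop()
    ```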

  11. PROPOSAL FOR THE CREATION OF SECURITY PLANS FOR THE ROAD TRANSPORTATION OF HIGH CONSEQUENCE DANGEROUS GOODS

    Directory of Open Access Journals (Sweden)

    Karolina KOŁDYS

    2016-12-01

    In the list of dangerous goods there are materials and articles which, due to particular criteria stated in the European Agreement Concerning the International Carriage of Dangerous Goods by Road (ADR), are treated as high consequence goods. High consequence dangerous goods are those whose misuse may lead to a terrorist event and therefore pose a serious threat of mass casualties, destruction or socio-economic disruption. All personnel responsible for the carriage of high consequence dangerous goods should comply with the relevant ADR requirements. A basic ADR requirement, intended to eliminate potential security hazards, is to acknowledge, implement and respect security plans. The ADR sets out general security plan regulations, describing the elements from which such plans should be built; it does not, however, prescribe methods for preparing the documentation, nor implementation details. This article elaborates on these aspects.

  12. An Evaluation of the Conditions, Processes, and Consequences of Laptop Computing in K-12 Classrooms

    Science.gov (United States)

    Cavanaugh, Cathy; Dawson, Kara; Ritzhaupt, Albert

    2011-01-01

    This article examines how laptop computing technology, teacher professional development, and systematic support resulted in changed teaching practices and increased student achievement in 47 K-12 schools in 11 Florida school districts. The overview of a large-scale study documents the type and magnitude of change in student-centered teaching,…

  13. To Think Different: The Unexpected Consequences of Personal Computer and Internet Use

    Science.gov (United States)

    Moellinger, Terry

    2010-01-01

    This study examines the contemporary user patterns that emerged when a new medium--the personal computer and the Internet--was introduced into the user's media ecology. The study focuses on the introductory period and current usage. Data analysis conformed to practices accepted by oral historians (Richie, 2003, and Brundage, 2008), and grounded…

  14. German cardiac CT registry: indications, procedural data and clinical consequences in 7061 patients undergoing cardiac computed tomography.

    Science.gov (United States)

    Marwan, Mohamed; Achenbach, Stephan; Korosoglou, Grigorios; Schmermund, Axel; Schneider, Steffen; Bruder, Oliver; Hausleiter, Jörg; Schroeder, Stephen; Barth, Sebastian; Kerber, Sebastian; Leber, Alexander; Moshage, Werner; Senges, Jochen

    2017-12-01

    Cardiac computed tomography permits quantification of coronary calcification as well as detection of coronary artery stenoses after contrast enhancement. Moreover, cardiac CT offers high-resolution morphologic and functional imaging of cardiac structures which is valuable for various structural heart disease interventions and electrophysiology procedures. So far, only limited data exist regarding the spectrum of indications, image acquisition parameters as well as results and clinical consequences of cardiac CT examinations using state-of-the-art CT systems in experienced centers. Twelve cardiology centers with profound expertise in cardiovascular imaging participated in the German Cardiac CT Registry. Criteria for participation included adequate experience in cardiac CT as well as the availability of a 64-slice or newer CT system. Between 2009 and 2014, 7061 patients were prospectively enrolled. For all cardiac CT examinations, patient parameters, procedural data, indication and clinical consequences of the examination were documented. Mean patient age was 61 ± 12 years, and 63% were male. The majority (63%) of all cardiac CT examinations were performed in an outpatient setting; 37% were performed during an inpatient stay. 91% were elective and 9% were scheduled in an acute setting. In most examinations (48%), reporting was performed by cardiologists, in 4% by radiologists and in 47% of the cases as a consensus reading. Cardiac CT was limited to native acquisitions for assessment of coronary artery calcification in 9% of patients; only contrast-enhanced coronary CT angiography was performed in 16.6%; and combined native and contrast-enhanced coronary CT angiography was performed in 57.7% of patients. Non-coronary cardiac CT examinations constituted 16.6% of all cases. Coronary artery calcification assessment was performed using prospectively ECG-triggered acquisition in 76.9% of all cases. The median dose length product (DLP) was 42 mGy cm (estimated effective

  15. Flow simulation and high performance computing

    Science.gov (United States)

    Tezduyar, T.; Aliabadi, S.; Behr, M.; Johnson, A.; Kalro, V.; Litke, M.

    1996-10-01

    Flow simulation is a computational tool for exploring science and technology involving flow applications. It can provide cost-effective alternatives or complements to laboratory experiments, field tests and prototyping. Flow simulation relies heavily on high performance computing (HPC). We view HPC as having two major components. One is advanced algorithms capable of accurately simulating complex, real-world problems. The other is advanced computer hardware and networking with sufficient power, memory and bandwidth to execute those simulations. While HPC enables flow simulation, flow simulation motivates development of novel HPC techniques. This paper focuses on demonstrating that flow simulation has come a long way and is being applied to many complex, real-world problems in different fields of engineering and applied sciences, particularly in aerospace engineering and applied fluid mechanics. Flow simulation has come a long way because HPC has come a long way. This paper also provides a brief review of some of the recently-developed HPC methods and tools that have played a major role in bringing flow simulation to where it is today. A number of 3D flow simulations are presented in this paper as examples of the level of computational capability reached with recent HPC methods and hardware. These examples are: flow around a fighter aircraft, flow around two trains passing in a tunnel, large ram-air parachutes, flow over hydraulic structures, contaminant dispersion in a model subway station, airflow past an automobile, multiple spheres falling in a liquid-filled tube, and dynamics of a paratrooper jumping from a cargo aircraft.

  16. Inclusive vision for high performance computing at the CSIR

    CSIR Research Space (South Africa)

    Gazendam, A

    2006-02-01

    Full Text Available gaining popularity in South Africa. One reason for this relatively slow adoption is the lack of appropriate scientific computing infrastructure. Open and distributed high-performance computing (HPC) represents a radically new computing concept for data...

  17. RISKIND: A computer program for calculating radiological consequences and health risks from transportation of spent nuclear fuel

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Y.C. [Square Y Consultants, Orchard Park, NY (US); Chen, S.Y.; Biwer, B.M.; LePoire, D.J. [Argonne National Lab., IL (US)

    1995-11-01

    This report presents the technical details of RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel. RISKIND is a user-friendly, interactive program that can be run on an IBM or equivalent personal computer under the Windows™ environment. Several models are included in RISKIND that have been tailored to calculate the exposure to individuals under various incident-free and accident conditions. The incident-free models assess exposures from both gamma and neutron radiation and can account for different cask designs. The accident models include accidental release, atmospheric transport, and the environmental pathways of radionuclides from spent fuels; these models also assess health risks to individuals and the collective population. The models are supported by databases that are specific to spent nuclear fuels and include a radionuclide inventory and dose conversion factors. In addition, the flexibility of the models allows them to be used for assessing any accidental release involving radioactive materials. The RISKIND code allows for user-specified accident scenarios as well as receptor locations under various exposure conditions, thereby facilitating the estimation of radiological consequences and health risks for individuals. Median (50% probability) and typical worst-case (less than 5% probability of being exceeded) doses and health consequences from potential accidental releases can be calculated by constructing a cumulative dose/probability distribution curve for a complete matrix of site joint-wind-frequency data. These consequence results, together with the estimated probability of the entire spectrum of potential accidents, form a comprehensive, probabilistic risk assessment of a spent nuclear fuel transportation accident.
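    To illustrate the kind of cumulative dose/probability curve described at the end of this record, here is a small, hedged sketch (it is not RISKIND itself): given accident scenarios with estimated individual doses and annual probabilities, it builds the probability of exceeding each dose level and reads off a median and a "less than 5% probability of being exceeded" value. The scenario numbers are invented purely for illustration.

    ```python
    import numpy as np

    # Hypothetical accident scenarios: (dose to an individual in mSv, annual probability)
    scenarios = np.array([
        (0.01, 3e-2),
        (0.10, 1e-2),
        (1.00, 2e-3),
        (10.0, 4e-4),
        (100.0, 5e-6),
    ])
    doses, probs = scenarios[:, 0], scenarios[:, 1]

    # Sort by dose and accumulate the probability of reaching or exceeding each dose level
    order = np.argsort(doses)
    doses, probs = doses[order], probs[order]
    exceed_prob = probs[::-1].cumsum()[::-1]        # P(dose >= d) per year, for each listed d

    for d, p in zip(doses, exceed_prob):
        print(f"P(dose >= {d:7.2f} mSv) = {p:.2e} per year")

    # Conditional on an accident occurring: median and ~95th-percentile ("worst case") doses
    cond = probs / probs.sum()
    cdf = cond.cumsum()
    median_dose = doses[np.searchsorted(cdf, 0.50)]
    worst_case = doses[np.searchsorted(cdf, 0.95)]
    print("median accident dose:", median_dose, "mSv; ~worst-case dose:", worst_case, "mSv")
    ```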

  18. The risk of high-risk jobs: psychological health consequences in forensic physicians and ambulance workers

    NARCIS (Netherlands)

    Ploeg, E. van der

    2003-01-01

    The risk of high-risk jobs: Psychological health consequences in forensic doctors and ambulance workers. This thesis has shown that forensic physicians and ambulance personnel frequently suffer from psychological complaints as a result of dramatic events and sources of chronic work stress. A

  19. 49 CFR 195.452 - Pipeline integrity management in high consequence areas.

    Science.gov (United States)

    2010-10-01

    ... failure would affect the high consequence area, such as location of the water intake. (h) What actions...). (B) A dent located on the bottom of the pipeline that has any indication of metal loss, cracking or a... than NPS 12). (C) A dent located on the bottom of the pipeline with a depth greater than 6% of the...

  20. Women's Ways of Drinking: College Women, High-Risk Alcohol Use, and Negative Consequences

    Science.gov (United States)

    Smith, Margaret A.; Berger, Joseph B.

    2010-01-01

    The purpose of this study was to explore college women's high-risk alcohol use and related consequences. This study employed a qualitative approach to understand and provide visibility for a gender-related perspective on college women's alcohol experiences and related outcomes. Data were collected from interviews with 10 undergraduate females at a…

  1. RISKIND: A computer program for calculating radiological consequences and health risks from transportation of spent nuclear fuel

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Y.C. [Square Y, Orchard Park, NY (United States); Chen, S.Y.; LePoire, D.J. [Argonne National Lab., IL (United States). Environmental Assessment and Information Sciences Div.; Rothman, R. [USDOE Idaho Field Office, Idaho Falls, ID (United States)

    1993-02-01

    This report presents the technical details of RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel. RISKIND is a user-friendly, semi-interactive program that can be run on an IBM or equivalent personal computer. The program language is FORTRAN-77. Several models are included in RISKIND that have been tailored to calculate the exposure to individuals under various incident-free and accident conditions. The incident-free models assess exposures from both gamma and neutron radiation and can account for different cask designs. The accident models include accidental release, atmospheric transport, and the environmental pathways of radionuclides from spent fuels; these models also assess health risks to individuals and the collective population. The models are supported by databases that are specific to spent nuclear fuels and include a radionuclide inventory and dose conversion factors.

  2. Antecedent and Consequence of Social Computing Behavior for Social Network Sites: Perspective of Social Influence Theory

    OpenAIRE

    Abdillah, Willy

    2011-01-01

    This research is a preliminary study to develop and examine an adoption model for social computing. The research model is built upon Social Influence Factors, the Technology Acceptance Model, and Psychosocial Dysfunction. The research design employed an online, self-administered survey questionnaire. Data from 116 respondents were analysed using the Partial Least Squares (PLS) technique. Results suggest that the proposed model meets the criteria for goodness of fit and indicate that identification is an ...

  3. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and for distributing the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movement from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12 MB on a 64-processor/32-node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  4. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  5. High performance computing applications in neurobiological research

    Science.gov (United States)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervading obstacles are the lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large-scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  6. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book makes it possible to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  7. High Performance Human-Computer Interfaces

    National Research Council Canada - National Science Library

    Despain, A.

    1997-01-01

    Human interfaces to the computer have remained fairly crude since the use of teletypes despite the fact that computer, storage and communication performance have continued to improve by many orders of magnitude...

  8. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  9. Consequences of Urban Stability Conditions for Computational Fluid Dynamics Simulations of Urban Dispersion

    Energy Technology Data Exchange (ETDEWEB)

    Lundquist, J K; Chan, S T

    2005-11-30

    The validity of omitting stability considerations when simulating transport and dispersion in the urban environment is explored using observations from the Joint URBAN 2003 field experiment and computational fluid dynamics simulations of that experiment. Four releases of sulfur hexafluoride, during two daytime and two nighttime intensive observing periods, are simulated using the building-resolving computational fluid dynamics model, FEM3MP to solve the Reynolds Averaged Navier-Stokes equations with two options of turbulence parameterizations. One option omits stability effects but has a superior turbulence parameterization using a non-linear eddy viscosity (NEV) approach, while the other considers buoyancy effects with a simple linear eddy viscosity (LEV) approach for turbulence parameterization. Model performance metrics are calculated by comparison with observed winds and tracer data in the downtown area, and with observed winds and turbulence kinetic energy (TKE) profiles at a location immediately downwind of the central business district (CBD) in the area we label as the urban shadow. Model predictions of winds, concentrations, profiles of wind speed, wind direction, and friction velocity are generally consistent with and compare reasonably well with the field observations. Simulations using the NEV turbulence parameterization generally exhibit better agreement with observations. To further explore this assumption of a neutrally-stable atmosphere within the urban area, TKE budget profiles slightly downwind of the urban wake region in the 'urban shadow' are examined. Dissipation and shear production are the largest terms which may be calculated directly. The advection of TKE is calculated as a residual; as would be expected downwind of an urban area, the advection of TKE produced within the urban area is a very large term. Buoyancy effects may be neglected in favor of advection, shear production, and dissipation. For three of the IOPs, buoyancy
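    For orientation, a common textbook form of the turbulence kinetic energy (TKE) budget referred to in this record (given here as background, not quoted from the paper) groups exactly the terms mentioned: storage, advection, shear production, buoyancy, transport and dissipation.

    ```latex
    \underbrace{\frac{\partial \bar{e}}{\partial t}}_{\text{storage}}
    + \underbrace{\bar{U}_j \frac{\partial \bar{e}}{\partial x_j}}_{\text{advection}}
    = \underbrace{-\,\overline{u_i' u_j'}\,\frac{\partial \bar{U}_i}{\partial x_j}}_{\text{shear production}}
    + \underbrace{\frac{g}{\bar{\theta}_v}\,\overline{w' \theta_v'}}_{\text{buoyancy}}
    - \underbrace{\frac{\partial}{\partial x_j}\left(\overline{u_j' e'} + \frac{\overline{u_j' p'}}{\bar{\rho}}\right)}_{\text{turbulent and pressure transport}}
    - \underbrace{\varepsilon}_{\text{dissipation}},
    \qquad \bar{e} = \tfrac{1}{2}\,\overline{u_i' u_i'}.
    ```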

  10. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    This DoD HBCU/MI Equipment/Instrumentation grant (contract number W911NF-15-1-0023) was awarded in October 2014 for the purchase of a heterogeneous high-performance system for computational and computer science, supporting the High Performance Computing (HPC) course taught in the department of computer science and helping to attract more graduate students from the many disciplines whose research requires HPC.

  11. Uncertainties in radiative transfer computations: consequences on the ocean color products

    Science.gov (United States)

    Dilligeard, Eric; Zagolski, Francis; Fischer, Juergen; Santer, Richard P.

    2003-05-01

    Operational MERIS (MEdium Resolution Imaging Spectrometer) level-2 processing uses auxiliary data generated by two radiative transfer tools. These two codes simulate upwelling radiances within a coupled 'Atmosphere-Ocean' system, using different approaches based on the matrix-operator method (MOMO) and the successive orders (SO) technique. An intervalidation of these two radiative transfer codes was performed before implementing them in the MERIS level-2 processing. MOMO and SO simulations were conducted on a set of representative test cases. Good agreement was observed for all test cases: the scattering processes are retrieved to within a few tenths of a percent. Nevertheless, some substantial discrepancies occurred when polarization was not taken into account, mainly in the Rayleigh scattering computations. A preliminary study indicates that the impact of this code inaccuracy on the retrieval of the water-leaving radiances (a level-2 MERIS product) is large, up to 50% in relative difference. Applying the OC2 algorithm, the effect on the retrieved chlorophyll concentration is less than 10%.

  12. Consequences of fiducial marker error on three-dimensional computer animation of the temporomandibular joint

    Science.gov (United States)

    Leader, J. Ken, III; Boston, J. Robert; Rudy, Thomas E.; Greco, Carol M.; Zaki, Hussein S.

    2001-05-01

    Jaw motion has been used to diagnose jaw pain patients, and we have developed a 3D computer animation technique to study jaw motion. A customized dental clutch was worn during motion, and its consistent and rigid placement was a concern. The experimental protocol involved mandibular movements (vertical opening) and MR imaging. The clutch contained three motion markers used to collect kinematic data and four MR markers used as fiducial markers in the MR images. Fiducial marker misplacement was mimicked by analytically perturbing the position of the MR markers +/- 2, +/- 4, and +/- 6 degrees in the three anatomical planes. The percent difference between kinematic parameters computed from the original and from the perturbed MR marker positions was then calculated. The maximum differences across all perturbations for axial rotation, coronal rotation, sagittal rotation, axial translation, coronal translation, and sagittal translation were 176.85%, 191.84%, 0.64%, 9.76%, 80.75%, and 8.30%, respectively, for perturbing all MR markers, and 86.47%, 93.44%, 0.23%, 7.08%, 42.64%, and 13.64%, respectively, for perturbing one MR marker. The parameters representing movement in the sagittal plane, the dominant plane in vertical opening, were determined to be reasonably robust, while secondary movements in the axial and coronal planes were not considered robust.
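    The analysis hinges on a simple bookkeeping quantity: the percent difference in each kinematic parameter when the fiducial (MR) markers are analytically rotated by a few degrees. A minimal sketch of that computation, with made-up marker coordinates, a toy sagittal-plane parameter and a single-axis perturbation, is shown below; it is not the authors' registration or animation pipeline.

    ```python
    import numpy as np

    def rot_x(deg):
        """Rotation about the x-axis (a stand-in for a perturbation in one anatomical plane)."""
        a = np.radians(deg)
        return np.array([[1.0, 0.0, 0.0],
                         [0.0, np.cos(a), -np.sin(a)],
                         [0.0, np.sin(a), np.cos(a)]])

    def percent_difference(original, perturbed):
        return 100.0 * abs(perturbed - original) / abs(original)

    # Hypothetical MR fiducial marker positions on a dental clutch (mm)
    markers = np.array([[10.0, 5.0, 0.0],
                        [-10.0, 5.0, 0.0],
                        [0.0, 25.0, 5.0],
                        [0.0, 15.0, -5.0]])

    def sagittal_angle(m):
        """Toy kinematic parameter: apparent sagittal rotation implied by two markers."""
        v = m[2] - m[3]
        return np.degrees(np.arctan2(v[2], v[1]))

    baseline = sagittal_angle(markers)
    for perturb in (2, 4, 6):                 # mimic the +/- 2, 4, 6 degree perturbations
        for sign in (+1, -1):
            perturbed = markers @ rot_x(sign * perturb).T
            value = sagittal_angle(perturbed)
            print(f"perturbation {sign * perturb:+d} deg -> "
                  f"{percent_difference(baseline, value):6.2f}% difference")
    ```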

  13. FitzPatrick Lecture: King George III and the porphyria myth - causes, consequences and re-evaluation of his mental illness with computer diagnostics

    National Research Council Canada - National Science Library

    Peters, Timothy

    2015-01-01

    .... This article explores some of the causes of this misdiagnosis and the consequences of the misleading claims, also reporting on the nature of the king's recurrent mental illness according to computer diagnostics...

  14. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  15. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  16. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  17. Does Alcohol Use Mediate the Association between Consequences Experienced in High School and Consequences Experienced during the First Semester of College?

    Science.gov (United States)

    Romosz, Ann Marie; Quigley, Brian M.

    2013-01-01

    Approximately 80% of college students drink alcohol; almost half of these students reporting that they drink to get drunk and over 22% engage in heavy episodic drinking. Heavy alcohol consumption during the transition from high school to college is associated with negative personal and academic consequences. Sixty-seven freshmen volunteered to…

  18. Efficient High Performance Computing on Heterogeneous Platforms

    NARCIS (Netherlands)

    Shen, J.

    2015-01-01

    Heterogeneous platforms are mixes of different processing units in a compute node (e.g., CPUs+GPUs, CPU+MICs) or a chip package (e.g., APUs). This type of platform keeps gaining popularity in various computer systems ranging from supercomputers to mobile devices. In this context, improving their

  19. Software Synthesis for High Productivity Exascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bodik, Rastislav [Univ. of Washington, Seattle, WA (United States)

    2010-09-01

    Over the three years of our project, we accomplished three key milestones: We demonstrated how ideas from generative programming and software synthesis can help support the development of bulk-synchronous distributed memory kernels. These ideas are realized in a new language called MSL, a C-like language that combines synthesis features with high level notations for array manipulation and bulk-synchronous parallelism to simplify the semantic analysis required for synthesis. We also demonstrated that these high level notations map easily to low level C code and show that the performance of this generated code matches that of handwritten Fortran. Second, we introduced the idea of solver-aided domain-specific languages (SDSLs), which are an emerging class of computer-aided programming systems. SDSLs ease the construction of programs by automating tasks such as verification, debugging, synthesis, and non-deterministic execution. SDSLs are implemented by translating the DSL program into logical constraints. Next, we developed a symbolic virtual machine called Rosette, which simplifies the construction of such SDSLs and their compilers. We have used Rosette to build SynthCL, a subset of OpenCL that supports synthesis. Third, we developed novel numeric algorithms that move as little data as possible, either between levels of a memory hierarchy or between parallel processors over a network. We achieved progress in three aspects of this problem. First we determined lower bounds on communication. Second, we compared these lower bounds to widely used versions of these algorithms, and noted that these widely used algorithms usually communicate asymptotically more than is necessary. Third, we identified or invented new algorithms for most linear algebra problems that do attain these lower bounds, and demonstrated large speed-ups in theory and practice.

  20. Alcohol-Related Sexual Consequences during the Transition from High School to College

    Science.gov (United States)

    Orchowski, Lindsay M.; Barnett, Nancy P.

    2012-01-01

    Alcohol use and risky sexual behavior are significant problems on college campuses. Using a prospective design, the present study sought to explore the relationship between alcohol use and experience of alcohol-related sexual consequences (ARSC) during the transition from high school to the first year of college. During the senior year of high school, and following the first year of college, participants completed assessments of alcohol use, problem drinking behavior, ARSC, and potential influences on drinking behaviors, including parental knowledge of alcohol use, peer influences, motivation for alcohol use, and mood state. Data indicated that 29% of men and 35% of women reported some form of ARSC during the last year of high school, rates that increased by 6-7 percentage points in the first year of college (36% of men and 41% of women). The onset or recurrence of ARSC in college was not explained by differential increases in alcohol use between high school and college. Low levels of positive affect, low motivation to consume alcohol to cope, and high levels of peer alcohol use were associated with repeated ARSC in high school and college, whereas drinking to enhance positive affect and low parental knowledge of alcohol use were associated with the onset of such consequences in college. Implications for intervention are discussed. PMID:22115596

  1. High school students' perception of computer laboratory learning ...

    African Journals Online (AJOL)

    This study focused on senior high school students' perception of their computer laboratory learning environment and how the use of computers affects their learning in urban and community senior high schools. Data was obtained with the Computer Laboratory Environment Inventory questionnaire, administered to 278 ...

  2. Key drivers and economic consequences of high-end climate scenarios: uncertainties and risks

    DEFF Research Database (Denmark)

    Halsnæs, Kirsten; Kaspersen, Per Skougaard; Drews, Martin

    2015-01-01

    The consequences of high-end climate scenarios and the risks of extreme events involve a number of critical assumptions and methodological challenges related to key uncertainties in climate scenarios and modelling, impact analysis, and economics. A methodological framework for integrated analysis...... of extreme events and damage costs is developed and applied to a case study of urban flooding for the medium sized Danish city of Odense. Moving from our current climate to higher atmospheric greenhouse gas (GHG) concentrations including a 2°, 4°, and a high-end 6°C scenario implies that the frequency...

  3. High-performance Scientific Computing using Parallel Computing to Improve Performance Optimization Problems

    Directory of Open Access Journals (Sweden)

    Florica Novăcescu

    2011-10-01

    HPC (High Performance Computing) has become essential for accelerating innovation and for helping companies create new inventions, better models and more reliable products, as well as obtain processes and services at low cost. This paper focuses in particular on describing the field of high-performance scientific computing, parallel computing, scientific computing, parallel computers, and trends in the HPC field; the material presented here reveals important new directions toward the realization of a high-performance computational society. The practical part of the work is an example of using an HPC tool to accelerate the solution of an electrostatic optimization problem with the Parallel Computing Toolbox, which allows computational and data-intensive problems to be solved using MATLAB and Simulink on multicore and multiprocessor computers.

  4. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness of the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational...

  5. Re-assessment of road accident data-analysis policy : applying theory from involuntary, high-consequence, low-probability events like nuclear power plant meltdowns to voluntary, low-consequence, high-probability events like traffic accidents

    Science.gov (United States)

    2002-02-01

    This report examines the literature on involuntary, high-consequence, low-probability (IHL) events like nuclear power plant meltdowns to determine what can be applied to the problem of voluntary, low-consequence, high-probability (VLH) events like tra...

  6. Computation of High-Frequency Waves with Random Uncertainty

    KAUST Repository

    Malenova, Gabriela

    2016-01-06

    We consider the forward propagation of uncertainty in high-frequency waves, described by the second-order wave equation with highly oscillatory initial data. The main sources of uncertainty are the wave speed and/or the initial phase and amplitude, described by a finite number of random variables with known joint probability distribution. We propose a stochastic spectral asymptotic method [1] for computing the statistics of uncertain output quantities of interest (QoIs), which are often linear or nonlinear functionals of the wave solution and its spatial/temporal derivatives. The numerical scheme combines two techniques: a high-frequency method based on Gaussian beams [2, 3] and a sparse stochastic collocation method [4]. The fast spectral convergence of the proposed method depends crucially on the presence of high stochastic regularity of the QoI independent of the wave frequency. In general, the high-frequency wave solutions to parametric hyperbolic equations are highly oscillatory and non-smooth in both physical and stochastic spaces. Consequently, the stochastic regularity of the QoI, which is a functional of the wave solution, may in principle be low and may depend on the frequency. In the present work, we provide theoretical arguments and numerical evidence that physically motivated QoIs based on local averages of |u^ε|² are smooth, with derivatives in the stochastic space uniformly bounded in ε, where u^ε and ε denote the highly oscillatory wave solution and the short wavelength, respectively. This observable-related regularity makes the proposed approach more efficient than current asymptotic approaches based on Monte Carlo sampling techniques.
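    The "physically motivated QoIs based on local averages of |u^ε|²" typically take the form of a weighted spatial average of the squared wave amplitude; a representative form (assumed here for orientation, not quoted from the record) is

    ```latex
    \mathcal{Q}^{\varepsilon}(t) \;=\; \int_{\mathbb{R}^{n}} \psi(x)\,\bigl|u^{\varepsilon}(x,t)\bigr|^{2}\,\mathrm{d}x,
    ```

    where ψ is a smooth, compactly supported test function that localizes the average. The claim in the record is that such functionals remain smooth in the random parameters, with derivatives bounded uniformly in the wavelength ε.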

  7. Global situational awareness and early warning of high-consequence climate change.

    Energy Technology Data Exchange (ETDEWEB)

    Backus, George A.; Carr, Martin J.; Boslough, Mark Bruce Elrick

    2009-08-01

    Global monitoring systems that have high spatial and temporal resolution, with long observational baselines, are needed to provide situational awareness of the Earth's climate system. Continuous monitoring is required for early warning of high-consequence climate change and to help anticipate and minimize the threat. Global climate has changed abruptly in the past and will almost certainly do so again, even in the absence of anthropogenic interference. It is possible that the Earth's climate could change dramatically and suddenly within a few years. An unexpected loss of climate stability would be equivalent to the failure of an engineered system on a grand scale, and would affect billions of people by causing agricultural, economic, and environmental collapses that would cascade throughout the world. The probability of such an abrupt change happening in the near future may be small, but it is nonzero. Because the consequences would be catastrophic, we argue that the problem should be treated with science-informed engineering conservatism, which focuses on various ways a system can fail and emphasizes inspection and early detection. Such an approach will require high-fidelity continuous global monitoring, informed by scientific modeling.

  8. High Performance Photogrammetric Processing on Computer Clusters

    Science.gov (United States)

    Adrov, V. N.; Drakin, M. A.; Sechin, A. Y.

    2012-07-01

    Most cpu consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. The independence of bits comes from the nature of such algorithms since images, stereopairs or small image blocks parts can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie points measurements, DTM calculations, orthophoto construction, mosaicing and many other service operations in parallel using distributed calculations. Distributed calculations save time reducing several days calculations to several hours calculations. Modern trends in computer technology show the increase of cpu cores in workstations, speed increase in local networks, and as a result dropping the price of the supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in DPW is usually targeted for interactive work with a limited number of cpu cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be in the limited lan throughput and storage performance, since the processing of huge amounts of large raster images is needed.
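
    The pattern described here is embarrassingly parallel: each tile, stereopair, or image block can be processed independently, so the work scales out across cores or nodes until I/O becomes the bottleneck. The sketch below is a generic Python multiprocessing illustration of that pattern, not any particular DPW's implementation; process_tile and its workload are placeholders.

    # A minimal sketch of independent per-tile processing across CPU cores.
    from multiprocessing import Pool
    import numpy as np

    def process_tile(args):
        """Stand-in for a per-tile photogrammetric task (e.g., filtering a block)."""
        tile_id, tile = args
        # Hypothetical workload: normalize the tile and return a simple statistic.
        tile = (tile - tile.mean()) / (tile.std() + 1e-9)
        return tile_id, float(np.abs(tile).mean())

    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        # Fake image split into independent 512x512 tiles.
        tiles = [(i, rng.random((512, 512))) for i in range(32)]
        with Pool() as pool:                      # one worker per available core
            results = pool.map(process_tile, tiles)
        print(sorted(results)[:3])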

  9. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Full Text Available Most cpu consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. The independence of bits comes from the nature of such algorithms since images, stereopairs or small image blocks parts can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie points measurements, DTM calculations, orthophoto construction, mosaicing and many other service operations in parallel using distributed calculations. Distributed calculations save time reducing several days calculations to several hours calculations. Modern trends in computer technology show the increase of cpu cores in workstations, speed increase in local networks, and as a result dropping the price of the supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in DPW is usually targeted for interactive work with a limited number of cpu cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be in the limited lan throughput and storage performance, since the processing of huge amounts of large raster images is needed.

  10. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  11. Configurable computing for high-security/high-performance ambient systems

    OpenAIRE

    Gogniat, Guy; Bossuet, Lilian; Burleson, Wayne

    2005-01-01

    This paper stresses why configurable computing is a promising target to guarantee the hardware security of ambient systems. Many works have focused on configurable computing to demonstrate its efficiency but as far as we know none have addressed the security issue from system to circuit levels. This paper recalls main hardware attacks before focusing on issues to build secure systems on configurable computing. Two complementary views are presented to provide a guide for security and main issues ...

  12. Idle waves in high-performance computing.

    Science.gov (United States)

    Markidis, Stefano; Vencels, Juris; Peng, Ivy Bo; Akhmetova, Dana; Laure, Erwin; Henri, Pierre

    2015-01-01

    The vast majority of parallel scientific applications distributes computation among processes that are in a busy state when computing and in an idle state when waiting for information from other processes. We identify the propagation of idle waves through processes in scientific applications with a local information exchange between the two processes. Idle waves are nondispersive and have a phase velocity inversely proportional to the average busy time. The physical mechanism enabling the propagation of idle waves is the local synchronization between two processes due to remote data dependency. This study provides a description of the large number of processes in parallel scientific applications as a continuous medium. This work also is a step towards an understanding of how localized idle periods can affect remote processes, leading to the degradation of global performance in parallel scientific applications.
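
    A toy recurrence reproduces the qualitative behaviour described here; it is an illustration only, not the paper's model or code. The assumption is that rank r cannot begin iteration i until its nearest neighbours have finished iteration i-1, so a one-off delay injected at rank 0 travels outward roughly one rank per iteration, giving a speed that scales as 1/(busy time).

    # Nearest-neighbour synchronization model of an "idle wave".
    import numpy as np

    n_ranks, n_iters, busy = 64, 80, 1.0
    finish = np.zeros((n_iters, n_ranks))
    for i in range(n_iters):
        for r in range(n_ranks):
            left = finish[i - 1, max(r - 1, 0)] if i > 0 else 0.0
            mid = finish[i - 1, r] if i > 0 else 0.0
            right = finish[i - 1, min(r + 1, n_ranks - 1)] if i > 0 else 0.0
            delay = 10.0 if (i == 0 and r == 0) else 0.0   # perturb rank 0 once
            finish[i, r] = max(left, mid, right) + busy + delay

    # Furthest rank whose finish time exceeds the unperturbed value (i+1)*busy.
    front = [int(np.nonzero(finish[i] > (i + 1) * busy + 1e-9)[0].max())
             for i in range(n_iters)]
    print(front[:10])   # advances ~1 rank per iteration -> speed ~ 1/busy

    Printing front[:10] shows the disturbance reaching rank i after iteration i, consistent with a nondispersive wave whose propagation speed is inversely proportional to the busy time.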

  13. High Performance Computing for Medical Image Interpretation

    Science.gov (United States)

    1993-10-01

    patterns from which the diagnoses can be made. A general problem arising from this modality is the detection of small ... From the physics point of view ... applied in, e.g., chest radiography and orthodontics (Scott and Symons (1982)). Computed Tomography (CT) applies to all techniques by which ... density in the z-direction towards its equilibrium value. T2 is the transverse or spin-spin relaxation time which governs the evolution of the

  14. Defining a Comprehensive Threat Model for High Performance Computational Clusters

    OpenAIRE

    Mogilevsky, Dmitry; Lee, Adam; Yurcik, William

    2005-01-01

    Over the past decade, high performance computational (HPC) clusters have become mainstream in academic and industrial settings as accessible means of computation. Throughout their proliferation, HPC security has been a secondary concern to performance. It is evident, however, that ensuring HPC security presents different challenges than the ones faced when dealing with traditional networks. To design suitable security measures for high performance computing, it is necessary to first realize t...

  15. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Full Text Available Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed‐language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool is complementing other ongoing projects such as IBM’s High‐Performance Compiler for Java (HPCJ) and IceT’s metacomputing environment.

  16. Promoting High-Performance Computing and Communications. A CBO Study.

    Science.gov (United States)

    Webre, Philip

    In 1991 the Federal Government initiated the multiagency High Performance Computing and Communications program (HPCC) to further the development of U.S. supercomputer technology and high-speed computer network technology. This overview by the Congressional Budget Office (CBO) concentrates on obstacles that might prevent the growth of the…

  17. From the Editor: The High Performance Computing Act of 1991.

    Science.gov (United States)

    McClure, Charles R.

    1992-01-01

    Discusses issues related to the High Performance Computing and Communication program and National Research and Education Network (NREN) established by the High Performance Computing Act of 1991, including program management, specific program development, affecting policy decisions, access to the NREN, the Department of Education role, and…

  18. NASA High Performance Computing and Communications program

    Science.gov (United States)

    Holcomb, Lee; Smith, Paul; Hunter, Paul

    1994-01-01

    The National Aeronautics and Space Administration's HPCC program is part of a new Presidential initiative aimed at producing a 1000-fold increase in supercomputing speed and a 100-fold improvement in available communications capability by 1997. As more advanced technologies are developed under the HPCC program, they will be used to solve NASA's 'Grand Challenge' problems, which include improving the design and simulation of advanced aerospace vehicles, allowing people at remote locations to communicate more effectively and share information, increasing scientists' abilities to model the Earth's climate and forecast global environmental trends, and improving the development of advanced spacecraft. NASA's HPCC program is organized into three projects which are unique to the agency's mission: the Computational Aerosciences (CAS) project, the Earth and Space Sciences (ESS) project, and the Remote Exploration and Experimentation (REE) project. An additional project, the Basic Research and Human Resources (BRHR) project, exists to promote long term research in computer science and engineering and to increase the pool of trained personnel in a variety of scientific disciplines. This document presents an overview of the objectives and organization of these projects, as well as summaries of early accomplishments and the significance, status, and plans for individual research and development programs within each project. Areas of emphasis include benchmarking, testbeds, software and simulation methods.

  19. High-performance computing MRI simulations.

    Science.gov (United States)

    Stöcker, Tony; Vahedipour, Kaveh; Pflugfelder, Daniel; Shah, N Jon

    2010-07-01

    A new open-source software project is presented, JEMRIS, the Jülich Extensible MRI Simulator, which provides an MRI sequence development and simulation environment for the MRI community. The development was driven by the desire to achieve generality of simulated three-dimensional MRI experiments reflecting modern MRI systems hardware. The accompanying computational burden is overcome by means of parallel computing. Many aspects are covered that have not hitherto been simultaneously investigated in general MRI simulations such as parallel transmit and receive, important off-resonance effects, nonlinear gradients, and arbitrary spatiotemporal parameter variations at different levels. The latter can be used to simulate various types of motion, for instance. The JEMRIS user interface is very simple to use, but nevertheless it presents few limitations. MRI sequences with arbitrary waveforms and complex interdependent modules are modeled in a graphical user interface-based environment requiring no further programming. This manuscript describes the concepts, methods, and performance of the software. Examples of novel simulation results in active fields of MRI research are given. (c) 2010 Wiley-Liss, Inc.

  20. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euro – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...

  1. Risk management & organizational uncertainty implications for the assessment of high consequence organizations

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, C.T.

    1995-02-23

    Post hoc analyses have demonstrated clearly that macro-system, organizational processes have played important roles in such major catastrophes as Three Mile Island, Bhopal, Exxon Valdez, Chernobyl, and Piper Alpha. How can managers of such high-consequence organizations as nuclear power plants and nuclear explosives handling facilities be sure that similar macro-system processes are not operating in their plants? To date, macro-system effects have not been integrated into risk assessments. Part of the reason for not using macro-system analyses to assess risk may be the impression that standard organizational measurement tools do not provide hard data that can be managed effectively. In this paper, I argue that organizational dimensions, like those in ISO 9000, can be quantified and integrated into standard risk assessments.

  2. International biosecurity symposium : securing high consequence pathogens and toxins : symposium summary.

    Energy Technology Data Exchange (ETDEWEB)

    2004-06-01

    The National Nuclear Security Administration (NNSA) Office of Nonproliferation Policy sponsored an international biosecurity symposium at Sandia National Laboratories (SNL). The event, entitled 'Securing High Consequence Pathogens and Toxins', took place from February 1 to February 6, 2004 and was hosted by Dr. Reynolds M. Salerno, Principal Member of the Technical Staff and Program Manager of the Biosecurity program at Sandia. Over 60 bioscience and policy experts from 14 countries gathered to discuss biosecurity, a strategy aimed at preventing the theft and sabotage of dangerous pathogens and toxins from bioscience facilities. Presentations delivered during the symposium were interspersed with targeted discussions that elucidated, among other things, the need for subsequent regional workshops on biosecurity, and a desire for additional work toward developing international biosecurity guidelines.

  3. High Performance Spaceflight Computing (HPSC) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In 2012, the NASA Game Changing Development Program (GCDP), residing in the NASA Space Technology Mission Directorate (STMD), commissioned a High Performance...

  4. High Available COTS Based Computer for Space

    Science.gov (United States)

    Hartmann, J.; Magistrati, Giorgio

    2015-09-01

    The availability and reliability factors of a system are central requirements of a target application. From a simple fuel injection system used in cars up to the flight control system of an autonomously navigating spacecraft, each application defines its specific availability factor under the target application boundary conditions. Increasing quality requirements on data processing systems used in space flight applications call for new architectures that fulfill the availability and reliability demands as well as the increase in required data processing power. At the same time, and in tension with these increased quality requirements, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of the customer requirements, and reuse of available computer systems has not always been possible because of the obsolescence of EEE parts, insufficient I/O capabilities, or the fact that available data processing systems did not provide the required scalability and performance.

  5. RISC Processors and High Performance Computing

    Science.gov (United States)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  6. Benchmarking: More Aspects of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Ravindrudu, Rahul [Iowa State Univ., Ames, IA (United States)

    2004-01-01

    pattern for the left-looking factorization. The right-looking algorithm performs better for in-core data, but the left-looking algorithm will perform better for out-of-core data due to the reduced I/O operations. Hence the conclusion that out-of-core algorithms will perform better when designed as such from the start. The out-of-core and thread-based computations do not interact in this case, since I/O is not done by the threads. The performance of the thread-based computation does not depend on I/O, as the kernels are BLAS routines that assume all the data to be in memory. This is the reason the out-of-core results and the OpenMP thread results were presented separately and no attempt to combine them was made. In general, the modified HPL performs better with larger block sizes, due to less I/O for the out-of-core part and better cache utilization for the thread-based computation.
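
    To make the out-of-core idea concrete, the sketch below touches a disk-resident matrix one panel at a time via numpy.memmap, so memory use is bounded by the panel size; the panel width plays the role of the block size discussed above. This is a generic illustration under those assumptions, not the modified HPL code.

    # Out-of-core, block-wise processing with numpy.memmap.
    import numpy as np, tempfile, os

    n, block = 4096, 512
    path = os.path.join(tempfile.mkdtemp(), "bigmat.dat")
    a = np.memmap(path, dtype=np.float64, mode="w+", shape=(n, n))
    rng = np.random.default_rng(0)
    for j0 in range(0, n, block):                 # write the matrix panel by panel
        a[:, j0:j0 + block] = rng.random((n, block))
    a.flush()

    # Process panel by panel; only one (n x block) panel is in memory at a time.
    col_norms = np.empty(n)
    for j0 in range(0, n, block):
        panel = np.asarray(a[:, j0:j0 + block])   # read one panel from disk
        col_norms[j0:j0 + block] = np.linalg.norm(panel, axis=0)
    print(col_norms[:4])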

  7. The Principles and Practice of Distributed High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever changing landscape of computing technologies and needs that range today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker: Miron Livny received a B.Sc. degree in Physics and Mat...

  8. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  9. Nuclear Forces and High-Performance Computing: The Perfect Match

    Energy Technology Data Exchange (ETDEWEB)

    Luu, T; Walker-Loud, A

    2009-06-12

    High-performance computing is now enabling the calculation of certain nuclear interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. We briefly describe the state of the field and describe how progress in this field will impact the greater nuclear physics community. We give estimates of computational requirements needed to obtain certain milestones and describe the scientific and computational challenges of this field.

  10. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-11-02

    Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
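
    The control flow of the claim can be illustrated with a small simulation; this is a sketch of the protocol logic only, not the patented DMA implementation, and the Target class, chunk size, and acknowledgement trigger are invented stand-ins.

    # Origin keeps streaming FIFO chunks until the RTS is acknowledged, then
    # transfers the remainder with a single "direct put".
    CHUNK = 4

    def origin_send(data, target):
        target.receive_rts(len(data))          # RTS message
        sent = 0
        while not target.acked and sent < len(data):
            target.fifo_write(data[sent:sent + CHUNK])   # memory FIFO chunks
            sent += CHUNK
        if sent < len(data):
            target.direct_put(data[sent:])     # remaining portion via direct put

    class Target:
        def __init__(self, ack_after_chunks):
            self.buf, self.acked = [], False
            self._ack_after = ack_after_chunks
        def receive_rts(self, nbytes):
            self.expected = nbytes
        def fifo_write(self, chunk):
            self.buf.extend(chunk)
            self._ack_after -= 1
            if self._ack_after <= 0:
                self.acked = True              # acknowledgement of the RTS
        def direct_put(self, rest):
            self.buf.extend(rest)

    t = Target(ack_after_chunks=2)
    origin_send(list(range(20)), t)
    assert t.buf == list(range(20))
    print("delivered", len(t.buf), "elements; acked =", t.acked)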

  11. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  12. Transforming High School Physics with Modeling and Computation

    CERN Document Server

    Aiken, John M

    2013-01-01

    The Engage to Excel (PCAST) report, the National Research Council's Framework for K-12 Science Education, and the Next Generation Science Standards all call for transforming the physics classroom into an environment that teaches students real scientific practices. This work describes the early stages of one such attempt to transform a high school physics classroom. Specifically, a series of model-building and computational modeling exercises were piloted in a ninth grade Physics First classroom. Student use of computation was assessed using a proctored programming assignment, where the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Student views on computation and its link to mechanics was assessed with a written essay and a series of think-aloud interviews. This pilot study shows computation's ability for connecting scientific practice to the high school science classroom.
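
    The proctored assignment described above amounts to a time-stepped model of a ball under gravity, optionally with drag. The sketch below is a plain Python/numpy analogue so it runs without a VPython installation; the drag coefficient and initial conditions are illustrative, not taken from the study.

    # Time-stepped model of a baseball in motion (gravity + quadratic drag).
    import numpy as np

    dt, g = 0.01, np.array([0.0, -9.8])          # time step (s), gravity (m/s^2)
    pos = np.array([0.0, 1.0])                   # initial position (m)
    vel = np.array([30.0, 10.0])                 # initial velocity (m/s)
    b_over_m = 0.005                             # drag coefficient / mass (assumed)

    t = 0.0
    while pos[1] > 0.0:                          # step until the ball lands
        acc = g - b_over_m * np.linalg.norm(vel) * vel   # gravity + drag
        vel = vel + acc * dt                     # simple Euler-style update
        pos = pos + vel * dt
        t += dt
    print(f"range ~ {pos[0]:.1f} m after {t:.2f} s")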

  13. Short-term herbivory has long-term consequences in warmed and ambient high Arctic tundra

    Science.gov (United States)

    Little, Chelsea J.; Cutting, Helen; Alatalo, Juha; Cooper, Elisabeth

    2017-02-01

    Climate change is occurring across the world, with effects varying by ecosystem and region but already occurring quickly in high-latitude and high-altitude regions. Biotic interactions are important in determining ecosystem response to such changes, but few studies have been long-term in nature, especially in the High Arctic. Mesic tundra plots on Svalbard, Norway, were subjected to grazing at two different intensities by captive Barnacle geese from 2003-2005, in a factorial design with warming by Open Top Chambers. Warming manipulations were continued through 2014, when we measured vegetation structure and composition as well as growth and reproduction of three dominant species in the mesic meadow. Significantly more dead vascular plant material was found in warmed compared to ambient plots, regardless of grazing history, but in contrast to many short-term experiments no difference in the amount of living material was found. This has strong implications for nutrient and carbon cycling and could feed back into community productivity. Dominant species showed increased flowering in warmed plots, especially in those plots where grazing had been applied. However, this added sexual reproduction did not translate to substantial shifts in vegetative cover. Forbs and rushes increased slightly in warmed plots regardless of grazing, while the dominant shrub, Salix polaris, generally declined with effects dependent on grazing, and the evergreen shrub Dryas octopetala declined with previous intensive grazing. There were no treatment effects on community diversity or evenness. Thus despite no changes in total live abundance, a typical short-term response to environmental conditions, we found pronounced changes in dead biomass indicating that tundra ecosystem processes respond to medium- to long-term changes in conditions caused by 12 seasons of summer warming. We suggest that while high arctic tundra plant communities are fairly resistant to current levels of climate warming

  14. High Energy Computed Tomographic Inspection of Munitions

    Science.gov (United States)

    2016-11-01

    ... munitions and weapon systems. In many cases, the use of CT is overlooked or discounted due to its lack of use in high throughput production settings.

  15. A Research and Development Strategy for High Performance Computing.

    Science.gov (United States)

    Office of Science and Technology Policy, Washington, DC.

    This report is the result of a systematic review of the status and directions of high performance computing and its relationship to federal research and development. Conducted by the Federal Coordinating Council for Science, Engineering, and Technology (FCCSET), the review involved a series of workshops attended by numerous computer scientists and…

  16. Achieving High Performance with FPGA-Based Computing

    Science.gov (United States)

    Herbordt, Martin C.; VanCourt, Tom; Gu, Yongfeng; Sukhwani, Bharat; Conti, Al; Model, Josh; DiSabello, Doug

    2011-01-01

    Numerous application areas, including bioinformatics and computational biology, demand increasing amounts of processing capability. In many cases, the computation cores and data types are suited to field-programmable gate arrays. The challenge is identifying the design techniques that can extract high performance potential from the FPGA fabric. PMID:21603088

  17. An Introduction to Computing: Content for a High School Course.

    Science.gov (United States)

    Rogers, Jean B.

    A general outline of the topics that might be covered in a computers and computing course for high school students is provided. Topics are listed in the order in which they should be taught, and the relative amount of time to be spent on each topic is suggested. Seven units are included in the course outline: (1) general introduction, (2) using…

  18. Towards using direct methods in seismic tomography: computation of the full resolution matrix using high-performance computing and sparse QR factorization

    Science.gov (United States)

    Bogiatzis, Petros; Ishii, Miaki; Davis, Timothy A.

    2016-05-01

    For more than two decades, the number of data and model parameters in seismic tomography problems has exceeded the available computational resources required for application of direct computational methods, leaving iterative solvers the only option. One disadvantage of the iterative techniques is that the inverse of the matrix that defines the system is not explicitly formed, and as a consequence, the model resolution and covariance matrices cannot be computed. Despite the significant effort in finding computationally affordable approximations of these matrices, challenges remain, and methods such as the checkerboard resolution tests continue to be used. Based upon recent developments in sparse algorithms and high-performance computing resources, we show that direct methods are becoming feasible for large seismic tomography problems. We demonstrate the application of QR factorization in solving the regional P-wave structure and computing the full resolution matrix with 267 520 model parameters.
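
    On a toy dense problem, the role of the QR factorization can be shown directly; the paper works with a sparse factorization of a system with 267 520 parameters, whereas the matrix below is small and random. For the damped least-squares system min ||Gm - d||^2 + lambda^2 ||m||^2, the triangular factor Rf of the stacked matrix [G; lambda*I] satisfies Rf^T Rf = G^T G + lambda^2 I, so the model resolution matrix R = (G^T G + lambda^2 I)^{-1} G^T G follows from two triangular solves.

    # Dense sketch of computing a resolution matrix via QR (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)
    n_data, n_model, lam = 300, 60, 0.5
    G = rng.normal(size=(n_data, n_model))       # toy sensitivity (ray-path) matrix

    A = np.vstack([G, lam * np.eye(n_model)])    # augmented (regularized) system
    _, Rf = np.linalg.qr(A)                      # only the triangular factor is needed

    # Resolution matrix R = Rf^{-1} Rf^{-T} (G^T G), via two solves.
    Rres = np.linalg.solve(Rf, np.linalg.solve(Rf.T, G.T @ G))
    print("trace(R) =", round(np.trace(Rres), 2),
          "(effective number of resolved parameters out of", n_model, ")")

    The diagonal of R indicates how well each model parameter is recovered, which is the information that checkerboard tests only approximate.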

  19. Scientific and high-performance computing at FAIR

    Directory of Open Access Journals (Sweden)

    Kisel Ivan

    2015-01-01

    Full Text Available Future FAIR experiments have to deal with very high input rates and large track multiplicities, and must perform full event reconstruction and selection on-line on a large dedicated computer farm equipped with heterogeneous many-core CPU/GPU compute nodes. Developing efficient and fast algorithms that are optimized for parallel computations is a challenge for the groups of experts dealing with HPC computing. Here we present and discuss the status and perspectives of the data reconstruction and physics analysis software of one of the future FAIR experiments, namely the CBM experiment.

  20. Portability Support for High Performance Computing

    Science.gov (United States)

    Cheng, Doreen Y.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    While a large number of tools have been developed to support application portability, high performance application developers often prefer to use vendor-provided, non-portable programming interfaces. This phenomenon indicates the mismatch between user priorities and tool capabilities. This paper summarizes the results of a user survey and a developer survey. The user survey has revealed the user priorities and resulted in three criteria for evaluating tool support for portability. The developer survey has resulted in the evaluation of portability support and indicated the possibilities and difficulties of improvements.

  1. Resource estimation in high performance medical image computing.

    Science.gov (United States)

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources.
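
    A minimal version of such an estimator can be sketched as follows; this is an illustration of the idea, not the system presented in the paper, and the execution history, power-law fit, and safety margin are assumptions.

    # Fit runtime and peak memory versus input size from past runs, then
    # request resources for a new job with a safety margin.
    import numpy as np

    # Hypothetical history: (input size in MB, runtime in s, peak memory in MB).
    hist = np.array([[100, 62, 900], [250, 160, 2100], [400, 250, 3300],
                     [800, 540, 6500], [1200, 820, 9800]], dtype=float)
    size, runtime, mem = hist.T

    # Log-log linear fits capture typical power-law scaling of both resources.
    rt_fit = np.polyfit(np.log(size), np.log(runtime), 1)
    mem_fit = np.polyfit(np.log(size), np.log(mem), 1)

    def request(new_size_mb, margin=1.5):
        """Predicted (walltime s, memory MB) for a job, inflated by a margin."""
        rt = np.exp(np.polyval(rt_fit, np.log(new_size_mb)))
        m = np.exp(np.polyval(mem_fit, np.log(new_size_mb)))
        return margin * rt, margin * m

    wall, memory = request(600.0)
    print(f"request ~{wall:.0f} s walltime and ~{memory:.0f} MB memory")

    In practice the predictors would also include image dimensions, algorithm options, and the target machine, and the margin would be tuned against the relative costs of under- and over-requesting.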

  2. Resource Estimation in High Performance Medical Image Computing

    Science.gov (United States)

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D.M.

    2015-01-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of ‘jobs’ requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources. PMID:24906466

  3. High performance computing in structural determination by electron cryomicroscopy.

    Science.gov (United States)

    Fernández, J J

    2008-10-01

    Computational advances have significantly contributed to the current role of electron cryomicroscopy (cryoEM) in structural biology. The needs for computational power are constantly growing with the increasing complexity of algorithms and the amount of data needed to push the resolution limits. High performance computing (HPC) is becoming paramount in cryoEM to cope with those computational needs. Since the nineties, different HPC strategies have been proposed for some specific problems in cryoEM and, in fact, some of them are already available in common software packages. Nevertheless, the literature is scattered in the areas of computer science and structural biology. In this communication, the HPC approaches devised for the computation-intensive tasks in cryoEM (single particles and tomography) are retrospectively reviewed and the future trends are discussed. Moreover, the HPC capabilities available in the most common cryoEM packages are surveyed, as an evidence of the importance of HPC in addressing the future challenges.

  4. Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers

    Science.gov (United States)

    Morgan, Philip E.

    2004-01-01

    This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications." The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhancement of an electromagnetics code (CHARGE) to be able to effectively model antenna problems; application of lessons learned in the high-order/spectral solution of swirling 3D jets to the electromagnetics project; transition of a high-order fluids code, FDL3DI, to be able to solve Maxwell's equations using compact differencing; development and demonstration of improved radiation-absorbing boundary conditions for high-order CEM; and extension of the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.

  5. Mastering the Challenge of High-Performance Computing.

    Science.gov (United States)

    Roach, Ronald

    2003-01-01

    Discusses how, just as all of higher education got serious with wiring individual campuses for the Internet, the nation's leading research institutions have initiated "high-performance computing." Describes several such initiatives involving historically black colleges and universities. (EV)

  6. GPU-based high-performance computing for radiation therapy.

    Science.gov (United States)

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B

    2014-02-21

    Recent developments in radiotherapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid developments. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented.

  7. Export Control of High Performance Computing: Analysis and Alternative Strategies

    National Research Council Canada - National Science Library

    Holland, Charles

    2001-01-01

    High performance computing has historically played an important role in the ability of the United States to develop and deploy a wide range of national security capabilities, such as stealth aircraft...

  8. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  9. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  10. After Installation: Ubiquitous Computing and High School Science in Three Experienced, High-Technology Schools

    Science.gov (United States)

    Drayton, Brian; Falk, Joni K.; Stroud, Rena; Hobbs, Kathryn; Hammerman, James

    2010-01-01

    There are few studies of the impact of ubiquitous computing on high school science, and the majority of studies of ubiquitous computing report only on the early stages of implementation. The present study presents data on 3 high schools with carefully elaborated ubiquitous computing systems that have gone through at least one "obsolescence cycle"…

  11. Surety of human elements of high consequence systems: An organic model

    Energy Technology Data Exchange (ETDEWEB)

    FORSYTHE,JAMES C.; WENNER,CAREN A.

    2000-04-25

    Despite extensive safety analysis and application of safety measures, there is a frequent lament, ``Why do we continue to have accidents?'' Two breakdowns are prevalent in risk management and prevention. First, accidents result from human actions that engineers, analysts and management never envisioned and second, controls, intended to preclude/mitigate accident sequences, prove inadequate. This paper addresses the first breakdown, the inability to anticipate scenarios involving human action/inaction. The failure of controls has been addressed in a previous publication (Forsythe and Grose, 1998). Specifically, this paper presents an approach referred to as surety. The objective of this approach is to provide high levels of assurance in situations where potential system failure paths cannot be fully characterized. With regard to human elements of complex systems, traditional approaches to human reliability are not sufficient to attain surety. Consequently, an Organic Model has been developed to account for the organic properties exhibited by engineered systems that result from human involvement in those systems.

  12. High-intensity interval exercise and cerebrovascular health: curiosity, cause, and consequence.

    Science.gov (United States)

    Lucas, Samuel J E; Cotter, James D; Brassard, Patrice; Bailey, Damian M

    2015-06-01

    Exercise is a uniquely effective and pluripotent medicine against several noncommunicable diseases of westernised lifestyles, including protection against neurodegenerative disorders. High-intensity interval exercise training (HIT) is emerging as an effective alternative to current health-related exercise guidelines. Compared with traditional moderate-intensity continuous exercise training, HIT confers equivalent if not indeed superior metabolic, cardiac, and systemic vascular adaptation. Consequently, HIT is being promoted as a more time-efficient and practical approach to optimize health thereby reducing the burden of disease associated with physical inactivity. However, no studies to date have examined the impact of HIT on the cerebrovasculature and corresponding implications for cognitive function. This review critiques the implications of HIT for cerebrovascular function, with a focus on the mechanisms and translational impact for patient health and well-being. It also introduces similarly novel interventions currently under investigation as alternative means of accelerating exercise-induced cerebrovascular adaptation. We highlight a need for studies of the mechanisms and thereby also the optimal dose-response strategies to guide exercise prescription, and for studies to explore alternative approaches to optimize exercise outcomes in brain-related health and disease prevention. From a clinical perspective, interventions that selectively target the aging brain have the potential to prevent stroke and associated neurovascular diseases.

  13. The renal consequences of maternal obesity in offspring are overwhelmed by postnatal high fat diet.

    Directory of Open Access Journals (Sweden)

    Sarah J Glastras

    Full Text Available Developmental programming induced by maternal obesity influences the development of chronic disease in offspring. In the present study, we aimed to determine whether maternal obesity exaggerates obesity-related kidney disease. Female C57BL/6 mice were fed a high-fat diet (HFD) for six weeks prior to mating, during gestation and lactation. Male offspring were weaned to normal chow or HFD. At postnatal Week 8, HFD-fed offspring were administered one dose of streptozotocin (STZ, 100 mg/kg i.p.) or vehicle control. Metabolic parameters and renal functional and structural changes were observed at postnatal Week 32. HFD-fed offspring had increased adiposity, glucose intolerance and hyperlipidaemia, associated with increased albuminuria and serum creatinine levels. Their kidneys displayed structural changes with increased levels of fibrotic, inflammatory and oxidative stress markers. STZ administration did not potentiate the renal effects of HFD. Though maternal obesity had a sustained effect on serum creatinine and oxidative stress markers in lean offspring, the renal consequences of maternal obesity were overwhelmed by the powerful effect of diet-induced obesity. Maternal obesity portends significant risks for metabolic and renal health in adult offspring. However, diet-induced obesity is an overwhelming and potent stimulus for the development of CKD that is not potentiated by maternal obesity.

  14. High-intensity interval exercise and cerebrovascular health: curiosity, cause, and consequence

    Science.gov (United States)

    Lucas, Samuel J E; Cotter, James D; Brassard, Patrice; Bailey, Damian M

    2015-01-01

    Exercise is a uniquely effective and pluripotent medicine against several noncommunicable diseases of westernised lifestyles, including protection against neurodegenerative disorders. High-intensity interval exercise training (HIT) is emerging as an effective alternative to current health-related exercise guidelines. Compared with traditional moderate-intensity continuous exercise training, HIT confers equivalent if not indeed superior metabolic, cardiac, and systemic vascular adaptation. Consequently, HIT is being promoted as a more time-efficient and practical approach to optimize health thereby reducing the burden of disease associated with physical inactivity. However, no studies to date have examined the impact of HIT on the cerebrovasculature and corresponding implications for cognitive function. This review critiques the implications of HIT for cerebrovascular function, with a focus on the mechanisms and translational impact for patient health and well-being. It also introduces similarly novel interventions currently under investigation as alternative means of accelerating exercise-induced cerebrovascular adaptation. We highlight a need for studies of the mechanisms and thereby also the optimal dose-response strategies to guide exercise prescription, and for studies to explore alternative approaches to optimize exercise outcomes in brain-related health and disease prevention. From a clinical perspective, interventions that selectively target the aging brain have the potential to prevent stroke and associated neurovascular diseases. PMID:25833341

  15. Architecture and Programming Models for High Performance Intensive Computation

    Science.gov (United States)

    2016-06-29

    AFRL-AFOSR-VA-TR-2016-0230: final report for the University of Delaware effort (grant FA9550-13-1-0213) on developing an efficient system architecture and software tools for building and running Dynamic Data Driven Application Systems (DDDAS). The foremost...

  16. High Performance Computing (HPC) Innovation Service Portal Pilots Cloud Computing (HPC-ISP Pilot Cloud Computing)

    Science.gov (United States)

    2011-08-01

    ... connection. The login node runs Red Hat Enterprise Linux (RHEL) 5 and has several responsibilities. It manages the Ghost licenses, runs the compute node...

  17. Computational Aeroscience at NAS and Future Trends in High Performance Computing

    Science.gov (United States)

    Cooper, David M.

    1994-01-01

    NASA's Numerical Aerodynamic Simulation (NAS) program provides a unique world class supercomputing capability which is readily accessed by the U.S. aeronautics community and offers them the opportunity to perform high speed computations and simulations for a broad range of aerospace research applications. In addition, the NAS program performs on-going research and advanced technology development to ensure the innovative application of newly emerging technologies to computational fluid dynamics and other important aeroscience disciplines. The NAS system and its interaction with industry are described. Furthermore, the results of an analysis of current and future technologies and their potential impact on high performance computing are also presented.

  18. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, David E; Allan, Benjamin A; Armstrong, Robert C; Bertrand, Felipe; Chiu, Kenneth; Dahlgren, Tamara L; Damevski, Kostadin; Elwasif, Wael R; Epperly, Thomas G; Govindaraju, Madhusudhan; Katz, Daniel S; Kohl, James A; Krishnan, Manoj Kumar; Kumfert, Gary K; Larson, J Walter; Lefantzi, Sophia; Lewis, Michael J; Malony, Allen D; McInnes, Lois C; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G; Ray, Jaideep; Shende, Sameer; Windus, Theresa L; Zhou, Shujia

    2006-07-03

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.
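
    The provides/uses-port style of composition that the CCA generalizes can be illustrated with a deliberately simplified, language-level analogy; the sketch below is not the CCA's SIDL/Babel interface, just a toy in which components declare the ports they provide and use and are wired together without knowing each other's concrete classes.

    # Toy provides/uses-port composition (illustrative only).
    class IntegratorPort:                       # abstract "provides" interface
        def integrate(self, f, a, b): ...

    class MidpointIntegrator(IntegratorPort):   # one interchangeable component
        def integrate(self, f, a, b, n=1000):
            h = (b - a) / n
            return h * sum(f(a + (i + 0.5) * h) for i in range(n))

    class Driver:                               # component that *uses* the port
        def __init__(self):
            self.integrator = None              # filled in by the framework
        def run(self):
            return self.integrator.integrate(lambda x: x * x, 0.0, 1.0)

    # Minimal "framework": connect a provided port to a used port.
    driver, provider = Driver(), MidpointIntegrator()
    driver.integrator = provider                # plug-and-play wiring
    print(driver.run())                         # ~1/3, independent of the provider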

  19. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  20. Future consequences of decreasing marginal production efficiency in the high-yielding dairy cow.

    Science.gov (United States)

    Moallem, U

    2016-04-01

    The objectives were to examine the gross and marginal production efficiencies in high-yielding dairy cows and the future consequences on dairy industry profitability. Data from 2 experiments were used in across-treatments analysis (n=82 mid-lactation multiparous Israeli-Holstein dairy cows). Milk yields, body weights (BW), and dry matter intakes (DMI) were recorded daily. In both experiments, cows were fed a diet containing 16.5 to 16.6% crude protein and net energy for lactation (NEL) at 1.61 Mcal/kg of dry matter (DM). The means of milk yield, BW, DMI, NEL intake, and energy required for maintenance were calculated individually over the whole study, and used to calculate gross and marginal efficiencies. Data were analyzed in 2 ways: (1) simple correlation between variables; and (2) cows were divided into 3 subgroups, designated low, moderate, and high DMI (LDMI, MDMI, and HDMI), according to actual DMI per day: ≤ 26 kg (n=27); >26 through 28.2 kg (n=28); and >28.2 kg (n=27). The phenotypic Pearson correlations among variables were analyzed, and the GLM procedure was used to test differences between subgroups. The relationships between milk and fat-corrected milk yields and the corresponding gross efficiencies were positive, whereas BW and gross production efficiency were negatively correlated. The marginal production efficiency from DM and energy consumed decreased with increasing DMI. The difference between BW gain as predicted by the National Research Council model (2001) and the present measurements increased with increasing DMI (r=0.68). The average calculated energy balances were 1.38, 2.28, and 4.20 Mcal/d (standard error of the mean=0.64) in the LDMI, MDMI, and HDMI groups, respectively. The marginal efficiency for milk yields from DMI or energy consumed was highest in LDMI, intermediate in MDMI, and lowest in HDMI. The predicted BW gains for the whole study period were 22.9, 37.9, and 75.8 kg for the LDMI, MDMI, and HDMI groups, respectively. The

  1. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

    Energy Technology Data Exchange (ETDEWEB)

    Corones, James [Krell Institute

    2013-09-23

    High-End computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly, HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington DC so that mission agency staff, scientists, and industry can come together with White House, Congressional and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data, address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties of applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

  2. Experimental Realization of High-Efficiency Counterfactual Computation.

    Science.gov (United States)

    Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng

    2015-08-21

    Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency was limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. A counterfactual efficiency of up to 85% has been demonstrated in our experiment, opening the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.
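
    The quantum Zeno scaling that such protocols rely on can be illustrated with a few lines of arithmetic; the sketch below uses the standard textbook estimate for a rotation interrupted by N projections and is not a model of the NV-center experiment itself.

      # Textbook Zeno estimate: split a pi/2 rotation into N steps, each followed by
      # a projection; the probability of staying in the initial ('not-running')
      # subspace approaches 1 as N grows.
      import math

      def zeno_survival(n_projections):
          theta = math.pi / (2 * n_projections)
          return math.cos(theta) ** (2 * n_projections)

      for n in (1, 5, 20, 100, 1000):
          print(n, round(zeno_survival(n), 4))
      # survival -> 1 as N grows, which is why the efficiency can approach unity in principle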

  3. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  4. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  5. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  6. Learning Consequences of Mobile-Computing Technologies: Differential Impacts on Integrative Learning and Skill-Focused Learning

    Science.gov (United States)

    Kumi, Richard; Reychav, Iris; Sabherwal, Rajiv

    2016-01-01

    Many educational institutions are integrating mobile-computing technologies (MCT) into the classroom to improve learning outcomes. There is also a growing interest in research to understand how MCT influence learning outcomes. The diversity of results in prior research indicates that computer-mediated learning has different effects on various…

  7. Comparison of Sequential Designs of Computer Experiments in High Dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Kupresanin, A. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Johannesson, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-07-21

    We continue a long line of research in applying the design and analysis of computer experiments to the study of real world systems. The problem we consider is that of fitting a Gaussian process model for a computer model in applications where the simulation output is a function of a high dimensional input vector. Our computer experiments are designed sequentially as we learn about the model. We perform an empirical comparison of the effectiveness and efficiency of several statistical criteria that have been used in sequential experimental designs. The specific application that motivates this work comes from climatology.
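
    A minimal sketch of one commonly used sequential-design criterion, assuming a simple Gaussian process with an RBF kernel and choosing the next run where the predictive variance is largest; this is a generic illustration, not the specific criteria compared in the report.

      # Sequential design sketch: fit a GP to the runs so far, then add the candidate
      # input with the largest posterior variance. The 'simulator' is a stand-in.
      import numpy as np

      def rbf(A, B, length=0.3):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-0.5 * d2 / length**2)

      def next_design_point(X, candidates, noise=1e-6):
          K = rbf(X, X) + noise * np.eye(len(X))
          Ks = rbf(X, candidates)                     # cross-covariances
          Kinv_Ks = np.linalg.solve(K, Ks)
          var = 1.0 - np.sum(Ks * Kinv_Ks, axis=0)    # posterior variance at candidates
          return candidates[np.argmax(var)]

      rng = np.random.default_rng(0)
      simulator = lambda x: np.sin(6 * x[:, 0]) * x[:, 1]     # toy computer model
      X = rng.random((5, 2))                                  # initial design, 2-D inputs
      y = simulator(X)
      for _ in range(10):                                     # sequential augmentation
          cand = rng.random((500, 2))
          x_new = next_design_point(X, cand)
          X = np.vstack([X, x_new])
          y = np.append(y, simulator(x_new[None, :]))
      print("final design size:", len(X))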

  8. Full custom VLSI - A technology for high performance computing

    Science.gov (United States)

    Maki, Gary K.; Whitaker, Sterling R.

    1990-01-01

    Full custom VLSI is presented as a viable technology for addressing the need for the computing capabilities required for the real-time health monitoring of spacecraft systems. This technology presents solutions that cannot be realized with stored program computers or semicustom VLSI; also, it is not dependent on current IC processes. It is argued that, while design time is longer, full custom VLSI produces the fastest and densest VLSI solution and that high density normally also yields low manufacturing costs.

  9. High-performance computer management based on Java

    OpenAIRE

    Sander, Volker; Erwin, Dietmar; Huber, Valentina

    1998-01-01

    Coupling of distributed computer resources connected by a high speed network to one virtual computer is the basic idea of a metacomputer. Access to the metacomputer should be provided by an intuitive graphical user interface (GUI), ideally WWW based. This paper presents a metacomputer architecture using a Java based GUI. The concept will be discussed with regard to security, communication, scalability, and the integration into existing frameworks.

  10. Computer Literacy and the Construct Validity of a High-Stakes Computer-Based Writing Assessment

    Science.gov (United States)

    Jin, Yan; Yan, Ming

    2017-01-01

    One major threat to validity in high-stakes testing is construct-irrelevant variance. In this study we explored whether the transition from a paper-and-pencil to a computer-based test mode in a high-stakes test in China, the College English Test, has brought about variance irrelevant to the construct being assessed in this test. Analyses of the…

  11. Condor-COPASI: high-throughput computing for biochemical networks

    Directory of Open Access Journals (Sweden)

    Kent Edward

    2012-07-01

    Background: Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results: We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions: Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage.
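
    The transparent task splitting described above can be pictured as chunking a large parameter scan into independent jobs; the sketch below only illustrates that idea, with hypothetical file names, and is not Condor-COPASI's actual implementation.

      # Split a parameter scan into independent chunks, one input file per job that a
      # high-throughput pool (e.g. HTCondor) could run in parallel. Illustrative only.
      import itertools, json

      def split_scan(param_grid, chunk_size):
          """Yield lists of parameter combinations, one list per job."""
          names = list(param_grid.keys())
          chunk = []
          for combo in itertools.product(*param_grid.values()):
              chunk.append(dict(zip(names, combo)))
              if len(chunk) == chunk_size:
                  yield chunk
                  chunk = []
          if chunk:
              yield chunk

      grid = {"k1": [0.1, 0.2, 0.5, 1.0], "k2": [1, 2, 5], "k3": [10, 100]}
      for job_id, chunk in enumerate(split_scan(grid, chunk_size=6)):
          with open(f"job_{job_id}.json", "w") as fh:     # hypothetical per-job input file
              json.dump(chunk, fh)
          print(f"job_{job_id}: {len(chunk)} parameter sets")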

  12. Proceedings from the conference on high speed computing: High speed computing and national security

    Energy Technology Data Exchange (ETDEWEB)

    Hirons, K.P.; Vigil, M.; Carlson, R. [comps.]

    1997-07-01

    This meeting covered the following topics: technologies/national needs/policies: past, present and future; information warfare; crisis management/massive data systems; risk assessment/vulnerabilities; Internet law/privacy and rights of society; challenges to effective ASCI programmatic use of 100 TFLOPs systems; and new computing technologies.

  13. Optical interconnection networks for high-performance computing systems.

    Science.gov (United States)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  14. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared with the performance using benchmark software and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the
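
    For readers unfamiliar with the FLOPS metric questioned above, the sketch below shows the usual back-of-the-envelope calculation: time a dense matrix multiply and divide the operation count by the elapsed time. Real benchmarks such as LINPACK are far more careful; this only illustrates the metric.

      # Rough FLOPS estimate from a dense matrix multiply (~2*n^3 operations).
      import time
      import numpy as np

      n = 2000
      a = np.random.rand(n, n)
      b = np.random.rand(n, n)

      t0 = time.perf_counter()
      c = a @ b
      elapsed = time.perf_counter() - t0

      flops = 2.0 * n**3
      print(f"{flops / elapsed / 1e9:.1f} GFLOPS in {elapsed:.3f} s")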

  15. Computer Security: SAHARA - Security As High As Reasonably Achievable

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2015-01-01

    History has shown us time and again that our computer systems, computing services and control systems have digital security deficiencies. Too often we deploy stop-gap solutions and improvised hacks, or we just accept that it is too late to change things.    In my opinion, this blatantly contradicts the professionalism we show in our daily work. Other priorities and time pressure force us to ignore security or to consider it too late to do anything… but we can do better. Just look at how “safety” is dealt with at CERN! “ALARA” (As Low As Reasonably Achievable) is the objective set by the CERN HSE group when considering our individual radiological exposure. Following this paradigm, and shifting it from CERN safety to CERN computer security, would give us “SAHARA”: “Security As High As Reasonably Achievable”. In other words, all possible computer security measures must be applied, so long as ...

  16. High-order hydrodynamic algorithms for exascale computing

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, Nathaniel Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-05

    Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.

  17. High performance flight computer developed for deep space applications

    Science.gov (United States)

    Bunker, Robert L.

    1993-01-01

    The development of an advanced space flight computer for real time embedded deep space applications which embodies the lessons learned on Galileo and modern computer technology is described. The requirements are listed and the design implementation that meets those requirements is described. The development of SPACE-16 (Spaceborne Advanced Computing Engine) (where 16 designates the databus width) was initiated to support the MM2 (Mariner Mark 2) project. The computer is based on a radiation hardened emulation of a modern 32 bit microprocessor and its family of support devices including a high performance floating point accelerator. Additional custom devices, which include a coprocessor to improve input/output capabilities, a memory interface chip, and an additional support chip that provides management of all fault tolerant features, are described. Detailed supporting analyses and rationale which justify specific design and architectural decisions are provided. The six chip types were designed and fabricated. Testing and evaluation of a brassboard was initiated.

  18. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  19. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  20. An ergonomic study on the biomechanical consequences in children, generated by the use of computers at school.

    Science.gov (United States)

    Paraizo, Claudia; de Moraes, Anamaria

    2012-01-01

    This research deals with the influence of computer use in schools on children's posture, from an ergonomic point of view. It tries to identify probable causes of children's early postural constraints, relating them to sedentary behavior and the lack of ergonomic design in schools. The survey involved 186 children, between 8 and 12 years old, students of a private school in Rio de Janeiro, Brazil. A historical and theoretical review of school furniture was conducted, as well as a survey of students and teachers, a computer posture evaluation, an ergonomic evaluation (RULA method), and observations in the computer classroom. The research addressed the students' perception of the furniture used in the classroom while working at the computer, their bodily complaints, the time spent working on the school computer, and the possibility of sedentary behavior. It also deals with the teachers' perception and knowledge regarding ergonomics with reference to schoolroom furniture and its Regulatory Norms (RN). The purpose of the work is to highlight the importance of this knowledge, in view of the possibility of the teachers collaborating in the ergonomic adaptation of the classroom environment and giving informed opinions when this furniture is purchased. A questionnaire was used, and its results showed some discontent among the teachers with the schoolroom furniture, as well as the teachers' scant knowledge of ergonomics. We conclude that although the children showed postural constraints and the school furniture requires major ergonomic intervention, the time children spend on the computer at school is small compared with the time spent at home and is therefore insufficient to be the main cause of the constraints quantified; a study of computer use at home is proposed as a continuation of this research.

  1. Perceived Sexual Benefits of Alcohol Use among Recent High School Graduates: Longitudinal Associations with Drinking Behavior and Consequences

    Science.gov (United States)

    Brady, Sonya S.; Wilkerson, J. Michael; Jones-Webb, Rhonda

    2012-01-01

    In this research study of 153 college-bound students, perceived sexual benefits of alcohol use were associated with greater drinking and related consequences during the senior year of high school and freshman year of college. Perceived benefits predicted drinking outcomes during fall after adjustment for gender, sensation seeking, parental…

  2. Behavioural and physiological consequences of a single social defeat in Roman high- and low-avoidance rats

    NARCIS (Netherlands)

    Meerlo, P; Overkamp, GJF; Koolhaas, JM

    1997-01-01

    The behavioural and physiological consequences of a single social defeat were studied in Roman high-avoidance (RHA) and Roman low-avoidance (RLA) rats, two rat lines with a genetically determined difference in the way of responding to or coping with stressors. Animals were subjected to social defeat

  3. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years

  4. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  5. Personal Decision Factors Considered by Information Technology Executives: Their Impacts on Business Intentions and Consequent Cloud Computing Services Adoption Rates

    Science.gov (United States)

    Smith, Marcus L., Jr.

    2016-01-01

    During its infancy, the cloud computing industry was the province largely of small and medium-sized business customers. Despite their size, these companies required a professionally run, yet economical information technology (IT) operation. These customers used a total value strategy whereby they avoided paying for essential, yet underutilized,…

  6. Dosimetric consequences of the shift towards computed tomography guided target definition and planning for breast conserving radiotherapy

    NARCIS (Netherlands)

    van der Laan, Hans Paul; Dolsma, Willemtje; Maduro, John H; Korevaar, Erik W; Langendijk, Johannes A

    2008-01-01

    Background: The shift from conventional two-dimensional (2D) to three-dimensional (3D) conformal target definition and dose-planning seems to have introduced volumetric as well as geometric changes. The purpose of this study was to compare coverage of computed tomography (CT)-based breast and boost

  7. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts to illustrate system design and performance.

  8. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  9. Computer science of the high performance; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High performance computing is taking shape as a powerful accelerator of the process of innovation, drastically reducing the waiting times for access to results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resources management, or the simulation of complex processes in a wide variety of industries. (Author)

  10. Replica-Based High-Performance Tuple Space Computing

    DEFF Research Database (Denmark)

    Andric, Marina; De Nicola, Rocco; Lluch Lafuente, Alberto

    2015-01-01

    We present the tuple-based coordination language RepliKlaim, which enriches Klaim with primitives for replica-aware coordination. Our overall goal is to offer suitable solutions to the challenging problems of data distribution and locality in large-scale high performance computing. In particular,...

  11. High Performance Computing Modernization Program Kerberos Throughput Test Report

    Science.gov (United States)

    2017-10-26

    … the high computing power of the main supercomputer. Each supercomputer is different in node architecture as well as hardware specifications.

  12. High Performance Computing and Networking for Science--Background Paper.

    Science.gov (United States)

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    The Office of Technology Assessment is conducting an assessment of the effects of new information technologies--including high performance computing, data networking, and mass data archiving--on research and development. This paper offers a view of the issues and their implications for current discussions about Federal supercomputer initiatives…

  13. Seeking Solution: High-Performance Computing for Science. Background Paper.

    Science.gov (United States)

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    This is the second publication from the Office of Technology Assessment's assessment on information technology and research, which was requested by the House Committee on Science and Technology and the Senate Committee on Commerce, Science, and Transportation. The first background paper, "High Performance Computing & Networking for…

  14. High Performance Computing and Communications: Toward a National Information Infrastructure.

    Science.gov (United States)

    Federal Coordinating Council for Science, Engineering and Technology, Washington, DC.

    This report describes the High Performance Computing and Communications (HPCC) initiative of the Federal Coordinating Council for Science, Engineering, and Technology. This program is supportive of and coordinated with the National Information Infrastructure Initiative. Now halfway through its 5-year effort, the HPCC program counts among its…

  15. Democratizing Computer Science Knowledge: Transforming the Face of Computer Science through Public High School Education

    Science.gov (United States)

    Ryoo, Jean J.; Margolis, Jane; Lee, Clifford H.; Sandoval, Cueponcaxochitl D. M.; Goode, Joanna

    2013-01-01

    Despite the fact that computer science (CS) is the driver of technological innovations across all disciplines and aspects of our lives, including participatory media, high school CS too commonly fails to incorporate the perspectives and concerns of low-income students of color. This article describes a partnership program -- Exploring Computer…

  16. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    Science.gov (United States)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly increasing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors, as well as massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will utilize the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon Machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  17. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.
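
    The speed-up figure quoted above follows from a simple ratio of reference to parallel run time; the sketch below also shows Amdahl's law as a reminder of how the serial fraction bounds such gains. The reference time here is back-calculated from the reported factor of 135 and is an assumption, not a figure from the paper.

      # Speed-up arithmetic plus Amdahl's law for context.
      def speedup(t_reference, t_parallel):
          return t_reference / t_parallel

      def amdahl(parallel_fraction, n_processors):
          return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_processors)

      t_parallel = 9.0                       # seconds, reported for the 64-node Cray T3D
      t_reference = 135 * t_parallel         # implied reference time (~20 minutes), assumed
      print("speed-up:", speedup(t_reference, t_parallel))
      print("Amdahl limit, 99% parallel, 64 PEs:", round(amdahl(0.99, 64), 1))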

  18. Topology and Non-Deterministic Polynomial Time Computation : Avoidance of The Misbehaviour of Hub-Free Diagrams and Consequences

    Directory of Open Access Journals (Sweden)

    Anthony Gasperin

    2013-09-01

    To study groups with small Dehn's function, Olshanskii and Sapir developed a new invariant of bipartite chord diagrams and applied it to hub-free realization of S-machines. In this paper we consider this new invariant together with groups constructed from S-machines containing the hub relation. The idea is to study the links between the topology of the asymptotic cones and polynomial time computations. Indeed it is known that the topology of such a metric space depends on diagrams without hubs that do not correspond to the computations of the considered S-machine. This work gives sufficient conditions that avoid this misbehaviour, but as we shall see the method has a significant drawback.

  19. Evaluation of Current Computer Models Applied in the DOE Complex for SAR Analysis of Radiological Dispersion & Consequences

    Energy Technology Data Exchange (ETDEWEB)

    O'Kula, K. R. [Savannah River Site (SRS), Aiken, SC (United States); East, J. M. [Savannah River Site (SRS), Aiken, SC (United States); Weber, A. H. [Savannah River Site (SRS), Aiken, SC (United States); Savino, A. V. [Savannah River Site (SRS), Aiken, SC (United States); Mazzola, C. A. [Savannah River Site (SRS), Aiken, SC (United States)

    2003-01-01

    The evaluation of atmospheric dispersion/radiological dose analysis codes included fifteen models identified in authorization basis safety analysis at DOE facilities, or from regulatory and research agencies where past or current work warranted inclusion of a computer model. All computer codes examined were reviewed using general and specific evaluation criteria developed by the Working Group. The criteria were based on DOE Orders and other regulatory standards and guidance for performing bounding and conservative dose calculations. Included were three categories of criteria: (1) Software Quality/User Interface; (2) Technical Model Adequacy; and (3) Application/Source Term Environment. A consensus-based limited quantitative ranking process was used to establish an order of model preference, both as an overall conclusion and under specific conditions.

  20. [Genotoxic modification of nucleic acid bases and its biological consequences. Review and prospects of experimental and computational investigations]

    Science.gov (United States)

    Poltev, V. I.; Bruskov, V. I.; Shuliupina, N. V.; Rein, R.; Shibata, M.; Ornstein, R.; Miller, J.

    1993-01-01

    A review is presented of experimental and computational data on the influence of genotoxic modification of bases (deamination, alkylation, oxidation) on the structure and biological functioning of nucleic acids. Pathways are discussed for the influence of modification on coding properties of bases, on possible errors of nucleic acid biosynthesis, and on configurations of nucleotide mispairs. The atomic structure of nucleic acid fragments with modified bases and the role of base damages in mutagenesis and carcinogenesis are considered.

  1. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

    The paper deals with various parallel platforms used for high performance computing in the signal processing domain. More precisely, methods exploiting multicore central processing units, such as the message passing interface and OpenMP, are taken into account. The properties of the programming methods are experimentally demonstrated in the application of a fast Fourier transform and a discrete cosine transform, and they are compared with the possibilities of MATLAB's built-in functions and Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementations were compared with CPU-based computing methods and with the possibilities of the Texas Instruments digital signal processing library on C6747 floating-point DSPs. An optimal combination of computing methods in the signal processing domain, together with the implementation of new, fast routines, is proposed as well.
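
    The serial-versus-parallel comparison described above can be mimicked in a few lines; the sketch below times a batch of FFTs serially and with a process pool in Python, standing in for the C-level MPI/OpenMP and DSP implementations actually benchmarked in the paper.

      # Serial vs. process-pool FFT timing; illustrative only.
      import time
      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def fft_block(block):
          return np.abs(np.fft.fft(block))

      if __name__ == "__main__":
          blocks = [np.random.rand(1 << 16) for _ in range(64)]

          t0 = time.perf_counter()
          serial = [fft_block(b) for b in blocks]
          t_serial = time.perf_counter() - t0

          t0 = time.perf_counter()
          with ProcessPoolExecutor() as pool:
              parallel = list(pool.map(fft_block, blocks, chunksize=8))
          t_parallel = time.perf_counter() - t0

          print(f"serial {t_serial:.3f} s, parallel {t_parallel:.3f} s, "
                f"ratio {t_serial / t_parallel:.1f}x")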

  2. Consequence Based Design. An approach for integrating computational collaborative models (Integrated Dynamic Models) in the building design phase

    DEFF Research Database (Denmark)

    Negendahl, Kristoffer

    In the wake of uncompromising requirements on building performance and the current emphasis on building energy consumption and indoor environment, designing buildings has become an increasingly difficult task. However, building performance analyses, including those of building energy consumption … that secures validity and quality assurance with a simulationist while sustaining autonomous control of building design with the building designer. Consequence based design is defined by the specific use of integrated dynamic models. These models include the parametric capabilities of a visual programming tool … case studies. All case studies concern building design projects performed in collaboration with Grontmij and various Danish architectural studios. Different types of integrated dynamic models have been implemented and tested for the individual projects. The findings from each project were used to alter …

  3. Choice & Consequence

    DEFF Research Database (Denmark)

    Khan, Azam

    To move toward environmental sustainability, we propose that a computational approach may be needed due to the complexity of resource production and consumption. While digital sensors and predictive simulation have the potential to help us to minimize resource consumption, the indirect relation between cause and effect in complex systems complicates decision making. To address this issue, we examine the central role that data-driven decision making could play in critical domains such as sustainability or medical treatment. We developed systems for exploratory data analysis and data visualization … of data analysis and instructional interface design, to both simulation systems and decision support interfaces. We hope that projects such as these will help people to understand the link between their choices and the consequences of their decisions.

  4. The design of linear algebra libraries for high performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J. [Tennessee Univ., Knoxville, TN (United States). Dept. of Computer Science]|[Oak Ridge National Lab., TN (United States); Walker, D.W. [Oak Ridge National Lab., TN (United States)

    1993-08-01

    This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct higher-level algorithms, and hide many details of the parallelism from the application developer. The block-cyclic data distribution is described, and adopted as a good way of distributing block-partitioned matrices. Block-partitioned versions of the Cholesky and LU factorizations are presented, and optimization issues associated with the implementation of the LU factorization algorithm on distributed memory concurrent computers are discussed, together with its performance on the Intel Delta system. Finally, approaches to the design of library interfaces are reviewed.
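
    The block-cyclic distribution mentioned above maps block (i, j) of a block-partitioned matrix to process (i mod Pr, j mod Pc) on a Pr x Pc process grid; the sketch below prints that assignment and illustrates only the mapping, not the communication that libraries such as ScaLAPACK build on top of it.

      # 2-D block-cyclic ownership map for a grid of matrix blocks.
      def block_cyclic_owner(i_block, j_block, pr, pc):
          return (i_block % pr, j_block % pc)

      n_blocks = 6          # 6 x 6 grid of matrix blocks
      pr, pc = 2, 3         # 2 x 3 process grid
      for i in range(n_blocks):
          row = [block_cyclic_owner(i, j, pr, pc) for j in range(n_blocks)]
          print(" ".join(f"P{r}{c}" for r, c in row))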

  5. Probability, consequences, and mitigation for lightning strikes of Hanford high level waste tanks

    Energy Technology Data Exchange (ETDEWEB)

    Zach, J.J.

    1996-06-05

    The purpose of this report is to summarize selected lightning issues concerning the Hanford Waste Tanks. These issues include the probability of a lightning discharge striking the area immediately adjacent to a tank including a riser, the consequences of significant energy deposition from a lightning strike in a tank, and mitigating actions that have been or are being taken. The major conclusion of this report is that a lightning strike depositing sufficient energy in a tank to cause an effect on employees or the public is unlikely; but there are insufficient quantitative data on the tanks and waste to prove that. Protection, such as grounding of risers and air terminals on existing light poles, is recommended.
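
    A strike probability of the kind discussed here is commonly estimated by treating strikes to a collection area as a Poisson process; the sketch below uses assumed round numbers for ground flash density and collection area, not values from the Hanford analysis.

      # Poisson estimate of at least one strike over a period; inputs are assumed.
      import math

      def prob_at_least_one_strike(ground_flash_density, area_km2, years):
          expected_strikes = ground_flash_density * area_km2 * years
          return 1.0 - math.exp(-expected_strikes)

      ng = 0.5            # flashes per km^2 per year (assumed)
      area = 0.01         # km^2 collection area around a tank riser (assumed)
      for years in (1, 10, 50):
          print(years, "yr:", round(prob_at_least_one_strike(ng, area, years), 4))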

  6. Probability, consequences, and mitigation for lightning strikes to Hanford site high-level waste tanks

    Energy Technology Data Exchange (ETDEWEB)

    Zach, J.J.

    1996-08-01

    The purpose of this report is to summarize selected lightning issues concerning the Hanford Waste Tanks. These issues include the probability of a lightning discharge striking the area immediately adjacent to a tank including a riser, the consequences of significant energy deposition from a lightning strike in a tank, and mitigating actions that have been or are being taken. The major conclusion of this report is that a lightning strike depositing sufficient energy in a tank to cause an effect on employees or the public is unlikely; but there are insufficient quantitative data on the tanks and waste to prove that. Protection, such as grounding of risers and air terminals on existing light poles, is recommended.

  7. Multislice computed tomography in an asymptomatic high-risk population.

    Science.gov (United States)

    Romeo, Francesco; Leo, Roberto; Clementi, Fabrizio; Razzini, Cinzia; Borzi, Mauro; Martuscelli, Eugenio; Pizzuto, Francesco; Chiricolo, Gaetano; Mehta, Jawahar L

    2007-02-01

    Approximately 50% of all acute coronary syndromes occur in previously asymptomatic patients. This study evaluated the value of multislice computed tomography for early detection of significant coronary artery disease (CAD) in high-risk asymptomatic subjects. One hundred sixty-eight asymptomatic subjects with ≥1 major risk factor (hypertension, diabetes, hypercholesterolemia, family history, or smoking) and an inconclusive or unfeasible noninvasive stress test result (stress electrocardiography, echocardiography, or nuclear scintigraphy) were evaluated in an outpatient setting. After clinical examination and laboratory risk analysis, all patients underwent multislice computed tomographic (MSCT) coronary angiography within 1 week. In all subjects, conventional coronary angiography was also carried out. Multislice computed tomography displayed single-vessel CAD in 16% of patients, 2-vessel CAD in 7%, and 3-vessel CAD in 4%. Selective coronary angiography confirmed the results of multislice computed tomography in 99% of all patients. Sensitivity and specificity of MSCT coronary angiography were 100% and 98%, respectively, with a positive predictive value of 95% and a negative predictive value of 100%. In conclusion, MSCT coronary angiography is an excellent noninvasive technique for early identification of significant CAD in high-risk asymptomatic patients with inconclusive or unfeasible noninvasive stress test results.
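
    The sensitivity, specificity, and predictive values quoted above follow directly from a 2 x 2 confusion matrix; the sketch below shows the arithmetic with illustrative counts chosen to be consistent with the reported percentages, not the study's raw data.

      # Diagnostic metrics from a 2 x 2 confusion matrix; counts are illustrative.
      def diagnostic_metrics(tp, fp, fn, tn):
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv":         tp / (tp + fp),
              "npv":         tn / (tn + fn),
          }

      # Example: 45 true positives, 2 false positives, 0 false negatives, 121 true negatives
      print(diagnostic_metrics(tp=45, fp=2, fn=0, tn=121))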

  8. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and low absolute temperatures are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of making simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP) analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
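
    As a concrete reminder of the Monte Carlo methods surveyed, the sketch below estimates a toy integral and its statistical error from the sample variance; it is purely illustrative and unrelated to any specific HEP workload.

      # Plain Monte Carlo integration with a statistical error estimate.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000
      x = rng.uniform(0.0, np.pi, n)            # sample the integration variable
      f = np.sin(x) ** 2                        # toy integrand
      volume = np.pi

      estimate = volume * f.mean()
      error = volume * f.std(ddof=1) / np.sqrt(n)
      print(f"integral ~ {estimate:.4f} +/- {error:.4f} (exact: {np.pi/2:.4f})")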

  9. CAUSES AND CONSEQUENCES OF THE SCHOOL IN HIGH SCHOOL DROPOUT: CASE UNIVERSIDAD AUTÓNOMA DE SINALOA

    Directory of Open Access Journals (Sweden)

    Rosalva Ruiz-Ramírez

    2014-07-01

    The objective of the present investigation was to establish the personal, economic, and social causes and consequences of high school dropout at the Universidad Autónoma de Sinaloa (UAS). The investigation took place in 2013 in the high school located in the municipality of El Fuerte, Sinaloa, in the academic unit (UA) of San Blas and its extensions La Constancia and Higueras de los Natoches. A mixed approach was used to analyze qualitative and quantitative information; the population studied comprised 18 women and 17 men who dropped out during the 2011-2012 school cycle, ten teachers, four directors, and twenty non-deserting students. The results show that the principal factor in dropping out was personal: getting married and failing courses. The main consequence was economic, highlighting that the poverty cycle is hard to break.

  10. High performance computing for deformable image registration: towards a new paradigm in adaptive radiotherapy.

    Science.gov (United States)

    Samant, Sanjiv S; Xia, Junyi; Muyan-Ozcelik, Pinar; Owens, John D

    2008-08-01

    The advent of readily available temporal imaging or time series volumetric (4D) imaging has become an indispensable component of treatment planning and adaptive radiotherapy (ART) at many radiotherapy centers. Deformable image registration (DIR) is also used in other areas of medical imaging, including motion corrected image reconstruction. Due to long computation time, clinical applications of DIR in radiation therapy and elsewhere have been limited and consequently relegated to offline analysis. With the recent advances in hardware and software, graphics processing unit (GPU) based computing is an emerging technology for general purpose computation, including DIR, and is suitable for highly parallelized computing. However, traditional general purpose computation on the GPU is limited because of the constraints of the available programming platforms. As well, compared to CPU programming, the GPU currently has reduced dedicated processor memory, which can limit the useful working data set for parallelized processing. We present an implementation of the demons algorithm using the NVIDIA 8800 GTX GPU and the new CUDA programming language. The GPU performance will be compared with single threading and multithreading CPU implementations on an Intel dual core 2.4 GHz CPU using the C programming language. CUDA provides a C-like language programming interface, and allows for direct access to the highly parallel compute units in the GPU. Comparisons for volumetric clinical lung images acquired using 4DCT were carried out. Computation time for 100 iterations in the range of 1.8-13.5 s was observed for the GPU with image size ranging from 2.0 x 10(6) to 14.2 x 10(6) pixels. The GPU registration was 55-61 times faster than the CPU for the single threading implementation, and 34-39 times faster for the multithreading implementation. For CPU based computing, the computational time generally has a linear dependence on image size for medical imaging data. Computational efficiency is
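
    A demons-style registration step of the kind accelerated in the paper can be written in a few lines of NumPy; the 1-D sketch below pushes a displacement field along the image gradient scaled by the intensity mismatch, with crude averaging standing in for the Gaussian regularisation and multi-resolution machinery of a real implementation.

      # 1-D demons-style registration sketch; illustrative, not the paper's code.
      import numpy as np

      x = np.linspace(0.0, 1.0, 400)
      static = np.exp(-((x - 0.50) / 0.08) ** 2)      # fixed image f
      moving = np.exp(-((x - 0.55) / 0.08) ** 2)      # moving image m, shifted copy of f

      u = np.zeros_like(x)                            # displacement field to estimate
      for _ in range(200):
          warped = np.interp(x + u, x, moving)        # m warped by the current field
          diff = warped - static
          grad = np.gradient(warped, x)
          u -= diff * grad / (grad ** 2 + diff ** 2 + 1e-9)     # normalised force
          u = np.convolve(u, np.ones(7) / 7, mode="same")       # smooth the field

      final_error = np.abs(np.interp(x + u, x, moving) - static).mean()
      print("estimated shift near the peak:", round(float(u[np.argmax(static)]), 3),
            "(images were constructed with a 0.05 shift)")
      print("mean absolute mismatch after registration:", round(float(final_error), 4))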

  11. High performance transcription factor-DNA docking with GPU computing.

    Science.gov (United States)

    Wu, Jiadong; Hong, Bo; Takeda, Takako; Guo, Jun-Tao

    2012-06-21

    Protein-DNA docking is a very challenging problem in structural bioinformatics and has important implications in a number of applications, such as structure-based prediction of transcription factor binding sites and rational drug design. Protein-DNA docking is very computationally demanding due to the high cost of energy calculation and the statistical nature of conformational sampling algorithms. More importantly, experiments show that the docking quality depends on the coverage of the conformational sampling space. It is therefore desirable to accelerate the computation of the docking algorithm, not only to reduce computing time, but also to improve docking quality. In an attempt to accelerate the sampling process and to improve the docking performance, we developed a graphics processing unit (GPU)-based protein-DNA docking algorithm. The algorithm employs a potential-based energy function to describe the binding affinity of a protein-DNA pair, and integrates Monte-Carlo simulation and a simulated annealing method to search through the conformational space. Algorithmic techniques were developed to improve the computation efficiency and scalability on GPU-based high performance computing systems. The effectiveness of our approach is tested on a non-redundant set of 75 TF-DNA complexes and a newly developed TF-DNA docking benchmark. We demonstrated that the GPU-based docking algorithm can significantly accelerate the simulation process and thereby improve the chance of finding near-native TF-DNA complex structures. This study also suggests that further improvement in protein-DNA docking research would require efforts from two integral aspects: improvement in computation efficiency and energy function design. We present a high performance computing approach for improving the prediction accuracy of protein-DNA docking. The GPU-based docking algorithm accelerates the search of the conformational space and thus increases the chance of finding more near
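
    The Monte-Carlo/simulated-annealing search at the heart of such docking algorithms reduces to a propose-score-accept loop; the sketch below uses a toy one-dimensional energy function rather than a real protein-DNA potential.

      # Generic simulated-annealing loop with the Metropolis acceptance test.
      import math, random

      def anneal(energy, state, steps=5000, t_start=5.0, t_end=0.01):
          best = state
          for k in range(steps):
              t = t_start * (t_end / t_start) ** (k / (steps - 1))   # geometric cooling
              candidate = state + random.gauss(0.0, 0.5)             # perturb the 'pose'
              d_e = energy(candidate) - energy(state)
              if d_e < 0 or random.random() < math.exp(-d_e / t):    # Metropolis test
                  state = candidate
                  if energy(state) < energy(best):
                      best = state
          return best

      toy_energy = lambda x: (x - 2.0) ** 2 + math.sin(5 * x)        # rugged 1-D landscape
      print("best state found:", round(anneal(toy_energy, state=-5.0), 3))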

  12. The Consequences of Commercialization Choices for New Entrants in High-Tech Industries: A Venture Emergence Perspective

    DEFF Research Database (Denmark)

    Giones, Ferran; Gurses, Kerem

    for these different markets. We test our hypotheses on a longitudinal dataset of 453 new firms started in 2004 in different high-tech industries in the US. We find that technology and human capital resources favor the adoption of alternative commercialization strategies; nevertheless, we do not observe significant...... differences in the venture emergence or survival likelihood. Our findings offer a closer view of the venture emergence process of new firms, clarifying the causes and consequences of the technology commercialization choices....

  13. RNA secondary structure prediction using highly parallel computers.

    Science.gov (United States)

    Nakaya, A; Yamamoto, K; Yonezawa, A

    1995-12-01

    An RNA secondary structure prediction method using a highly parallel computer is reported. We focus on finding thermodynamically stable structures of a single-stranded RNA molecule. Our approach is based on a parallel combinatorial method which calculates the free energy of a molecule as the sum of the free energies of all the physically possible hydrogen bonds. Our parallel algorithm finds many highly stable structures all at once, while most of the conventional prediction methods find only the most stable structure. The important idea in our algorithm is search tree pruning, with dynamic load balancing across the processor elements in a parallel computer. Software tools for visualization and classification of secondary structures are also presented using the sequence of cadang-cadang coconut viroid as an example. Our software system runs on CM-5.
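
    For orientation, the sketch below solves a much simpler objective than the thermodynamic model above: a Nussinov-style dynamic program that maximizes the number of nested base pairs. It only illustrates the combinatorial structure of the problem and deliberately omits the free-energy model, the enumeration of many suboptimal structures and the parallel search-tree pruning that the paper is actually about.

        from functools import lru_cache

        PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

        def max_pairs(seq, min_loop=3):
            """Maximum number of nested base pairs (Nussinov recursion)."""
            @lru_cache(maxsize=None)
            def best(i, j):
                if j - i <= min_loop:                  # hairpin loops need a minimum size
                    return 0
                score = best(i, j - 1)                 # option 1: position j stays unpaired
                for k in range(i, j - min_loop):       # option 2: j pairs with some k
                    if (seq[k], seq[j]) in PAIRS:
                        score = max(score, best(i, k - 1) + 1 + best(k + 1, j - 1))
                return score
            return best(0, len(seq) - 1)

        print(max_pairs("GGGAAAUCC"))                  # maximum pair count for a toy sequence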

  14. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures: machines such as Cori Phase 1&2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  15. Opportunities and challenges of high-performance computing in chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Guest, M.F.; Kendall, R.A.; Nichols, J.A. [eds.] [and others

    1995-06-01

    The field of high-performance computing is developing at an extremely rapid pace. Massively parallel computers offering orders-of-magnitude increases in performance are under development by all the major computer vendors. Many sites now have production facilities that include massively parallel hardware. Molecular modeling methodologies (both quantum and classical) are also advancing at a brisk pace. The transition of molecular modeling software to a massively parallel computing environment offers many exciting opportunities, such as the accurate treatment of larger, more complex molecular systems in routine fashion, and a viable, cost-effective route to study physical, biological, and chemical 'grand challenge' problems that are impractical on traditional vector supercomputers. This will have a broad effect on all areas of basic chemical science at academic research institutions and chemical, petroleum, and pharmaceutical industries in the United States, as well as chemical waste and environmental remediation processes. But this transition also poses significant challenges: architectural issues (SIMD, MIMD, local memory, global memory, etc.) remain poorly understood and software development tools (compilers, debuggers, performance monitors, etc.) are not well developed. In addition, researchers who understand and wish to pursue the benefits offered by massively parallel computing are often hindered by lack of expertise, hardware, and/or information at their site. A conference and workshop organized to focus on these issues was held at the National Institutes of Health, Bethesda, Maryland (February 1993). This report is the culmination of the organized workshop. The main conclusion: a drastic acceleration in the present rate of progress is required for the chemistry community to be positioned to exploit fully the emerging class of Teraflop computers, even allowing for the significant work to date by the community in developing software for parallel architectures.

  16. Intel: High Throughput Computing Collaboration: A CERN openlab / Intel collaboration

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPU via Intel's OmniPath fabric are the key elements in this project.

  17. Computationally Designed Oligomers for High Contrast Black Electrochromic Polymers

    Science.gov (United States)

    2017-05-05

    AFRL-AFOSR-VA-TR-2017-0097: Computationally Designed Oligomers for High Contrast Black Electrochromic Polymers. Aimee Tomlinson, University of North... Grant FA9550-15-1-0181. The designed polymers would have a nearly black neutral state and, upon oxidation, little to no tailing from the near IR, thereby guaranteeing nearly a

  18. The high-throughput highway to computational materials design.

    Science.gov (United States)

    Curtarolo, Stefano; Hart, Gus L W; Nardelli, Marco Buongiorno; Mingo, Natalio; Sanvito, Stefano; Levy, Ohad

    2013-03-01

    High-throughput computational materials design is an emerging area of materials science. By combining advanced thermodynamic and electronic-structure methods with intelligent data mining and database construction, and exploiting the power of current supercomputer architectures, scientists generate, manage and analyse enormous data repositories for the discovery of novel materials. In this Review we provide a current snapshot of this rapidly evolving field, and highlight the challenges and opportunities that lie ahead.
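
    Conceptually, the pipeline described above reduces to generating candidate structures, computing descriptors for each with thermodynamic and electronic-structure codes, storing the results in a queryable repository and filtering for target properties. The sketch below mimics only that final screening step, with invented property names and thresholds.

        # Hypothetical records, as they might come out of an ab initio workflow database.
        candidates = [
            {"formula": "compound_A", "formation_energy": -1.9, "band_gap": 1.4},
            {"formula": "compound_B", "formation_energy": +0.3, "band_gap": 2.1},
            {"formula": "compound_C", "formation_energy": -2.4, "band_gap": 0.0},
        ]

        def screen(records, max_e_form=-0.5, gap_window=(1.0, 2.0)):
            """Keep low-formation-energy candidates whose band gap lies in a target window."""
            lo, hi = gap_window
            return [r for r in records
                    if r["formation_energy"] <= max_e_form and lo <= r["band_gap"] <= hi]

        for hit in screen(candidates):
            print(hit["formula"])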

  19. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  20. High-Throughput Neuroimaging-Genetics Computational Infrastructure

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2014-04-01

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate and disseminate novel scientific methods, computational resources and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval and aggregation. Computational processing involves the necessary software, hardware and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical and phenotypic data and meta-data. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. A unique feature of this architecture is the Pipeline environment, which integrates data management, processing, transfer and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer’s and Parkinson’s data, we provide several examples of translational applications using this infrastructure.

  1. An integrated impact assessment and weighting methodology: evaluation of the environmental consequences of computer display technology substitution.

    Science.gov (United States)

    Zhou, Xiaoying; Schoenung, Julie M

    2007-04-01

    Computer display technology is currently in a state of transition, as the traditional technology of cathode ray tubes is being replaced by liquid crystal display flat-panel technology. Technology substitution and process innovation require the evaluation of the trade-offs among environmental impact, cost, and engineering performance attributes. General impact assessment methodologies, decision analysis and management tools, and optimization methods commonly used in engineering cannot efficiently address the issues needed for such evaluation. The conventional Life Cycle Assessment (LCA) process often generates results that can be subject to multiple interpretations, although the advantages of the LCA concept and framework are widely recognized. In the present work, the LCA concept is integrated with Quality Function Deployment (QFD), a popular industrial quality management tool, which is used as the framework for the development of our integrated model. The problem of weighting is addressed by using pairwise comparison of stakeholder preferences. Thus, this paper presents a new integrated analytical approach, Integrated Industrial Ecology Function Deployment (I2-EFD), to assess the environmental behavior of alternative technologies in correlation with their performance and economic characteristics. Computer display technology is used as the case study to further develop our methodology through the modification and integration of various quality management tools (e.g., process mapping, prioritization matrix) and statistical methods (e.g., multi-attribute analysis, cluster analysis). Life cycle thinking provides the foundation for our methodology, as we utilize a published LCA report, which stopped at the characterization step, as our starting point. Further, we evaluate the validity and feasibility of our methodology by considering uncertainty and conducting sensitivity analysis.
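
    The pairwise-comparison weighting mentioned above can be realized in several ways; the sketch below uses the common row-geometric-mean (AHP-style) reduction of a reciprocal comparison matrix to a normalized weight vector. The 3x3 matrix of stakeholder judgements is invented for illustration and is not taken from the paper.

        import numpy as np

        def weights_from_pairwise(A):
            """Normalized priority weights from a reciprocal pairwise-comparison
            matrix, using the row geometric-mean method."""
            A = np.asarray(A, dtype=float)
            gm = A.prod(axis=1) ** (1.0 / A.shape[1])
            return gm / gm.sum()

        # Example judgements: environment 3x as important as cost, 5x as performance, etc.
        A = [[1.0, 3.0, 5.0],
             [1/3., 1.0, 2.0],
             [1/5., 1/2., 1.0]]
        print(weights_from_pairwise(A))        # roughly [0.65, 0.23, 0.12]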

  2. High-Precision Computation: Mathematical Physics and Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, D. H.; Barrio, R.; Borwein, J. M.

    2010-04-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
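
    As a small illustration of why such packages matter (using the mpmath Python library as a stand-in for the high-precision software surveyed here), the snippet below evaluates a badly cancelling alternating series in ordinary double precision and again at 50 significant digits.

        import math
        from mpmath import mp, mpf

        def exp_series(x, terms=200, one=1.0):
            """Naive Taylor series for exp(x); cancels catastrophically for large negative x."""
            total, term = 0 * one, one
            for k in range(1, terms):
                total += term
                term = term * x / k
            return total

        print("double precision:", exp_series(-30.0))            # dominated by rounding noise
        mp.dps = 50                                              # 50 significant digits
        print("50-digit mpmath :", exp_series(mpf(-30), one=mpf(1)))
        print("reference       :", math.exp(-30.0))              # ~9.36e-14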

  3. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  4. Integrating reconfigurable hardware-based grid for high performance computing.

    Science.gov (United States)

    Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos

    2015-01-01

    FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the ease and flexibility of the design process with fast iterations between consecutive versions are examples of benefits obtained with their use. However, there are still some difficulties when using reconfigurable platforms as accelerators that need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems in distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. Besides, a set of services designed to facilitate the application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for deployment of high performance distributed applications, simplifying the development process.

  6. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  7. Symbolic computation and its application to high energy physics

    CERN Document Server

    Hearn, A C

    1981-01-01

    Reviews the present state of the field of algebraic computation and its potential for problem solving in high energy physics and related areas. The author begins with a brief description of the available systems and examines the data objects which they consider. As an example of the facilities which these systems can offer, the author then considers the problem of analytic integration, since this is so fundamental to many of the calculational techniques used by high energy physicists. Finally, he studies the implications which the current developments in hardware technology hold for scientific problem solving. (20 refs).

  8. Airfoil noise computation using high-order schemes

    DEFF Research Database (Denmark)

    Zhu, Wei Jun; Shen, Wen Zhong; Sørensen, Jens Nørkær

    2007-01-01

    High-order finite difference schemes with at least 4th-order spatial accuracy are used to simulate aerodynamically generated noise. The aeroacoustic solver with 4th-order up to 8th-order accuracy is implemented into the in-house flow solver, EllipSys2D/3D. Dispersion-Relation-Preserving (DRP) finite difference schemes and optimized high-order compact finite difference schemes are applied for acoustic computation. Acoustic equations are derived using a so-called splitting technique by separating the compressible NS equations into viscous (flow equation) and inviscid (acoustic equation) parts......
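
    To make "at least 4th-order spatial accuracy" concrete, the sketch below applies the standard 4th-order central-difference first-derivative stencil to a periodic test function and reports the error. It is a generic illustration, not the DRP or optimized compact schemes implemented in EllipSys2D/3D.

        import numpy as np

        def ddx_central4(f, h):
            """4th-order central difference on a periodic grid:
            f'(x_i) ~ (-f[i+2] + 8 f[i+1] - 8 f[i-1] + f[i-2]) / (12 h)."""
            return (-np.roll(f, -2) + 8 * np.roll(f, -1)
                    - 8 * np.roll(f, 1) + np.roll(f, 2)) / (12.0 * h)

        n = 64
        x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        h = x[1] - x[0]
        err = np.max(np.abs(ddx_central4(np.sin(x), h) - np.cos(x)))
        print(f"max error on {n} points: {err:.2e}")   # shrinks ~16x each time n is doubled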

  9. Rapid indirect trajectory optimization on highly parallel computing architectures

    Science.gov (United States)

    Antony, Thomas

    Trajectory optimization is a field which can benefit greatly from the advantages offered by parallel computing. The current state-of-the-art in trajectory optimization focuses on the use of direct optimization methods, such as the pseudo-spectral method. These methods are favored due to their ease of implementation and large convergence regions while indirect methods have largely been ignored in the literature in the past decade except for specific applications in astrodynamics. It has been shown that the shortcomings conventionally associated with indirect methods can be overcome by the use of a continuation method in which complex trajectory solutions are obtained by solving a sequence of progressively difficult optimization problems. High performance computing hardware is trending towards more parallel architectures as opposed to powerful single-core processors. Graphics Processing Units (GPU), which were originally developed for 3D graphics rendering have gained popularity in the past decade as high-performance, programmable parallel processors. The Compute Unified Device Architecture (CUDA) framework, a parallel computing architecture and programming model developed by NVIDIA, is one of the most widely used platforms in GPU computing. GPUs have been applied to a wide range of fields that require the solution of complex, computationally demanding problems. A GPU-accelerated indirect trajectory optimization methodology which uses the multiple shooting method and continuation is developed using the CUDA platform. The various algorithmic optimizations used to exploit the parallelism inherent in the indirect shooting method are described. The resulting rapid optimal control framework enables the construction of high quality optimal trajectories that satisfy problem-specific constraints and fully satisfy the necessary conditions of optimality. The benefits of the framework are highlighted by construction of maximum terminal velocity trajectories for a hypothetical
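
    A minimal single-shooting-with-continuation sketch (CPU-only, using SciPy, with the classic Troesch boundary-value problem standing in for an entry trajectory) is shown below. It illustrates only the idea of solving a sequence of progressively harder problems and warm-starting each from the previous solution, not the GPU multiple-shooting framework developed in this work.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import fsolve

        def shoot(slope, k):
            """Residual y(1) - 1 for the Troesch problem y'' = k*sinh(k*y), y(0) = 0."""
            s0 = float(np.ravel(slope)[0])          # fsolve may pass a length-1 array
            rhs = lambda t, y: [y[1], k * np.sinh(k * y[0])]
            sol = solve_ivp(rhs, (0.0, 1.0), [0.0, s0], rtol=1e-9, atol=1e-12)
            return sol.y[0, -1] - 1.0

        # Continuation: ramp the stiffness parameter k, warm-starting each solve
        # with the initial slope found for the previous (easier) problem.
        guess = 1.0
        for k in np.linspace(0.5, 2.0, 7):
            guess = fsolve(lambda s: shoot(s, k), guess)[0]
            print(f"k = {k:4.2f}   y'(0) = {guess:.6f}")   # k = 2.00 should give ~0.52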

  10. The subthalamic nucleus keeps you high on emotion: behavioral consequences of its inactivation

    Directory of Open Access Journals (Sweden)

    Yann ePelloux

    2014-12-01

    The subthalamic nucleus (STN) belongs to the basal ganglia and is the current target for the surgical treatment of neurological and psychiatric disorders such as Parkinson’s disease (PD) and obsessive-compulsive disorder, as well as a proposed site for the treatment of addiction. It is therefore very important to understand its functions in order to anticipate and prevent possible side-effects in patients. Although the involvement of the STN is well documented in motor, cognitive and motivational processes, less is known regarding emotional processes. Here we have investigated the direct consequences of STN inactivation by excitotoxic lesions on emotional processing and reinforcement in the rat. We used various behavioral procedures to assess affect for neutral, positive and negative reinforcers in STN-lesioned rats. STN lesions reduced affective responses for positive (sweet solutions) and negative (electric foot shock, lithium chloride-induced sickness) reinforcers, while they had no effect on responses for a more neutral reinforcer (novelty-induced place preference). Furthermore, when given the choice between saccharin, a sweet but non-caloric solution, and glucose, a blander but caloric solution, STN-lesioned animals preferred glucose over saccharin, in contrast to sham animals, which preferred saccharin. Taken altogether, these results reveal that the STN plays a critical role in emotional processing. These results, in line with some clinical observations in PD patients subjected to STN surgery, suggest possible emotional side-effects of treatments targeting the STN. They also suggest that the increased motivation for sucrose previously reported cannot be due to increased pleasure, but could be responsible for the decreased motivation for cocaine reported after STN inactivation.

  11. High Risk Behaviors in Marine Mammals: Linking Behavioral Responses to Anthropogenic Disturbance to Biological Consequences

    Science.gov (United States)

    2015-09-30

    physiological costs and potential risks of three common responses by cetaceans to oceanic noise: 1) high-speed swimming, 2) elevated stroke frequencies, and... costs and risks of high-speed behaviors. Recognizing the importance of the mammalian flight response to novel stimuli, we are conducting the first... rate will be compared for swimming and diving specialists including dolphins, beluga whales, narwhals and foraging Weddell seals. 3. Determine

  12. Nutritional and Metabolic Consequences of Feeding High-Fiber Diets to Swine: A Review

    OpenAIRE

    Atta K. Agyekum; C. Martin Nyachoti

    2017-01-01

    At present, substantial amounts of low-cost, fibrous co-products are incorporated into pig diets to reduce the cost of raising swine. However, diets that are rich in fiber are of low nutritive value because pigs cannot degrade dietary fiber. In addition, high-fiber diets have been associated with reduced nutrient utilization and pig performance. However, recent reports are often contradictory and the negative effects of high-fiber diets are influenced by the fiber source, type, and inclusion ...

  13. The consequences of the use of concretes with high alumina cement in Madrid

    OpenAIRE

    Macías Hidalgo-Saavedra, Fernando

    1992-01-01

    This article presents, in the first place, the initiative of the Madrid Town Council (Ayuntamiento de Madrid) to study the dwellings built with high alumina cement, comparing the case of Madrid with those of other Spanish municipalities. The article also includes one particular case, that of the Vicente Calderón Stadium (Madrid), where we can see what was done after the detection of high alumina cement in prestressed precast beams of the inclined slabs which supported the tiers of the...

  14. FPGAs in High Performance Computing: Results from Two LDRD Projects.

    Energy Technology Data Exchange (ETDEWEB)

    Underwood, Keith D; Ulmer, Craig D.; Thompson, David; Hemmert, Karl Scott

    2006-11-01

    Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to deliver order-of-magnitude performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).

  15. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Ten federal agencies, in collaboration with scientists and managers from US industry, universities, and laboratories, have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  16. Scout: high-performance heterogeneous computing made simple

    Energy Technology Data Exchange (ETDEWEB)

    Jablin, James [Los Alamos National Laboratory; Mc Cormick, Patrick [Los Alamos National Laboratory; Herlihy, Maurice [BROWN UNIV.

    2011-01-26

    Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.

  17. Simple, Parallel, High-Performance Virtual Machines for Extreme Computations

    CERN Document Server

    Nejad, Bijan Chokoufe; Reuter, Jürgen

    2014-01-01

    We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and present specifically a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator, O'Mega. Furthermore, this approach allows one to formulate the parallel computation of a single phase space point in a simple and obvious way. We analyze the scaling behaviour with multiple threads, as well as the benefits and drawbacks introduced by this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and in general has runtimes of the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte sizes, new...
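
    To show what "evaluating byte code in a virtual machine" means in practice, here is a minimal stack-based interpreter, written in Python rather than the Fortran/C of the paper and using an invented four-instruction set instead of the O'Mega byte code:

        def run(bytecode, env):
            """Evaluate a postfix instruction stream on a value stack.
            Instructions: ('push', const), ('load', name), ('add',), ('mul',)."""
            stack = []
            for op, *args in bytecode:
                if op == "push":
                    stack.append(args[0])
                elif op == "load":
                    stack.append(env[args[0]])
                elif op == "add":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)
                elif op == "mul":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a * b)
                else:
                    raise ValueError(f"unknown opcode {op!r}")
            return stack.pop()

        # Byte code for (x + 2) * y, assembled once and then evaluated many times.
        code = [("load", "x"), ("push", 2.0), ("add",), ("load", "y"), ("mul",)]
        print(run(code, {"x": 3.0, "y": 4.0}))   # -> 20.0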

  18. High performance computing in science and engineering '09: transactions of the High Performance Computing Center, Stuttgart (HLRS) 2009

    National Research Council Canada - National Science Library

    Nagel, Wolfgang E; Kröner, Dietmar; Resch, Michael

    2010-01-01

    ...), NIC/JSC (Jülich), and LRZ (Munich). As part of that strategic initiative, NIC/JSC already installed the first phase of the GCS HPC Tier-0 resources in May 2009, an IBM Blue Gene/P with roughly 300,000 cores, this time in Jülich. With that, the GCS provides the most powerful high-performance computing infrastructure in Europe alread...

  19. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect its computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  20. Developing a novel hierarchical approach for multiscale structural reliability predictions for ultra-high consequence applications

    Energy Technology Data Exchange (ETDEWEB)

    Emery, John M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Coffin, Peter [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Robbins, Brian A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carroll, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Field, Richard V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jeremy Yoo, Yung Suk [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kacher, Josh [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    Microstructural variabilities are among the predominant sources of uncertainty in structural performance and reliability. We seek to develop efficient algorithms for multiscale calculations for polycrystalline alloys such as aluminum alloy 6061-T6 in environments where ductile fracture is the dominant failure mode. Our approach employs concurrent multiscale methods, but does not focus on their development. They are a necessary but not sufficient ingredient to multiscale reliability predictions. We have focused on how to efficiently use concurrent models for forward propagation because practical applications cannot include fine-scale details throughout the problem domain due to exorbitant computational demand. Our approach begins with a low-fidelity prediction at the engineering scale that is subsequently refined with multiscale simulation. The results presented in this report focus on plasticity and damage at the meso-scale, efforts to expedite Monte Carlo simulation with microstructural considerations, modeling aspects regarding geometric representation of grains and second-phase particles, and contrasting algorithms for scale coupling.
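
    As a cartoon of the "expedite Monte Carlo simulation" goal, the sketch below estimates a failure probability by sampling a lognormal material strength (standing in for microstructure-driven variability) against a fixed applied stress. The distribution parameters are invented, and the structural analysis is reduced to a single strength-versus-load comparison.

        import numpy as np

        rng = np.random.default_rng(0)

        def failure_probability(n_samples=1_000_000, applied_stress=200.0):
            """Crude Monte Carlo estimate of P(strength < applied stress)."""
            # Lognormal strength: median ~300 (placeholder units and scatter).
            strength = rng.lognormal(mean=np.log(300.0), sigma=0.15, size=n_samples)
            failures = np.count_nonzero(strength < applied_stress)
            p = failures / n_samples
            std_err = np.sqrt(p * (1.0 - p) / n_samples)
            return p, std_err

        p, se = failure_probability()
        print(f"P_f ~ {p:.2e} +/- {se:.1e}")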

  1. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  2. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implications for security, (5) Digital rights management, and (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  3. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  4. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  5. Clinical relevance of computed tomography under emergency conditions. Diagnostic accuracy, therapeutical consequences; Klinische Relevanz der Computertomographie unter Notdienstbedingungen. Diagnostische Treffsicherheit, therapeutische Konsequenzen

    Energy Technology Data Exchange (ETDEWEB)

    Weber, C.; Jensen, F.; Wedegaertner, U.; Adam, G. [Universitaetskrankenhaus Eppendorf, Hamburg (Germany). Radiologisches Zentrum, Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie

    2004-01-01

    Purpose: To evaluate the diagnostic accuracy and therapeutic consequences of computed tomography performed on an emergency basis in a primary care hospital. Material and Methods: In 418 patients, 463 computed tomographies (thorax, abdomen, pelvis, spine, aorta, neck and extremities) were performed within 12 months, providing 999 diagnoses. The computed tomography diagnoses were retrospectively evaluated and correlated with surgery and discharge diagnoses. Therapeutic consequences were analyzed and allocated to a time period of < 36 h (urgent) or ≥ 36-72 h (elective). Average age was 49 (1-94) years (41% female and 59% male). The discharge diagnosis was defined as the gold standard, provided that it was supported by clinical, blood chemical, diagnostic and, where available, surgical data. Results: 176 of the 999 diagnoses (18%) were classified as 'noncorrelatable'. Of the 823 correlated diagnoses, 431 were true positive, 14 false positive, 66 false negative and 312 true negative. Sensitivity, specificity and diagnostic accuracy of computed tomography were 87, 96 and 90%, respectively. Computed tomography had therapeutic consequences (surgery, drainage, puncture, reposition, thrombolytic therapy, chemotherapy, bronchoscopy, endoscopy, percutaneous transluminal angioplasty, coiling, etc.) in 57% and no direct therapeutic interventions in 43%. Computed tomography excluded the suspected diagnosis in 36% and resulted in a conservative therapeutic regimen in 7%. Surgery was performed on 134 of the 418 patients (32%) who underwent computed tomography, with the surgery urgent in 71 (17%) and elective in 63 (15%) of the 418 patients. (orig.) [German original, translated:] Purpose: Evaluation of the diagnostic accuracy and therapeutic consequences of computed tomography under emergency conditions in a maximum-care hospital. Material and methods: Within the defined study period (12 months), 418 patients underwent 463 computed tomography examinations (thorax, abdomen
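
    The reported accuracy figures can be reproduced directly from the stated confusion-matrix counts (431 true positive, 14 false positive, 66 false negative, 312 true negative):

        tp, fp, fn, tn = 431, 14, 66, 312

        sensitivity = tp / (tp + fn)                  # 431 / 497 ~ 0.87
        specificity = tn / (tn + fp)                  # 312 / 326 ~ 0.96
        accuracy = (tp + tn) / (tp + fp + fn + tn)    # 743 / 823 ~ 0.90

        print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, accuracy {accuracy:.0%}")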

  6. High-reliability computing for the smarter planet

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Graham, Paul [Los Alamos National Laboratory; Manuzzato, Andrea [UNIV OF PADOVA; Dehon, Andre [UNIV OF PENN; Carter, Nicholas [INTEL CORPORATION

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is

  7. Experimental Consequences of Mottness in High-Temperature Copper-Oxide Superconductors

    Science.gov (United States)

    Chakraborty, Shiladitya

    2009-01-01

    It has been more than two decades since the copper-oxide high temperature superconductors were discovered. However, building a satisfactory theoretical framework to study these compounds still remains one of the major challenges in condensed matter physics. In addition to the mechanism of superconductivity, understanding the properties of the…

  8. Long-Term Psychosocial Consequences of Peer Victimization: From Elementary to High School

    Science.gov (United States)

    Smithyman, Thomas F.; Fireman, Gary D.; Asher, Yvonne

    2014-01-01

    Prior research has demonstrated that victims of peer victimization show reduced psychological adjustment, social adjustment, and physical well-being compared with nonvictims. However, little research has addressed whether this maladjustment continues over the long term. This study examined adjustment in 72 high school students who had participated…

  9. A Crafts-Oriented Approach to Computing in High School: Introducing Computational Concepts, Practices, and Perspectives with Electronic Textiles

    Science.gov (United States)

    Kafai, Yasmin B.; Lee, Eunkyoung; Searle, Kristin; Fields, Deborah; Kaplan, Eliot; Lui, Debora

    2014-01-01

    In this article, we examine the use of electronic textiles (e-textiles) for introducing key computational concepts and practices while broadening perceptions about computing. The starting point of our work was the design and implementation of a curriculum module using the LilyPad Arduino in a pre-AP high school computer science class. To…

  10. Nutritional and Metabolic Consequences of Feeding High-Fiber Diets to Swine: A Review

    Directory of Open Access Journals (Sweden)

    Atta K. Agyekum

    2017-10-01

    At present, substantial amounts of low-cost, fibrous co-products are incorporated into pig diets to reduce the cost of raising swine. However, diets that are rich in fiber are of low nutritive value because pigs cannot degrade dietary fiber. In addition, high-fiber diets have been associated with reduced nutrient utilization and pig performance. However, recent reports are often contradictory and the negative effects of high-fiber diets are influenced by the fiber source, type, and inclusion level. In addition, the effects of dietary fiber on pig growth and physiological responses are often confounded by the many analytical methods that are used to measure dietary fiber and its components. Several strategies have been employed to ameliorate the negative effects associated with the ingestion of high-fiber diets in pigs and to improve the nutritive value of such diets. Exogenous fiber-degrading enzymes are widely used to improve nutrient utilization and pig performance. However, the results of research reports have not been consistent and there is a need to elucidate the mode of action of exogenous enzymes on the metabolic and physiological responses in pigs that are fed high-fiber diets. On the other hand, dietary fiber is increasingly used as a means of promoting pig gut health and gestating sow welfare. In this review, dietary fiber and its effects on pig nutrition, gut physiology, and sow welfare are discussed. In addition, areas that need further research are suggested to gain more insight into dietary fiber and into the use of exogenous enzymes to improve the utilization of high-fiber diets by pigs.

  11. Marijuana and high-school students: the socio-psychological consequences

    OpenAIRE

    Todorovic, D.; Perunicic, I.; Marjanovic, S.

    2010-01-01

    The aim of this study was to investigate the relationship between abusing marijuana and characteristics of antisocial behavior and success in school. Punishment for improper behavior in school, proneness to fighting and severe illegal behavior was analyzed as antisocial variables. The sample consisted of 296 high-school students. Results: 1) 17.1 % of the participants consumed marijuana; 2) there are statistically significant positive correlations between using marijuana and being punished fo...

  12. Genetic consequences of forest fragmentation for a highly specialized arboreal mammal--the edible dormouse.

    Directory of Open Access Journals (Sweden)

    Joanna Fietz

    Habitat loss and fragmentation represent the most serious extinction threats for many species and have been demonstrated to be especially detrimental for mammals. Particularly, highly specialized species with low dispersal abilities will encounter a high risk of extinction in fragmented landscapes. Here we studied the edible dormouse (Glis glis), a small arboreal mammal that is distributed throughout Central Europe, where forests are mostly fragmented at different spatial scales. The aim of this study was to investigate the effect of habitat fragmentation on genetic population structures using the example of edible dormouse populations inhabiting forest fragments in southwestern Germany. We genotyped 380 adult individuals captured between 2001 and 2009 in four different forest fragments and one large continuous forest using 14 species-specific microsatellites. We hypothesised that populations in small forest patches have a lower genetic diversity and are more isolated compared to populations living in continuous forests. In accordance with our expectations we found that dormice inhabiting forest fragments were isolated from each other. Furthermore, their genetic population structure was more unstable over the study period than in the large continuous forest. Even though we could not detect lower genetic variability within individuals inhabiting forest fragments, strong genetic isolation and an overall high risk of mating with close relatives might be precursors to a reduced genetic variability and the onset of inbreeding depression. Results of this study highlight that connectivity among habitat fragments can already be strongly hampered before genetic erosion within small and isolated populations becomes evident.

  13. Next-generation sequencing: big data meets high performance computing.

    Science.gov (United States)

    Schmidt, Bertil; Hildebrandt, Andreas

    2017-04-01

    The progress of next-generation sequencing has a major impact on medical and genomic research. This high-throughput technology can now produce billions of short DNA or RNA fragments in excess of a few terabytes of data in a single run. This leads to massive datasets used by a wide range of applications including personalized cancer treatment and precision medicine. In addition to the hugely increased throughput, the cost of using high-throughput technologies has been dramatically decreasing. A low sequencing cost of around US$1000 per genome has now rendered large population-scale projects feasible. However, to make effective use of the produced data, the design of big data algorithms and their efficient implementation on modern high performance computing systems is required. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. A highly parallelized framework for computationally intensive MR data analysis.

    Science.gov (United States)

    Boubela, Roland N; Huf, Wolfgang; Kalcher, Klaudius; Sladky, Ronald; Filzmoser, Peter; Pezawas, Lukas; Kasper, Siegfried; Windischberger, Christian; Moser, Ewald

    2012-08-01

    The goal of this study was to develop a comprehensive magnetic resonance (MR) data analysis framework for handling very large datasets with user-friendly tools for parallelization and to provide an example implementation. Commonly used software packages (AFNI, FSL, SPM) were connected via a framework based on the free software environment R, with the possibility of using Nvidia CUDA GPU processing integrated for high-speed linear algebra operations in R. Three hundred single-subject datasets from the 1,000 Functional Connectomes project were used to demonstrate the capabilities of the framework. A framework for easy implementation of processing pipelines was developed and an R package for the example implementation of Fully Exploratory Network ICA was compiled. Test runs on data from 300 subjects demonstrated the computational advantages of a processing pipeline developed using the framework compared to non-parallelized processing, reducing computation time by a factor of 15. The feasibility of computationally intensive exploratory analyses allows broader access to the tools for discovery science.
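
    Since each subject's dataset can be processed independently, the parallelization the framework provides can be illustrated with a generic sketch (Python standard library only, not the authors' R implementation; run_subject_pipeline is a hypothetical stand-in for the AFNI/FSL/SPM and GPU steps):

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def run_subject_pipeline(subject_id):
        """Hypothetical stand-in for one subject's preprocessing and ICA steps."""
        # ... call out to AFNI/FSL/SPM or GPU linear algebra here ...
        return subject_id, "ok"

    if __name__ == "__main__":
        subjects = [f"sub-{i:04d}" for i in range(300)]
        # Subjects are independent, so they can be processed in parallel;
        # this per-subject parallelism is where large speed-ups come from.
        with ProcessPoolExecutor(max_workers=16) as pool:
            results = dict(pool.map(run_subject_pipeline, subjects))
        print(f"finished {len(results)} subjects")
    ```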

  15. Chip-to-board interconnects for high-performance computing

    Science.gov (United States)

    Riester, Markus B. K.; Houbertz-Krauss, Ruth; Steenhusen, Sönke

    2013-02-01

    Supercomputing is reaching toward ExaFLOP processing speeds, creating fundamental challenges for the way that computing systems are designed and built. One governing topic is reducing the power used to operate the system and eliminating the excess heat it generates. Current thinking sees optical interconnects on most interconnect levels as a feasible solution to many of the challenges, although there are still limitations to the technical solutions, in particular with regard to manufacturability. This paper explores drivers for enabling optical interconnect technologies to advance to the module and chip level. The introduction of optical links into High Performance Computing (HPC) could be an option that allows the manufacturing technology to scale to large volumes. This will drive the need for manufacturability of optical interconnects, giving rise to further challenges in realizing this type of interconnection. The paper describes a solution that allows the creation of optical components at the module level, integrating optical chips, laser diodes or PIN diodes as components much like the well-known SMD components used for electrical parts. It outlines the main challenges and potential solutions, and proposes a fundamental paradigm shift in the manufacturing of 3-dimensional optical links for the level 1 interconnect (chip package).

  16. Optimizing high performance computing workflow for protein functional annotation.

    Science.gov (United States)

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-09-10

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool (PSI-BLAST), the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data.

  17. A Convolve-And-MErge Approach for Exact Computations on High-Performance Reconfigurable Computers

    Directory of Open Access Journals (Sweden)

    Esam El-Araby

    2012-01-01

    Full Text Available This work presents an approach for accelerating arbitrary-precision arithmetic on high-performance reconfigurable computers (HPRCs). Although faster and smaller, fixed-precision arithmetic has inherent rounding and overflow problems that can cause errors in scientific or engineering applications. This recurring phenomenon is usually referred to as numerical nonrobustness. Therefore, there is an increasing interest in the paradigm of exact computation, based on arbitrary-precision arithmetic. There are a number of libraries and/or languages supporting this paradigm, for example, the GNU multiprecision (GMP) library. However, the performance of computations is significantly reduced in comparison to that of fixed-precision arithmetic. In order to reduce this performance gap, this paper investigates the acceleration of arbitrary-precision arithmetic on HPRCs. A Convolve-And-MErge approach is proposed that implements virtual convolution schedules derived from the formal representation of the arbitrary-precision multiplication problem. Additionally, dynamic (nonlinear) pipeline techniques are also exploited in order to achieve speedups ranging from 5x (addition) to 9x (multiplication), while keeping resource usage of the reconfigurable device low, ranging from 11% to 19%.
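
    The convolve-and-merge idea can be sketched in software terms: an arbitrary-precision multiplication is a convolution of fixed-width "limbs" followed by a carry-propagation (merge) pass. The sketch below is illustrative only and assumes a 16-bit limb width; the paper maps the convolution onto dynamically pipelined FPGA hardware rather than a loop.

    ```python
    # Sketch: arbitrary-precision multiplication as a convolution of "limbs"
    # (fixed-size digit blocks), followed by carry propagation ("merge").
    BASE = 2**16  # limb width; an assumption for this sketch

    def to_limbs(x):
        limbs = []
        while x:
            limbs.append(x % BASE)
            x //= BASE
        return limbs or [0]

    def from_limbs(limbs):
        return sum(d * BASE**i for i, d in enumerate(limbs))

    def multiply(a, b):
        la, lb = to_limbs(a), to_limbs(b)
        # Convolve: conv[k] = sum over i+j=k of la[i] * lb[j]
        conv = [0] * (len(la) + len(lb) - 1)
        for i, ai in enumerate(la):
            for j, bj in enumerate(lb):
                conv[i + j] += ai * bj
        # Merge: propagate carries so every limb is < BASE
        out, carry = [], 0
        for c in conv:
            carry += c
            out.append(carry % BASE)
            carry //= BASE
        while carry:
            out.append(carry % BASE)
            carry //= BASE
        return from_limbs(out)

    assert multiply(123456789123456789, 987654321987654321) == \
           123456789123456789 * 987654321987654321
    ```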

  18. Fundamentals of Modeling, Data Assimilation, and High-performance Computing

    Science.gov (United States)

    Rood, Richard B.

    2005-01-01

    This lecture will introduce the concepts of modeling, data assimilation and high-performance computing as they relate to the study of atmospheric composition. The lecture will work from basic definitions and will strive to provide a framework for thinking about development and application of models and data assimilation systems. It will not provide technical or algorithmic information, leaving that to textbooks, technical reports, and ultimately scientific journals. References to a number of textbooks and papers will be provided as a gateway to the literature.

  19. Diffuse elastic wavefield within a simple crustal model. Some consequences for low and high frequencies

    Science.gov (United States)

    García-Jerez, Antonio; Luzón, Francisco; Sánchez-Sesma, Francisco J.; Lunedei, Enrico; Albarello, Dario; Santoyo, Miguel A.; Almendros, Javier

    2013-10-01

    The reliability of usual assumptions regarding the wavefield composition in applications of the Diffuse Field Approach (DFA) to passive seismic prospecting is investigated. Starting from the more general formulation of the DFA for the full wavefield (FW), the contributions of each wave type to the horizontal- and vertical-component power spectra at the surface are analyzed for a simple elastic waveguide representing the continental crust-upper mantle interface. Special attention is paid to their compositions at low and high frequencies, and the relative powers of each surface wave (SW) type are identified by means of a semianalytical analysis. If body waves are removed from the analysis, the high-frequency horizontal asymptote of the H/V spectral ratio decreases slightly (from 1.33 for FW to around 1.14 for SW) and shows dependence on both the Poisson's ratio of the crust and the S wave velocity contrast (while the FW H/V asymptote depends on the former only). Experimental tests in a local broadband network provide H/V curves compatible with any of these values in the band 0.2-1 Hz, approximately, supporting the applicability of the DFA approximation. Coexistence of multiple SW modes produces distortion in the amplitudes of the vertical- and radial-component Aki's coherences, in comparison with the usual predictions based on fundamental modes. At high frequencies, this effect consists of a decrement by a constant scaling factor, being very remarkable in the radial case. Effects on the tangential coherence are severe, including a −π/4 phase shift, a slower decay rate of amplitude versus frequency, and the contribution of several velocities for large enough distances.
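
    For orientation, the H/V spectral ratio under the Diffuse Field Approach is commonly written in terms of the imaginary part of the Green's tensor at the receiver (a standard DFA relation from the literature, not an equation quoted from this paper):

    ```latex
    \frac{H}{V}(\omega) =
      \sqrt{\frac{\operatorname{Im} G_{11}(\mathbf{x},\mathbf{x};\omega)
                + \operatorname{Im} G_{22}(\mathbf{x},\mathbf{x};\omega)}
                 {\operatorname{Im} G_{33}(\mathbf{x},\mathbf{x};\omega)}}
    ```

    The full-wavefield and surface-wave-only asymptotes discussed above follow from evaluating the imaginary parts with or without the body-wave contributions.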

  20. HIGH-FIDELITY SIMULATION-DRIVEN MODEL DEVELOPMENT FOR COARSE-GRAINED COMPUTATIONAL FLUID DYNAMICS

    Energy Technology Data Exchange (ETDEWEB)

    Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.

    2016-06-01

    Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For a full-scale nuclear power plant system behavior, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase flow convective problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide us with high fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for simulation of long transient scenarios in nuclear accidents despite extraordinary advances in high performance scientific computing over the past decades. The major issue is the inability to make the transient computation parallel, thus making the number of time steps required in high-fidelity methods unaffordable for long transients. In this work, we propose to apply a high fidelity simulation-driven approach to model sub-grid scale (SGS) effects in Coarse-Grained Computational Fluid Dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of the deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as in containment mixing and passive cooling. The presented approach demonstrates how to create a correction for the CG-CFD solution by modifying the energy balance equation. A global correction for the temperature equation proves to achieve a significant improvement to the prediction of steady state temperature distribution through the fluid layer.
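
    A minimal sketch of the statistical-surrogate idea (not the authors' actual model): coarse-grid features are regressed against an energy-equation correction inferred from high-fidelity data, and the fitted surrogate then supplies a source-term correction during CG-CFD runs. The feature names and synthetic data below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-ins for quantities a CG-CFD code could export per cell:
    # coarse temperature gradient, coarse velocity magnitude, and cell size.
    features = rng.random((500, 3))

    # Stand-in for the "true" sub-grid correction to the energy balance,
    # which in practice would be extracted from filtered DNS/LES data.
    target = 0.8 * features[:, 0] - 0.3 * features[:, 1] * features[:, 2]
    target += 0.01 * rng.standard_normal(500)

    # Fit a linear surrogate for the correction by least squares.
    design = np.column_stack([features, features[:, 1] * features[:, 2],
                              np.ones(len(features))])
    weights, *_ = np.linalg.lstsq(design, target, rcond=None)

    def sgs_correction(grad_T, vel, dx):
        """Surrogate source term added to the coarse-grid energy equation."""
        phi = np.array([grad_T, vel, dx, vel * dx, 1.0])
        return float(phi @ weights)

    print(sgs_correction(0.5, 0.2, 0.1))
    ```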

  1. The consequences of high cigarette excise taxes for low-income smokers.

    Directory of Open Access Journals (Sweden)

    Matthew C Farrelly

    Full Text Available BACKGROUND: To illustrate the burden of high cigarette excise taxes on low-income smokers. METHODOLOGY/PRINCIPAL FINDINGS: Using data from the New York and national Adult Tobacco Surveys from 2010-2011, we estimated how smoking prevalence, daily cigarette consumption, and share of annual income spent on cigarettes vary by annual income (less than $30,000; $30,000-$59,999; and more than $60,000). The 2010-2011 sample includes 7,536 adults and 1,294 smokers from New York and 3,777 adults and 748 smokers nationally. Overall, smoking prevalence is lower in New York (16.1%) than nationally (22.2%) and is strongly associated with income in New York and nationally (P<.001). Smoking prevalence ranges from 12.2% to 33.7% nationally and from 10.1% to 24.3% in New York, moving from the highest to the lowest income group. In 2010-2011, the lowest income group spent 23.6% of annual household income on cigarettes in New York (up from 11.6% in 2003-2004) and 14.2% nationally. Daily cigarette consumption is not related to income. CONCLUSIONS/SIGNIFICANCE: Although high cigarette taxes are an effective method for reducing cigarette smoking, they can impose a significant financial burden on low-income smokers.

  2. FitzPatrick Lecture: King George III and the porphyria myth - causes, consequences and re-evaluation of his mental illness with computer diagnostics.

    Science.gov (United States)

    Peters, Timothy

    2015-04-01

    Recent studies have shown that the claim that King George III suffered from acute porphyria is seriously at fault. This article explores some of the causes of this misdiagnosis and the consequences of the misleading claims, also reporting on the nature of the king's recurrent mental illness according to computer diagnostics. In addition, techniques of cognitive archaeology are used to investigate the nature of the king's final decade of mental illness, which resulted in the appointment of the Prince of Wales as Prince Regent. The results of this analysis confirm that the king suffered from bipolar disorder type I, with a final decade of dementia, due, in part, to the neurotoxicity of his recurrent episodes of acute mania. © 2015 Royal College of Physicians.

  3. RAPPORT: running scientific high-performance computing applications on the cloud

    National Research Council Canada - National Science Library

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-01

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software...

  5. Preventive education for high-risk children: cognitive consequences of the Carolina Abecedarian Project.

    Science.gov (United States)

    Ramey, C T; Campbell, F A

    1984-03-01

    Longitudinal mental test scores for 54 educationally treated disadvantaged preschool children at high-risk for nonbiologically based mild mental retardation and 53 control children were compared. The educationally treated children were in a child-centered prevention-oriented intervention program delivered in a daycare setting from infancy to age 5. Language, cognitive, perceptual-motor, and social development were stressed. Children were examined with age-appropriate tests of development at 6, 12, 18, 24, 30, 42, 48, and 54 months of age. Beginning at 18 months, and on every test occasion thereafter, educationally treated children significantly outscored control group children on mental tests; treated children consistently scored at the national average whereas control children's scores declined from the average level at 12 months to below average at 18 months and thereafter. Implications of the results for early intervention were discussed.

  6. High CO₂ and marine animal behaviour: potential mechanisms and ecological consequences.

    Science.gov (United States)

    Briffa, Mark; de la Haye, Kate; Munday, Philip L

    2012-08-01

    Exposure to pollution and environmental change can alter the behaviour of aquatic animals and here we review recent evidence that exposure to elevated CO₂ and reduced sea water pH alters the behaviour of tropical reef fish and hermit crabs. Three main routes through which behaviour might be altered are discussed; elevated metabolic load, 'info-disruption' and avoidance behaviour away from polluted locations. There is clear experimental evidence that exposure to high CO₂ disrupts the ability to find settlement sites and shelters, the ability to detect predators and the ability to detect prey and food. In marine vertebrates and marine crustaceans behavioural change appears to occur via info-disruption. In hermit crabs and other crustaceans impairment of performance capacities might also play a role. We discuss the implications for such behavioural changes in terms of potential impacts at the levels of population health and ecosystem services, and consider future directions for research. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Distance phenomena in high-dimensional chemical descriptor spaces: consequences for similarity-based approaches.

    Science.gov (United States)

    Rupp, Matthias; Schneider, Petra; Schneider, Gisbert

    2009-11-15

    Measuring the (dis)similarity of molecules is important for many cheminformatics applications like compound ranking, clustering, and property prediction. In this work, we focus on real-valued vector representations of molecules (as opposed to the binary spaces of fingerprints). We demonstrate the influence which the choice of (dis)similarity measure can have on results, and provide recommendations for such choices. We review the mathematical concepts used to measure (dis)similarity in vector spaces, namely norms, metrics, inner products, and similarity coefficients, as well as the relationships between them, employing (dis)similarity measures commonly used in cheminformatics as examples. We present several phenomena (empty space phenomenon, sphere volume related phenomena, distance concentration) in high-dimensional descriptor spaces which are not encountered in two and three dimensions. These phenomena are theoretically characterized and illustrated on both artificial and real (bioactivity) data. 2009 Wiley Periodicals, Inc.
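
    The distance-concentration phenomenon mentioned above is easy to reproduce numerically: as the dimensionality grows, the relative contrast between the nearest and farthest neighbours of a query shrinks, which weakens similarity rankings. A small illustrative sketch:

    ```python
    # Sketch: distance concentration in high-dimensional descriptor spaces.
    import numpy as np

    rng = np.random.default_rng(0)
    for dim in (2, 10, 100, 1000):
        points = rng.random((1000, dim))   # random "descriptor vectors"
        query = rng.random(dim)
        d = np.linalg.norm(points - query, axis=1)   # Euclidean distances
        contrast = (d.max() - d.min()) / d.min()     # relative contrast
        print(f"dim={dim:5d}  relative contrast={contrast:.3f}")
    ```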

  8. Evidence and Consequence of a Highly Adapted Clonal Haplotype within the Australian Ascochyta rabiei Population

    Directory of Open Access Journals (Sweden)

    Yasir Mehmood

    2017-06-01

    Full Text Available The Australian Ascochyta rabiei (Pass.) Labr. (syn. Phoma rabiei) population has low genotypic diversity with only one mating type detected to date, potentially precluding substantial evolution through recombination. However, a large diversity in aggressiveness exists. In an effort to better understand the risk from selective adaptation to currently used resistance sources and chemical control strategies, the population was examined in detail. For this, a total of 598 isolates were quasi-hierarchically sampled between 2013 and 2015 across all major Australian chickpea growing regions and commonly grown host genotypes. Although a large number of haplotypes were identified (66) through short sequence repeat (SSR) genotyping, overall low gene diversity (Hexp = 0.066) and genotypic diversity (D = 0.57) were detected. Almost 70% of the isolates assessed were of a single dominant haplotype (ARH01). Disease screening on a differential host set, including three commonly deployed resistance sources, revealed distinct aggressiveness among the isolates, with 17% of all isolates identified as highly aggressive. Almost 75% of these were of the ARH01 haplotype. A similar pattern was observed at the host level, with 46% of all isolates collected from the commonly grown host genotype Genesis090 (classified as “resistant” during the term of collection) identified as highly aggressive. Of these, 63% belonged to the ARH01 haplotype. In conclusion, the ARH01 haplotype represents a significant risk to the Australian chickpea industry, being not only widely adapted to the diverse agro-geographical environments of the Australian chickpea growing regions, but also containing a disproportionately large number of aggressive isolates, indicating fitness to survive and replicate on the best resistance sources in the Australian germplasm.

  9. High-throughput landslide modelling using computational grids

    Science.gov (United States)

    Wallace, M.; Metson, S.; Holcombe, L.; Anderson, M.; Newbold, D.; Brook, N.

    2012-04-01

    Landslides are an increasing problem in developing countries. Multiple landslides can be triggered by heavy rainfall resulting in loss of life, homes and critical infrastructure. Through computer simulation of individual slopes it is possible to predict the causes, timing and magnitude of landslides and estimate the potential physical impact. Geographical scientists at the University of Bristol have developed software that integrates a physically-based slope hydrology and stability model (CHASM) with an econometric model (QUESTA) in order to predict landslide risk over time. These models allow multiple scenarios to be evaluated for each slope, accounting for data uncertainties, different engineering interventions, risk management approaches and rainfall patterns. Individual scenarios can be computationally intensive; however, each scenario is independent and so multiple scenarios can be executed in parallel. As more simulations are carried out the overhead involved in managing input and output data becomes significant. This is a greater problem if multiple slopes are considered concurrently, as is required both for landslide research and for effective disaster planning at national levels. There are two critical factors in this context: generated data volumes can be in the order of tens of terabytes, and greater numbers of simulations result in long total runtimes. Users of such models, in both the research community and in developing countries, need to develop a means for handling the generation and submission of landslide modelling experiments, and the storage and analysis of the resulting datasets. Additionally, governments in developing countries typically lack the necessary computing resources and infrastructure. Consequently, knowledge that could be gained by aggregating simulation results from many different scenarios across many different slopes remains hidden within the data. To address these data and workload management issues, University of Bristol particle

  10. Estimating uncertainties from high resolution simulations of extreme wind storms and consequences for impacts

    Directory of Open Access Journals (Sweden)

    Tobias Pardowitz

    2016-10-01

    Full Text Available A simple method is presented, designed to assess uncertainties from dynamical downscaling of regional high impact weather. The approach makes use of the fact that the choice of the simulation domain for the regional model is to a certain degree arbitrary. Thus, a small ensemble of equally valid simulations can be produced from the same driving model output by shifting the domain by a few grid cells. Applying the approach to extra-tropical storm systems, the regional simulations differ with respect to the exact location and severity of extreme wind speeds. Based on an integrated storm severity measure, the individual ensemble members are found to vary by more than 25 % from the ensemble mean in the majority of episodes considered. Estimates of insured losses based on individual regional simulations and integrated over Germany even differ by more than 50 % from the ensemble mean in most cases. Based on a set of intense storm episodes, a quantification of winter storm losses under recent and future climate is made. Using this domain shift ensemble approach, uncertainty ranges are derived representing the uncertainty inherent to the downscaling method used.
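
    The ensemble-spread diagnostic described above reduces to simple arithmetic once each domain-shifted member has been summarized by an integrated severity value; a sketch with made-up numbers:

    ```python
    import numpy as np

    # Integrated storm severity for five domain-shifted ensemble members
    # (illustrative values, one number per regional simulation).
    severity = np.array([1.8, 2.6, 2.1, 3.0, 2.4])

    mean = severity.mean()
    deviation_pct = 100.0 * (severity - mean) / mean
    print(f"ensemble mean severity: {mean:.2f}")
    for i, d in enumerate(deviation_pct):
        print(f"member {i}: {d:+.1f}% from ensemble mean")
    ```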

  11. Quantitative analysis of cholesteatoma using high resolution computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kikuchi, Shigeru; Yamasoba, Tatsuya (Kameda General Hospital, Chiba (Japan)); Iinuma, Toshitaka

    1992-05-01

    Seventy-three cases of adult cholesteatoma, including 52 cases of pars flaccida type cholesteatoma and 21 of pars tensa type cholesteatoma, were examined using high resolution computed tomography, in both axial (lateral semicircular canal plane) and coronal sections (cochlear, vestibular and antral plane). These cases were classified into two subtypes according to the presence of extension of cholesteatoma into the antrum. Sixty cases of chronic otitis media with central perforation (COM) were also examined as controls. Various locations of the middle ear cavity were measured in terms of size in comparison with pars flaccida type cholesteatoma, pars tensa type cholesteatoma and COM. The width of the attic was significantly larger in both pars flaccida type and pars tensa type cholesteatoma than in COM. With pars flaccida type cholesteatoma there was a significantly larger distance between the malleus and lateral wall of the attic than with COM. In contrast, the distance between the malleus and medial wall of the attic was significantly larger with pars tensa type cholesteatoma than with COM. With cholesteatoma extending into the antrum, regardless of the type of cholesteatoma, there were significantly larger distances than with COM at the following sites: the width and height of the aditus ad antrum, and the width, height and anterior-posterior diameter of the antrum. However, these distances were not significantly different between cholesteatoma without extension into the antrum and COM. The hitherto demonstrated qualitative impressions of bone destruction in cholesteatoma were quantitatively verified in detail using high resolution computed tomography. (author).

  12. Matrix element method for high performance computing platforms

    Science.gov (United States)

    Grasseau, G.; Chamont, D.; Beaudette, F.; Bianchini, L.; Davignon, O.; Mastrolorenzo, L.; Ochando, C.; Paganini, P.; Strebler, T.

    2015-12-01

    A lot of effort has been devoted by the ATLAS and CMS teams to improve the quality of LHC event analysis with the Matrix Element Method (MEM). Up to now, very few implementations have tried to address the huge computing resources required by this method. We propose here a highly parallel version, combining MPI and OpenCL, which makes MEM exploitation reachable for the whole CMS datasets at a moderate cost. In the article, we describe the status of two software projects under development, one focused on physics and one focused on computing. We also showcase their preliminary performance obtained with classical multi-core processors, CUDA accelerators and MIC co-processors. This lets us extrapolate that with the help of 6 high-end accelerators, we should be able to reprocess the whole LHC run 1 within 10 days, and that we have a satisfying metric for the upcoming run 2. Future work will consist of finalizing a single merged system including all the physics and all the parallelism infrastructure, thus optimizing the implementation for the best hardware platforms.
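
    The parallel pattern described above, distributing per-event matrix-element weight evaluations over many workers, can be sketched with MPI (shown here with mpi4py for brevity rather than the authors' MPI+OpenCL code; compute_mem_weight is a hypothetical placeholder for the per-event integration):

    ```python
    # Run with, e.g.:  mpiexec -n 4 python mem_sketch.py
    from mpi4py import MPI

    def compute_mem_weight(event):
        """Hypothetical stand-in for the per-event matrix-element integration."""
        return sum(event) ** 0.5  # placeholder arithmetic only

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        events = [[float(i), float(i % 7)] for i in range(1000)]
        chunks = [events[r::size] for r in range(size)]  # round-robin split
    else:
        chunks = None

    local_events = comm.scatter(chunks, root=0)        # distribute events
    local_weights = [compute_mem_weight(e) for e in local_events]
    all_weights = comm.gather(local_weights, root=0)   # collect results

    if rank == 0:
        total = sum(len(w) for w in all_weights)
        print(f"computed MEM weights for {total} events on {size} ranks")
    ```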

  13. Mass storage: The key to success in high performance computing

    Science.gov (United States)

    Lee, Richard R.

    1993-01-01

    There are numerous High Performance Computing & Communications Initiatives in the world today. All are determined to help solve some 'Grand Challenges' type of problem, but each appears to be dominated by the pursuit of higher and higher levels of CPU performance and interconnection bandwidth as the approach to success, without any regard to the impact of Mass Storage. My colleagues and I at Data Storage Technologies believe that all will have their performance against their goals ultimately measured by their ability to efficiently store and retrieve the 'deluge of data' created by end-users who will be using these systems to solve Scientific Grand Challenges problems, and that the issue of Mass Storage will then become the determinant of success or failure in achieving each project's goals. In today's world of High Performance Computing and Communications (HPCC), the critical path to success in solving problems can only be traveled by designing and implementing Mass Storage Systems capable of storing and manipulating the truly 'massive' amounts of data associated with solving these challenges. Within my presentation I will explore this critical issue and hypothesize solutions to this problem.

  14. Behavioral and cellular consequences of high-electrode count Utah Arrays chronically implanted in rat sciatic nerve

    Science.gov (United States)

    Wark, H. A. C.; Mathews, K. S.; Normann, R. A.; Fernandez, E.

    2014-08-01

    Objective. Before peripheral nerve electrodes can be used for the restoration of sensory and motor functions in patients with neurological disorders, the behavioral and histological consequences of these devices must be investigated. These indices of biocompatibility can be defined in terms of desired functional outcomes; for example, a device may be considered for use as a therapeutic intervention if the implanted subject retains functional neurons post-implantation even in the presence of a foreign body response. The consequences of an indwelling device may remain localized to cellular responses at the device-tissue interface, such as fibrotic encapsulation of the device, or they may affect the animal more globally, such as impacting behavioral or sensorimotor functions. The objective of this study was to investigate the overall consequences of implantation of high-electrode count intrafascicular peripheral nerve arrays, High Density Utah Slanted Electrode Arrays (HD-USEAs; 25 electrodes mm-2). Approach. HD-USEAs were implanted in rat sciatic nerves for one and two month periods. We monitored wheel running, noxious sensory paw withdrawal reflexes, footprints, nerve morphology and macrophage presence at the tissue-device interface. In addition, we used a novel approach to contain the arrays in actively behaving animals that consisted of an organic nerve wrap. A total of 500 electrodes were implanted across all ten animals. Main results. The results demonstrated that chronic implantation (⩽8 weeks) of HD-USEAs into peripheral nerves can evoke behavioral deficits that recover over time. Morphology of the nerve distal to the implantation site showed variable signs of nerve fiber degeneration and regeneration. Cytology adjacent to the device-tissue interface also showed a variable response, with some electrodes having many macrophages surrounding the electrodes, while other electrodes had few or no macrophages present. This variability was also seen along the length

  15. QSPIN: A High Level Java API for Quantum Computing Experimentation

    Science.gov (United States)

    Barth, Tim

    2017-01-01

    QSPIN is a high level Java language API for experimentation in QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical computing optimization methods such as GPU accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling is provided to demonstrate current capabilities.
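
    For context, the two problem classes named above take the following standard forms (general definitions, not QSPIN-specific notation), and are equivalent under the change of variables s_i = 2x_i - 1:

    ```latex
    % Ising spin-glass ground-state energy, spins s_i in {-1,+1}
    E(\mathbf{s}) = \sum_{i<j} J_{ij}\, s_i s_j + \sum_i h_i\, s_i
    % QUBO: quadratic unconstrained binary optimization, x_i in {0,1}
    \min_{\mathbf{x}\in\{0,1\}^n} \; \mathbf{x}^{\mathsf T} Q\, \mathbf{x}
    ```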

  16. Transcriptional consequence and impaired gametogenesis with high-grade aneuploidy in Arabidopsis thaliana.

    Directory of Open Access Journals (Sweden)

    Kuan-Lin Lo

    Full Text Available Aneuploidy is a numerical chromosome variant in which the number of chromosomes in the nucleus of a cell is not an exact multiple of the haploid number, which may have an impact on morphology and gene expression. Here we report a tertiary trisomy uncovered by characterizing a T-DNA insertion mutant (aur2-1/+) in the Arabidopsis (Arabidopsis thaliana) AURORA2 locus. Whole-genome analysis with DNA tiling arrays revealed a chromosomal translocation linked to the aur2-1 allele, which collectively accounted for a tertiary trisomy 2. Morphologic, cytogenetic and genetic analyses of aur2-1 progeny showed impaired male and female gametogenesis to various degrees and a tight association of the aur2-1 allele with the tertiary trisomy that was preferentially inherited. Transcriptome analysis showed overlapping and distinct gene expression profiles between primary and tertiary trisomy 2 plants, particularly genes involved in response to stress and various types of external and internal stimuli. Additionally, transcriptome and gene ontology analyses revealed an overrepresentation of nuclear-encoded organelle-related genes functionally involved in plastids, mitochondria and peroxisomes that were differentially expressed in at least three if not all Arabidopsis trisomics. These observations support a previous hypothesis that aneuploid cells have higher energy requirements to overcome the detrimental effects of an unbalanced genome. Moreover, our findings extend the knowledge of the complex nature of the T-DNA insertion event influencing plant genomic integrity by creating high-grade trisomy. Finally, gene expression profiling results provide useful information for future research to compare primary and tertiary trisomics for the effects of aneuploidy on plant cell physiology.

  17. 15 CFR 743.2 - High performance computers: Post shipment verification reporting.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false High performance computers: Post... ADMINISTRATION REGULATIONS SPECIAL REPORTING § 743.2 High performance computers: Post shipment verification... certain computers to destinations in Computer Tier 3, see § 740.7(d) for a list of these destinations...

  18. Effects of strength training on muscle fiber types and size; consequences for athletes training for high-intensity sport

    DEFF Research Database (Denmark)

    Andersen, J L; Aagaard, P

    2010-01-01

    Training toward improving performance in sports involving high-intensity exercise can be, and is, done in many different ways, based on a mixture of tradition in the specific sport, coaches' experience and scientific recommendations. Strength training is a form of training that nowadays has found its way into almost all sports in which high-intensity work is conducted. In this review we will focus on a few selected aspects and consequences of strength training; namely, what effects strength training has on muscle fiber type composition, how these effects may change the contractile properties of the muscle, and finally how this will affect the performance of the athlete. In addition, the review will deal with muscle hypertrophy and how it develops with strength training. Overall, it is not the purpose of this review to give a comprehensive update of the area, but to pin-point a few issues from which...

  19. Economic consequences of mastitis and withdrawal of milk with high somatic cell count in Swedish dairy herds

    DEFF Research Database (Denmark)

    Nielsen, C; Østergaard, Søren; Emanuelson, U

    2010-01-01

    The main aim was to assess the impact of mastitis on technical and economic results of a dairy herd under current Swedish farming conditions. The second aim was to investigate the effects obtained by withdrawing milk with high somatic cell count (SCC). A dynamic and stochastic simulation model, SimHerd, was used to study the effects of mastitis in a herd with 150 cows. Results given the initial incidence of mastitis (32 and 33 clinical and subclinical cases per 100 cow-years, respectively) were studied, together with the consequences of reducing or increasing the incidence of mastitis by 50%, modelling no clinical mastitis (CM) while keeping the incidence of subclinical mastitis (SCM) constant and vice versa. Six different strategies to withdraw milk with high SCC were compared. The decision to withdraw milk was based on herd-level information in three scenarios: withdrawal was initiated when the predicted...

  1. High pressure humidification columns: Design equations, algorithm, and computer code

    Energy Technology Data Exchange (ETDEWEB)

    Enick, R.M. [Pittsburgh Univ., PA (United States). Dept. of Chemical and Petroleum Engineering; Klara, S.M. [USDOE Pittsburgh Energy Technology Center, PA (United States); Marano, J.J. [Burns and Roe Services Corp., Pittsburgh, PA (United States)

    1994-07-01

    This report describes the detailed development of a computer model to simulate the humidification of an air stream in contact with a water stream in a countercurrent, packed tower, humidification column. The computer model has been developed as a user model for the Advanced System for Process Engineering (ASPEN) simulator. This was done to utilize the powerful ASPEN flash algorithms as well as to provide ease of use when using ASPEN to model systems containing humidification columns. The model can easily be modified for stand-alone use by incorporating any standard algorithm for performing flash calculations. The model was primarily developed to analyze Humid Air Turbine (HAT) power cycles; however, it can be used for any application that involves a humidifier or saturator. The solution is based on a multiple stage model of a packed column which incorporates mass and energy balances, mass transfer and heat transfer rate expressions, the Lewis relation and a thermodynamic equilibrium model for the air-water system. The inlet air properties, inlet water properties and a measure of the mass transfer and heat transfer which occur in the column are the only required input parameters to the model. Several example problems are provided to illustrate the algorithm's ability to generate the temperature of the water, flow rate of the water, temperature of the air, flow rate of the air and humidity of the air as a function of height in the column. The algorithm can be used to model any high-pressure air humidification column operating at pressures up to 50 atm. This discussion includes descriptions of various humidification processes, detailed derivations of the relevant expressions, and methods of incorporating these equations into a computer model for a humidification column.
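
    As background to the rate expressions listed above, a common textbook form for an adiabatic air-water contactor couples the gas-phase humidity and temperature changes through film coefficients, with the Lewis relation closing the system. These are standard correlations stated for orientation only; the report's exact working equations may differ.

    ```latex
    G_s\, dY = k_Y a\, (Y_i - Y)\, dZ            % moisture pickup by the gas
    G_s c_s\, dT_G = h_G a\, (T_i - T_G)\, dZ    % sensible heat transfer to the gas
    \frac{h_G}{k_Y\, c_s} \approx 1              % Lewis relation for air-water systems
    ```

    Here G_s is the dry-gas flux, Y the absolute humidity, c_s the humid heat, k_Y and h_G the gas-phase mass- and heat-transfer coefficients, a the interfacial area per unit volume, and the subscript i denotes interface conditions.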

  2. Proceedings of the workshop on high resolution computed microtomography (CMT)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-02-01

    The purpose of the workshop was to determine the status of the field, to define instrumental and computational requirements, and to establish minimum specifications required by possible users. The most important message sent by implementers was the reminder that CMT is a tool. It solves a wide spectrum of scientific problems and is complementary to other microscopy techniques, with certain important advantages that the other methods do not have. High-resolution CMT can be used non-invasively and non-destructively to study a variety of hierarchical three-dimensional microstructures, which in turn control body function. X-ray computed microtomography can also be used at the frontiers of physics, in the study of granular systems, for example. With high-resolution CMT, for example, three-dimensional pore geometries and topologies of soils and rocks can be obtained readily and implemented directly in transport models. In turn, these geometries can be used to calculate fundamental physical properties, such as permeability and electrical conductivity, from first principles. Clearly, use of the high-resolution CMT technique will contribute tremendously to the advancement of current R and D technologies in the production, transport, storage, and utilization of oil and natural gas. It can also be applied to problems related to environmental pollution, particularly to spilling and seepage of hazardous chemicals into the Earth's subsurface. Applications to energy and environmental problems will be far-ranging and may soon extend to disciplines such as materials science--where the method can be used in the manufacture of porous ceramics, filament-resin composites, and microelectronics components--and to biomedicine, where it could be used to design biocompatible materials such as artificial bones, contact lenses, or medication-releasing implants. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  3. Computational Modeling of Culture's Consequences

    NARCIS (Netherlands)

    Hofstede, G.J.; Jonker, C.M.; Verwaart, T.

    2010-01-01

    This paper presents an approach to formalize the influence of culture on the decision functions of agents in social simulations. The key components are (a) a definition of the domain of study in the form of a decision model, (b) knowledge acquisition based on a dimensional theory of culture,

  4. Caffeinated cocktails: energy drink consumption, high-risk drinking, and alcohol-related consequences among college students.

    Science.gov (United States)

    O'Brien, Mary Claire; McCoy, Thomas P; Rhodes, Scott D; Wagoner, Ashley; Wolfson, Mark

    2008-05-01

    The consumption of alcohol mixed with energy drinks (AmED) is popular on college campuses in the United States. Limited research suggests that energy drink consumption lessens subjective intoxication in persons who also have consumed alcohol. This study examines the relationship between energy drink use, high-risk drinking behavior, and alcohol-related consequences. In Fall 2006, a Web-based survey was conducted in a stratified random sample of 4,271 college students from 10 universities in North Carolina. A total of 697 students (24% of past 30-day drinkers) reported consuming AmED in the past 30 days. Students who were male, white, intramural athletes, fraternity or sorority members or pledges, and younger were significantly more likely to consume AmED. In multivariable analyses, consumption of AmED was associated with increased heavy episodic drinking (6.4 days vs. 3.4 days on average). Students who reported consuming AmED also had significantly higher prevalence of alcohol-related consequences, including being taken advantage of sexually, taking advantage of another sexually, riding with an intoxicated driver, being physically hurt or injured, and requiring medical treatment; the strength of this association varied with the student's reported typical alcohol consumption (interaction p = 0.027). Almost one-quarter of college student current drinkers reported mixing alcohol with energy drinks. These students are at increased risk for alcohol-related consequences, even after adjusting for the amount of alcohol consumed. Further research is necessary to understand this association and to develop targeted interventions to reduce risk.

  5. Computational Fluid Dynamics Analysis of High Injection Pressure Blended Biodiesel

    Science.gov (United States)

    Khalid, Amir; Jaat, Norrizam; Faisal Hushim, Mohd; Manshoor, Bukhari; Zaman, Izzuddin; Sapit, Azwan; Razali, Azahari

    2017-08-01

    Biodiesel has great potential as a substitute for petroleum fuel for the purpose of achieving clean energy production and emission reduction. Among the methods that can control the combustion properties, controlling the fuel injection conditions is one of the most successful. The purpose of this study is to investigate the effect of high injection pressure of biodiesel blends on spray characteristics using Computational Fluid Dynamics (CFD). Injection pressures of 220 MPa, 250 MPa and 280 MPa were examined. The ambient temperature was kept at 1050 K and the ambient pressure at 8 MPa in order to simulate the effect of boost pressure or a turbocharger during the combustion process. Computational Fluid Dynamics was used to investigate the spray characteristics of biodiesel blends such as spray penetration length, spray angle and mixture formation of fuel-air mixing. The results show that with increasing injection pressure, a wider spray angle is produced by both biodiesel blends and diesel fuel. The injection pressure strongly affects the mixture formation and the characteristics of the fuel spray; a longer spray penetration length promotes fuel-air mixing.

  6. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  7. The high-rate data challenge: computing for the CBM experiment

    Science.gov (United States)

    Friese, V.; CBM Collaboration

    2017-10-01

    The Compressed Baryonic Matter experiment (CBM) is a next-generation heavy-ion experiment to be operated at the FAIR facility, currently under construction in Darmstadt, Germany. A key feature of CBM is its very high interaction rate, exceeding that of contemporary nuclear collision experiments by several orders of magnitude. Such interaction rates forbid a conventional, hardware-triggered readout; instead, experiment data will stream freely from self-triggered front-end electronics. In order to reduce the huge raw data volume to a recordable rate, data will be selected exclusively on CPU, which necessitates partial event reconstruction in real-time. Consequently, the traditional segregation of online and offline software vanishes; an integrated on- and offline data processing concept is called for. In this paper, we will report on concepts and developments for computing for CBM as well as on the status of preparations for its first physics run.

  8. STATE-SPACE SOLUTIONS TO THE DYNAMIC MAGNETOENCEPHALOGRAPHY INVERSE PROBLEM USING HIGH PERFORMANCE COMPUTING.

    Science.gov (United States)

    Long, Christopher J; Purdon, Patrick L; Temereanca, Simona; Desai, Neil U; Hämäläinen, Matti S; Brown, Emery N

    2011-06-01

    Determining the magnitude and location of neural sources within the brain that are responsible for generating magnetoencephalography (MEG) signals measured on the surface of the head is a challenging problem in functional neuroimaging. The number of potential sources within the brain exceeds by an order of magnitude the number of recording sites. As a consequence, the estimates for the magnitude and location of the neural sources will be ill-conditioned because of the underdetermined nature of the problem. One well-known technique designed to address this imbalance is the minimum norm estimator (MNE). This approach imposes an L2 regularization constraint that serves to stabilize and condition the source parameter estimates. However, this class of regularizers is static in time and does not consider the temporal constraints inherent to the biophysics of the MEG experiment. In this paper we propose a dynamic state-space model that accounts for both spatial and temporal correlations within and across candidate intra-cortical sources. In our model, the observation model is derived from the steady-state solution to Maxwell's equations while the latent model representing neural dynamics is given by a random walk process. We show that the Kalman filter (KF) and the Kalman smoother [also known as the fixed-interval smoother (FIS)] may be used to solve the ensuing high-dimensional state-estimation problem. Using a well-known relationship between Bayesian estimation and Kalman filtering, we show that the MNE estimates carry a significant zero bias. Calculating these high-dimensional state estimates is a computationally challenging task that requires High Performance Computing (HPC) resources. To this end, we employ the NSF Teragrid Supercomputing Network to compute the source estimates. We demonstrate improvement in performance of the state-space algorithm relative to MNE in analyses of simulated and actual somatosensory MEG experiments. Our findings establish the
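
    For orientation, the model components named above can be written compactly as follows (standard forms; the paper's covariance and prior choices may differ):

    ```latex
    % Observation model (lead field G from the quasi-static Maxwell solution)
    \mathbf{y}_t = G\,\mathbf{x}_t + \mathbf{v}_t, \qquad \mathbf{v}_t \sim \mathcal{N}(0, R)
    % Latent source dynamics modeled as a random walk
    \mathbf{x}_t = \mathbf{x}_{t-1} + \mathbf{w}_t, \qquad \mathbf{w}_t \sim \mathcal{N}(0, Q)
    % Static minimum-norm estimate (L2-regularized least squares)
    \hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \|\mathbf{y} - G\mathbf{x}\|^2 + \lambda\|\mathbf{x}\|^2
                     = G^{\mathsf T} (G G^{\mathsf T} + \lambda I)^{-1} \mathbf{y}
    ```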

  9. Power/energy use cases for high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Steven [National Renewable Energy Lab. (NREL), Golden, CO (United States); Elmore, Ryan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Munch, Kristin [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    Power and Energy have been identified as a first order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors. But to make the best use of the solutions in an HPC environment, it will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could utilize to steer power consumption.

  10. High-performance Computing Applied to Semantic Databases

    Energy Technology Data Exchange (ETDEWEB)

    Goodman, Eric L.; Jimenez, Edward; Mizell, David W.; al-Saffar, Sinan; Adolf, Robert D.; Haglin, David J.

    2011-06-02

    To date, the application of high-performance computing resources to Semantic Web data has largely focused on commodity hardware and distributed memory platforms. In this paper we make the case that more specialized hardware can offer superior scaling and close to an order of magnitude improvement in performance. In particular we examine the Cray XMT. Its key characteristics, a large, global shared-memory, and processors with a memory-latency tolerant design, offer an environment conducive to programming for the Semantic Web and have engendered results that far surpass current state of the art. We examine three fundamental pieces requisite for a fully functioning semantic database: dictionary encoding, RDFS inference, and query processing. We show scaling up to 512 processors (the largest configuration we had available), and the ability to process 20 billion triples completely in-memory.
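
    A minimal sketch of the first of the three pieces, dictionary encoding: RDF terms are mapped to integer IDs so that triples become compact integer tuples (toy data; the Cray XMT implementation is, of course, parallel and far more involved):

    ```python
    # Map RDF terms (IRIs/literals) to integer IDs so triples become
    # compact integer tuples; this is the "dictionary encoding" step.
    dictionary = {}

    def encode_term(term):
        """Return a stable integer ID for an RDF term, assigning one if new."""
        if term not in dictionary:
            dictionary[term] = len(dictionary)
        return dictionary[term]

    triples = [
        ("ex:alice", "foaf:knows", "ex:bob"),
        ("ex:bob", "rdf:type", "foaf:Person"),
        ("ex:alice", "rdf:type", "foaf:Person"),
    ]

    encoded = [tuple(encode_term(t) for t in triple) for triple in triples]
    print(encoded)     # [(0, 1, 2), (2, 3, 4), (0, 3, 4)]
    print(dictionary)  # term -> ID lookup; an inverse table allows decoding
    ```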

  11. Trends in high-performance computing for engineering calculations.

    Science.gov (United States)

    Giles, M B; Reguly, I

    2014-08-13

    High-performance computing has evolved remarkably over the past 20 years, and that progress is likely to continue. However, in recent years, this progress has been achieved through greatly increased hardware complexity with the rise of multicore and manycore processors, and this is affecting the ability of application developers to achieve the full potential of these systems. This article outlines the key developments on the hardware side, both in the recent past and in the near future, with a focus on two key issues: energy efficiency and the cost of moving data. It then discusses the much slower evolution of system software, and the implications of all of this for application developers. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  12. Derivation Of Probabilistic Damage Definitions From High Fidelity Deterministic Computations

    Energy Technology Data Exchange (ETDEWEB)

    Leininger, L D

    2004-10-26

    This paper summarizes a methodology used by the Underground Analysis and Planning System (UGAPS) at Lawrence Livermore National Laboratory (LLNL) for the derivation of probabilistic damage curves for US Strategic Command (USSTRATCOM). UGAPS uses high fidelity finite element and discrete element codes on the massively parallel supercomputers to predict damage to underground structures from military interdiction scenarios. These deterministic calculations can be riddled with uncertainty, especially when intelligence, the basis for this modeling, is uncertain. The technique presented here attempts to account for this uncertainty by bounding the problem with reasonable cases and using those bounding cases as a statistical sample. Probability-of-damage curves that account for uncertainty within the sample are computed and represented, enabling the war planner to make informed decisions. This work is flexible enough to incorporate any desired damage mechanism and can utilize the variety of finite element and discrete element codes within the national laboratory and government contractor community.
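
    The step from bounding-case deterministic runs to a probability-of-damage curve amounts to tabulating empirical exceedance probabilities over damage thresholds; a sketch with illustrative numbers (not UGAPS output):

    ```python
    import numpy as np

    # Damage metric from a handful of bounding-case deterministic runs
    # (illustrative values; in practice these come from FE/DE simulations).
    damage = np.array([0.15, 0.30, 0.42, 0.55, 0.58, 0.71, 0.83, 0.90])

    thresholds = np.linspace(0.0, 1.0, 11)
    prob_exceed = [(damage >= t).mean() for t in thresholds]

    for t, p in zip(thresholds, prob_exceed):
        print(f"P(damage >= {t:.1f}) = {p:.2f}")
    ```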

  13. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code (DL_POLY) performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than for large systems on both the Ethernet and InfiniBand networks. However, simulations of large systems in DL_POLY performed well using the InfiniBand network on the Lengau cluster as compared to the e1350 and Sun supercomputers.
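
    For reference, the scaling metrics used in studies like this are normally defined as follows (standard definitions, not values from the paper); strong scaling holds the total problem size fixed as the processor count p grows, while weak scaling grows the problem in proportion to p so that an ideal code keeps T(p) roughly constant:

    ```latex
    S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p}
    ```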

  14. Multimodal Information Presentation for High-Load Human Computer Interaction

    OpenAIRE

    Cao, Y

    2011-01-01

    This dissertation addresses multimodal information presentation in human computer interaction. Information presentation refers to the manner in which computer systems/interfaces present information to human users. More specifically, the focus of our work is not on which information to present, but on how to present it, such as which modalities to use, how to spatially distribute items, et cetera. The notion "computer" is not limited to personal computers in their various forms. It also incl...

  15. High-Performance Special-Purpose Computers in Science

    OpenAIRE

    Fukushige, Toshiyuki; Hut, Piet; Makino, Jun

    1998-01-01

    The next decade will be an exciting time for computational physicists. After 50 years of being forced to use standardized commercial equipment, it will finally become relatively straightforward to adapt one's computing tools to one's own needs. The breakthrough that opens this new era is the now wide-spread availability of programmable chips that allow virtually every computational scientist to design his or her own special-purpose computer.

  16. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations while restricting greenhouse gas emissions is one of the most daunting that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We will need to develop capabilities to handle large volumes of data generated by power system components like PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, interaction between hybrid systems (electric, transport, gas, oil, coal, etc.) and so on, to get meaningful information in real time to ensure a secure, reliable and stable power system grid. Advanced research on the development and implementation of market-ready, leading-edge, high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments, as well as thoughts on future research directions, for high performance computing applications in electric power systems planning, operations, security, markets, and grid integration of alternate sources of energy.

  17. High-throughput computational and experimental techniques in structural genomics.

    Science.gov (United States)

    Chance, Mark R; Fiser, Andras; Sali, Andrej; Pieper, Ursula; Eswar, Narayanan; Xu, Guiping; Fajardo, J Eduardo; Radhakannan, Thirumuruhan; Marinkovic, Nebojsa

    2004-10-01

    Structural genomics has as its goal the provision of structural information for all possible ORF sequences through a combination of experimental and computational approaches. The access to genome sequences and cloning resources from an ever-widening array of organisms is driving high-throughput structural studies by the New York Structural Genomics Research Consortium. In this report, we outline the progress of the Consortium in establishing its pipeline for structural genomics, and some of the experimental and bioinformatics efforts leading to structural annotation of proteins. The Consortium has established a pipeline for structural biology studies, automated modeling of ORF sequences using solved (template) structures, and a novel high-throughput approach (metallomics) to examining the metal binding to purified protein targets. The Consortium has so far produced 493 purified proteins from >1077 expression vectors. A total of 95 have resulted in crystal structures, and 81 are deposited in the Protein Data Bank (PDB). Comparative modeling of these structures has generated >40,000 structural models. We also initiated a high-throughput metal analysis of the purified proteins; this has determined that 10%-15% of the targets contain a stoichiometric structural or catalytic transition metal atom. The progress of the structural genomics centers in the U.S. and around the world suggests that the goal of providing useful structural information on most all ORF domains will be realized. This projected resource will provide structural biology information important to understanding the function of most proteins of the cell.

  18. FY 1995 Blue Book: High Performance Computing and Communications: Technology for the National Information Infrastructure

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program was created to accelerate the development of future generations of high performance computers...

  19. Computer Science researchers explore virtualization potential for high-end computing

    OpenAIRE

    Daniilidi, Christina

    2007-01-01

    Dimitrios Nikolopoulos, associate professor of computer science, and Godmar Back, assistant professor of computer science, both at Virginia Tech, have received a National Science Foundation (NSF) - Computer Science Research (CSR) grant of $300,000 for their Virtualization Technologies for Application-Specific Operating Systems (VT ASOS) project.

  20. Use of Cloud Computing to Calibrate a Highly Parameterized Model

    Science.gov (United States)

    Hayley, K. H.; Schumacher, J.; MacMillan, G.; Boutin, L.

    2012-12-01

    We present a case study using cloud computing to facilitate the calibration of a complex and highly parameterized model of regional groundwater flow. The calibration dataset consisted of many (~1500) measurements or estimates of static hydraulic head, a high resolution time series of groundwater extraction and disposal rates at 42 locations and pressure monitoring at 147 locations with a total of more than one million raw measurements collected over a ten year pumping history, and base flow estimates at 5 surface water monitoring locations. This modeling project was undertaken to assess the sustainability of groundwater withdrawal and disposal plans for in situ heavy oil extraction in Northeast Alberta, Canada. The geological interpretations used for model construction were based on more than 5,000 wireline logs collected throughout the 30,865 km2 regional study area (RSA), and resulted in a model with 28 slices and 28 hydrostratigraphic units (average model thickness of 700 m, with aquifers ranging from a depth of 50 to 500 m below ground surface). The finite element FEFLOW model constructed on this geological interpretation had 331,408 nodes and required 265 time steps to simulate the ten year transient calibration period. This numerical model of groundwater flow required 3 hours to run on a server with two 2.8 GHz processors and 16 GB of RAM. Calibration was completed using PEST. Horizontal and vertical hydraulic conductivity as well as specific storage for each unit were independent parameters. For the recharge and the horizontal hydraulic conductivity in the three aquifers with the most transient groundwater use, a pilot point parameterization was adopted. A 7×7 grid of pilot points was defined over the RSA that defined a spatially variable horizontal hydraulic conductivity or recharge field. A 7×7 grid of multiplier pilot points that perturbed the more regional field was then superimposed over the 3,600 km2 local study area (LSA). The pilot point
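
    Calibration with PEST amounts to minimizing a weighted sum of squared residuals between observed and simulated values. The sketch below shows that kind of objective function with hypothetical head observations and weights; it illustrates the principle only, not the project's actual PEST setup:

```python
import numpy as np

# Sketch of the kind of weighted least-squares objective that PEST minimizes during
# calibration; the observation values and weights below are hypothetical, not the
# project's actual dataset.

def pest_style_objective(simulated, observed, weights):
    """Phi = sum_i (w_i * (obs_i - sim_i))^2 over all observations."""
    residuals = weights * (observed - simulated)
    return float(np.sum(residuals ** 2))

observed  = np.array([310.2, 305.8, 298.4])   # e.g. static hydraulic heads (m)
simulated = np.array([311.0, 304.9, 299.1])   # model outputs at the same locations
weights   = np.array([1.0, 1.0, 0.5])         # lower weight for a less reliable observation

print(f"objective Phi = {pest_style_objective(simulated, observed, weights):.3f}")
```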

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  2. Consequences of a Maternal High-Fat Diet and Late Gestation Diabetes on the Developing Rat Lung.

    Science.gov (United States)

    Baack, Michelle L; Forred, Benjamin J; Larsen, Tricia D; Jensen, Danielle N; Wachal, Angela L; Khan, Muhammad Ali; Vitiello, Peter F

    2016-01-01

    Infants born to diabetic or obese mothers are at risk of respiratory distress and persistent pulmonary hypertension of the newborn (PPHN), conceivably through fuel-mediated pathogenic mechanisms. Prior research and preventative measures focus on controlling maternal hyperglycemia, but growing evidence suggests a role for additional circulating fuels including lipids. Little is known about the individual or additive effects of a maternal high-fat diet on fetal lung development. The objective of this study was to determine the effects of a maternal high-fat diet, alone and alongside late-gestation diabetes, on lung alveologenesis and vasculogenesis, as well as to ascertain if consequences persist beyond the perinatal period. A rat model was used to study lung development in offspring from control, diabetes-exposed, high-fat diet-exposed and combination-exposed pregnancies via morphometric, histologic (alveolarization and vasculogenesis) and physiologic (echocardiography, pulmonary function) analyses at birth and 3 weeks of age. Outcomes were interrogated for diet, diabetes and interaction effect using ANOVA with significance set at p≤0.05. Findings prompted additional mechanistic inquiry of key molecular pathways. Offspring exposed to maternal diabetes or high-fat diet, alone and in combination, had smaller lungs and larger hearts at birth. High-fat diet-exposed, but not diabetes-exposed offspring, had a higher perinatal death rate and echocardiographic evidence of PPHN at birth. Alveolar mean linear intercept, septal thickness, and airspace area (D2) were not significantly different between the groups; however, markers of lung maturity were. Both diabetes-exposed and diet-exposed offspring expressed more T1α protein, a marker of type I cells. Diet-exposed newborn pups expressed less surfactant protein B and had fewer pulmonary vessels enumerated. Mechanistic inquiry revealed alterations in AKT activation, higher endothelin-1 expression, and an impaired Txnip

  3. Actual directions in study of ecological consequences of a highly toxic 1,1-dimethylhydrazine-based rocket fuel spills

    Directory of Open Access Journals (Sweden)

    Bulat Kenessov

    2012-05-01

    Full Text Available The paper presents a review of current directions in the study of the ecological consequences of spills of highly toxic 1,1-dimethylhydrazine-based rocket fuel. Recent results on the transformation processes of 1,1-dimethylhydrazine, the identification of its main metabolites, and the development of analytical methods for their determination are summarized. Modern analytical methods for the determination of 1,1-dimethylhydrazine and its transformation products in environmental samples are characterized. It is shown that in recent years, through the use of the most modern methods of physical-chemical analysis and sample preparation, work in this direction has made significant progress and contributed to the development of studies in adjacent areas. The character of the distribution of transformation products in soils at the impact sites of the first stages of carrier rockets is described, and the available methods for their remediation are characterized.

  4. The Listener Sets the Tone: High-Quality Listening Increases Attitude Clarity and Behavior-Intention Consequences.

    Science.gov (United States)

    Itzchakov, Guy; DeMarree, Kenneth G; Kluger, Avraham N; Turjeman-Levi, Yaara

    2018-01-01

    We examined how merely sharing attitudes with a good listener shapes speakers' attitudes. We predicted that high-quality (i.e., empathic, attentive, and nonjudgmental) listening reduces speakers' social anxiety and leads them to delve deeper into their attitude-relevant knowledge (greater self-awareness). This, subsequently, differentially affects two components of speakers' attitude certainty by increasing attitude clarity, but not attitude correctness. In addition, we predicted that this increased clarity is followed by increased attitude-expression intentions, but not attitude-persuasion intentions. We obtained consistent support for our hypotheses across five experiments (including one preregistered study), manipulating listening behavior in a variety of ways. This is the first evidence that an interpersonal variable, unrelated to the attitude itself, can affect attitude clarity and its consequences.

  5. High Performance Input/Output Systems for High Performance Computing and Four-Dimensional Data Assimilation

    Science.gov (United States)

    Fox, Geoffrey C.; Ou, Chao-Wei

    1997-01-01

    The approach of this task was to apply leading parallel computing research to a number of existing techniques for assimilation, and to extract parameters indicating where and how input/output limits computational performance. The following were used to develop detailed knowledge of the application problems: (1) developing a parallel input/output system specifically for this application; (2) extracting the important input/output characteristics of data assimilation problems; and (3) building these characteristics as parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.

  6. Verification of computer system PROLOG - software tool for rapid assessments of consequences of short-term radioactive releases to the atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Kiselev, Alexey A.; Krylov, Alexey L.; Bogatov, Sergey A. [Nuclear Safety Institute (IBRAE), Bolshaya Tulskaya st. 52, 115191, Moscow (Russian Federation)

    2014-07-01

    In the case of nuclear and radiation accidents, emergency response authorities require a tool for rapid assessment of possible consequences. One of the most significant problems is the lack of data on the initial state of an accident. This lack can be especially critical if the accident occurs in a location that was not thoroughly studied beforehand (during transportation of radioactive materials, for example). One possible solution is a hybrid method in which a model that enables rapid assessments from a reasonable minimum of input data is used in conjunction with observed data that can be collected shortly after the accident. The model is used to estimate the parameters of the source and uncertain meteorological parameters on the basis of the observed data. For example, the field of fallout density can be observed and measured within hours after an accident. After that, the same model with the estimated parameters is used to assess doses and the necessity of recommended and mandatory countermeasures. The computer system PROLOG was designed to solve this problem. It is based on the widely used Gaussian model. The standard Gaussian model is supplemented with several sub-models that take into account polydisperse aerosols, the aerodynamic shadow from buildings in the vicinity of the accident site, terrain orography, the initial size of the radioactive cloud, the effective height of the release, and the influence of countermeasures on the doses of radiation exposure of humans. It uses modern GIS technologies and can use web map services. To verify the ability of PROLOG to solve the problem, it is necessary to test its ability to assess the necessary parameters of real past accidents. Verification of the computer system against data from the Chazhma Bay accident (Russian Far East, 1985) was published previously. In this work, verification was carried out on the basis of the observed contamination from the Kyshtym disaster (PA Mayak, 1957) and the Tomsk accident (1993). Observations of Sr-90
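
    For orientation, the sketch below implements the textbook Gaussian plume expression for ground-level air concentration that systems such as PROLOG build on; the release and dispersion parameters are hypothetical, and the real system adds the sub-models listed above:

```python
import numpy as np

# Textbook Gaussian plume expression for ground-level air concentration with full ground
# reflection; all parameter values are hypothetical. The real system supplements this with
# the sub-models listed above (polydisperse aerosols, building wake, orography, etc.).

def ground_level_concentration(y, Q, u, H, sigma_y, sigma_z):
    """Time-integrated concentration at crosswind offset y; sigma_y and sigma_z are the
    dispersion coefficients at the chosen downwind distance and stability class."""
    return (Q / (2 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * 2 * np.exp(-H**2 / (2 * sigma_z**2)))   # plume reflected at the ground

# Example: 1e12 Bq release, 3 m/s wind, 20 m effective release height,
# dispersion coefficients evaluated at ~1 km downwind.
chi = ground_level_concentration(y=0.0, Q=1e12, u=3.0, H=20.0, sigma_y=80.0, sigma_z=35.0)
print(f"time-integrated concentration ~ {chi:.3e} Bq*s/m^3")
```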

  7. Evolution of plastic anisotropy for high-strain-rate computations

    Energy Technology Data Exchange (ETDEWEB)

    Schiferl, S.K.; Maudlin, P.J.

    1994-12-01

    A model for anisotropic material strength, and for changes in the anisotropy due to plastic strain, is described. This model has been developed for use in high-rate, explicit, Lagrangian multidimensional continuum-mechanics codes. The model handles anisotropies in single-phase materials, in particular the anisotropies due to crystallographic texture--preferred orientations of the single-crystal grains. Textural anisotropies, and the changes in these anisotropies, depend overwhelmingly on the crystal structure of the material and on the deformation history. The changes, particularly for complex deformations, are not amenable to simple analytical forms. To handle this problem, the material model described here includes a texture code, or micromechanical calculation, coupled to a continuum code. The texture code updates grain orientations as a function of tensor plastic strain, and calculates the yield strength in different directions. A yield function is fitted to these yield points. For each computational cell in the continuum simulation, the texture code tracks a particular set of grain orientations. The orientations change due to the tensor strain history, and the yield function changes accordingly. Hence, the continuum code supplies a tensor strain to the texture code, and the texture code supplies an updated yield function to the continuum code. Since significant texture changes require relatively large strains--typically, a few percent or more--the texture code is not called very often, and the increase in computer time is not excessive. The model was implemented using a finite-element continuum code and a texture code specialized for hexagonal-close-packed crystal structures. The results for several uniaxial stress problems and an explosive-forming problem are shown.

  8. High threshold distributed quantum computing with three-qubit nodes

    Science.gov (United States)

    Li, Ying; Benjamin, Simon C.

    2012-09-01

    In the distributed quantum computing paradigm, well-controlled few-qubit ‘nodes’ are networked together by connections which are relatively noisy and failure prone. A practical scheme must offer high tolerance to errors while requiring only simple (i.e. few-qubit) nodes. Here we show that relatively modest, three-qubit nodes can support advanced purification techniques and so offer robust scalability: the infidelity in the entanglement channel may be permitted to approach 10% if the infidelity in local operations is of order 0.1%. Our tolerance of network noise is therefore an order of magnitude beyond prior schemes, and our architecture remains robust even in the presence of considerable decoherence rates (memory errors). We compare the performance with that of schemes involving nodes of lower and higher complexity. Ion traps, and NV-centres in diamond, are two highly relevant emerging technologies: they possess the requisite properties of good local control, rapid and reliable readout, and methods for entanglement-at-a-distance.

  9. COMPUTER APPROACHES TO WHEAT HIGH-THROUGHPUT PHENOTYPING

    Directory of Open Access Journals (Sweden)

    Afonnikov D.

    2012-08-01

    Full Text Available The growing need for rapid and accurate approaches to the large-scale assessment of phenotypic characters in plants becomes more and more obvious in studies looking into relationships between genotype and phenotype. This need is due to the advent of high-throughput methods for the analysis of genomes. Nowadays, any genetic experiment involves data on thousands or tens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch, the ruler) are of little use on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which enables much more rapid data acquisition, higher accuracy in the assessment of phenotypic features, measurement of new parameters of these features, and the exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integrating genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant, to store and process various morphological traits and genotypes of wheat plants, as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.

  10. Lightweight Provenance Service for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Dong; Chen, Yong; Carns, Philip; Jenkins, John; Ross, Robert

    2017-09-09

    Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.

  11. FPGA Compute Acceleration for High-Throughput Data Processing in High-Energy Physics Experiments

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    The upgrades of the four large experiments of the LHC at CERN in the coming years will result in a huge increase of data bandwidth for each experiment, which needs to be processed very efficiently. For example, the LHCb experiment will upgrade its detector in 2019/2020 to a 'triggerless' readout scheme, where all of the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the event filter farm to 40 Tbit/s, which must be processed to select the interesting proton-proton collisions for later storage. Designing the architecture of a computing farm that can process this amount of data as efficiently as possible is a challenging task, and several compute accelerator technologies are being considered. In the high performance computing sector more and more FPGA compute accelerators are being used to improve the compute performance and reduce the...

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files with a high writing speed to tape. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations laid the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  14. Diagnostic value of high resolutional computed tomography of spine

    Energy Technology Data Exchange (ETDEWEB)

    Yang, S. M.; Im, S. K.; Sohn, M. H.; Lim, K. Y.; Kim, J. K.; Choi, K. C. [Jeonbug National University College of Medicine, Seoul (Korea, Republic of)

    1984-03-15

    Non-enhanced high-resolution computed tomography provides clear visualization of the soft tissue in the canal and the bony details of the spine, particularly of the lumbar spine. We observed 70 cases of spine CT using a GE CT/T 8800 scanner during the period from Dec. 1982 to Sep. 1983 at Jeonbug National University Hospital. The results were as follows: 1. The sex distribution was 55 males and 15 females; ages ranged from 17 to 67 years; the sites were 11 cervical spine, 5 thoracic spine and 54 lumbosacral spine. 2. CT diagnosis showed 44 cases of lumbar disc herniation, 7 cases of degenerative disease, 3 cases of spine fracture and 1 case each of cord tumor, metastatic tumor, spontaneous epidural hemorrhage, epidural abscess, spinal tuberculosis, and meningocele with diastematomyelia. 3. The sites of herniated nucleus pulposus were the L4-5 interspace in 34 cases (59.6%) and the L5-S1 interspace in 20 cases (35.1%). Thirteen cases (29.5%) of lumbar disc herniation disclosed multiple lesions. The location of herniation was central type in 28 cases (49.1%), right-central type in 12 cases (21.2%), left-central type in 11 cases (19.2%) and far lateral type in 6 cases (10.5%). 4. CT findings of herniated nucleus pulposus were as follows: focal protrusion of the posterior disc margin and obliteration of anterior epidural fat in all cases, dural sac indentation in 26 cases (45.6%), soft tissue mass in epidural fat in 21 cases (36.8%), and displacement or compression of the nerve root sheath in 12 cases (21%). 5. Multiplanar reformatted images and Blink mode provide more effective evaluation of the definite level and longitudinal dimension of lesions such as obscure disc herniation, spine fracture, cord tumor and epidural abscess. 6. Non-enhanced and enhanced high-resolution computed tomography were effectively useful in demonstrating compression or displacement of the spinal cord and nerve roots, examining congenital anomalies such as meningocele, and evaluating primary or metastatic spinal lesions.

  15. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and

  16. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    Science.gov (United States)

    Adakin, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Sukharev, A.; Zaytsev, A.

    2011-12-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies (ICT), and Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks; the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM&MG), and the Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment, which is being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  17. Securing Cloud Infrastructure for High Performance Scientific Computations Using Cryptographic Techniques

    OpenAIRE

    Patra, G. K.; Nilotpal Chakraborty

    2014-01-01

    In today's scenario, a large number of engineering and scientific applications require high-performance computing power in order to simulate various models. Scientific and engineering models such as climate modeling, weather forecasting, large-scale ocean modeling, cyclone prediction, etc. require parallel processing of data on high performance computing infrastructure. With the rise of cloud computing, it would be great if such high performance computations could be provided as a service to th...

  18. The Future of Software Engineering for High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Pope, G [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-16

    DOE ASCR requested that from May through mid-July 2015 a study group identify issues and recommend solutions from a software engineering perspective for transitioning into the next generation of High Performance Computing. The approach used was to ask some of the DOE complex experts who will be responsible for doing this work to contribute to the study group. The technique used was to solicit elevator speeches: short and concise write-ups done as if the author were a speaker with only a few minutes to convince a decision maker of their top issues. Pages 2-18 contain the original texts of the contributed elevator speeches and end notes identifying the 20 contributors. The study group also ranked the importance of each topic, and those scores are displayed with each topic heading. A perfect score (and highest priority) is three, two is medium priority, and one is lowest priority. The highest scoring topic areas were software engineering and testing resources; the lowest scoring area was compliance to DOE standards. The following two paragraphs are an elevator speech summarizing the contributed elevator speeches. Each sentence or phrase in the summary is hyperlinked to its source via a numeral embedded in the text. A risk one-liner has also been added to each topic to allow future risk tracking and mitigation.

  19. Towards New Metrics for High-Performance Computing Resilience

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [ORNL; Ashraf, Rizwan A [ORNL; Engelmann, Christian [ORNL

    2017-01-01

    Ensuring the reliability of applications is becoming an increasingly important challenge as high-performance computing (HPC) systems experience an ever-growing number of faults, errors and failures. While the HPC community has made substantial progress in developing various resilience solutions, it continues to rely on platform-based metrics to quantify application resiliency improvements. The resilience of an HPC application is concerned with the reliability of the application outcome as well as the fault handling efficiency. To understand the scope of impact, effective coverage and performance efficiency of existing and emerging resilience solutions, there is a need for new metrics. In this paper, we develop new ways to quantify resilience that consider both the reliability and the performance characteristics of the solutions from the perspective of HPC applications. As HPC systems continue to evolve in terms of scale and complexity, it is expected that applications will experience various types of faults, errors and failures, which will require applications to apply multiple resilience solutions across the system stack. The proposed metrics are intended to be useful for understanding the combined impact of these solutions on an application's ability to produce correct results and to evaluate their overall impact on an application's performance in the presence of various modes of faults.

  20. Computational assessment of several hydrogen-free high energy compounds.

    Science.gov (United States)

    Tan, Bisheng; Huang, Ming; Long, Xinping; Li, Jinshan; Fan, Guijuan

    2016-01-01

    Tetrazino-tetrazine-tetraoxide (TTTO) is an attractive high energy compound, but unfortunately, it has not yet been synthesized experimentally. Isomerization of TTTO leads to its five isomers. Bond-separation energies were employed to compare the global stability of the six compounds; it was found that isomer 1 has the highest bond-separation energy (1204.6 kJ/mol), compared with TTTO (1151.2 kJ/mol). Thermodynamic properties of the six compounds were theoretically calculated, including standard formation enthalpies (solid and gaseous), standard fusion enthalpies, standard vaporization enthalpies, standard sublimation enthalpies, lattice energies, normal melting points and normal boiling points. Their detonation performances were also computed, including detonation heat (Q, cal/g), detonation velocity (D, km/s), detonation pressure (P, GPa) and impact sensitivity (h50, cm); compared with TTTO (Q = 1311.01 J/g, D = 9.228 km/s, P = 40.556 GPa, h50 = 12.7 cm), isomer 5 exhibits better detonation performance (Q = 1523.74 J/g, D = 9.389 km/s, P = 41.329 GPa, h50 = 28.4 cm). Copyright © 2015 Elsevier Inc. All rights reserved.
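
    For readers unfamiliar with how such detonation parameters are estimated, the sketch below evaluates the widely used Kamlet-Jacobs empirical relations for detonation velocity and pressure; the paper's own computations may rely on a different (e.g. thermochemical-code) approach, and the input values here are purely illustrative:

```python
import math

# Widely used Kamlet-Jacobs empirical relations for CHNO explosives; note that Q here is
# the detonation heat in cal/g. The input values are purely illustrative and do not
# describe TTTO or its isomers, and the paper's own method may differ.

def kamlet_jacobs(N, M, Q, rho0):
    """N: mol of gaseous products per gram, M: mean molar mass of those gases (g/mol),
    Q: detonation heat (cal/g), rho0: loading density (g/cm^3)."""
    phi = N * math.sqrt(M) * math.sqrt(Q)
    D = 1.01 * math.sqrt(phi) * (1.0 + 1.30 * rho0)   # detonation velocity, km/s
    P = 1.558 * rho0**2 * phi                          # detonation pressure, GPa
    return D, P

D, P = kamlet_jacobs(N=0.035, M=31.0, Q=1450.0, rho0=1.90)
print(f"D = {D:.2f} km/s, P = {P:.1f} GPa")
```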

  1. High-definition three-dimensional television disparity map computation

    Science.gov (United States)

    Chammem, Afef; Mitrea, Mihai; Prêteux, Françoise

    2012-10-01

    By reconsidering some approaches inherited from two-dimensional video and adapting them to stereoscopic video content and to the peculiarities of the human visual system, a new disparity map is designed. First, the inner relation between the left and right views is modeled by weights discriminating between horizontal and vertical disparities. Second, the block matching operation is carried out with a visually related measure (normalized cross-correlation) instead of the traditional pixel differences (mean squared error or sum of absolute differences). The advanced three-dimensional (3-D) video new three-step search (3DV-NTSS) disparity map is benchmarked against two state-of-the-art algorithms, namely NTSS and full-search MPEG (FS-MPEG), by successively considering two corpora. The first corpus was organized during the 3DLive French national project and regroups 20 min of stereoscopic video sequences. The second one, of similar size, is provided by the MPEG community. The experimental results demonstrate the effectiveness of 3DV-NTSS in both reconstructed image quality (average gains between 3% and 7% in both PSNR and structural similarity, with a single exception) and computational cost (search operation number reduced by average factors between 1.3 and 13). The 3DV-NTSS was finally validated by designing a watermarking method for high definition 3-D TV content protection.
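
    The similarity measure at the heart of the matcher, normalized cross-correlation, is simple to state in code. The sketch below is the textbook formulation applied to a synthetic horizontal-disparity search, not the authors' implementation:

```python
import numpy as np

# Textbook normalized cross-correlation (NCC) used as the block-matching similarity
# measure, applied to a synthetic horizontal-disparity search; not the authors' code.

def ncc(block_a, block_b):
    a = block_a.astype(float) - block_a.mean()
    b = block_b.astype(float) - block_b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_horizontal_disparity(left, right, y, x, block=8, max_disp=32):
    """Search along the same row of the right view for the best-matching block."""
    ref = left[y:y+block, x:x+block]
    scores = []
    for d in range(max_disp + 1):
        if x - d < 0:
            break
        scores.append((ncc(ref, right[y:y+block, x-d:x-d+block]), d))
    return max(scores)[1] if scores else 0

rng = np.random.default_rng(1)
left = rng.integers(0, 255, size=(64, 128))
right = np.roll(left, shift=-5, axis=1)       # synthetic 5-pixel horizontal disparity
print(best_horizontal_disparity(left, right, y=20, x=40))   # prints 5
```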

  2. Using a Computer Animation to Teach High School Molecular Biology

    Science.gov (United States)

    Rotbain, Yosi; Marbach-Ad, Gili; Stavy, Ruth

    2008-01-01

    We present an active way to use a computer animation in secondary molecular genetics class. For this purpose we developed an activity booklet that helps students to work interactively with a computer animation which deals with abstract concepts and processes in molecular biology. The achievements of the experimental group were compared with those…

  3. Multimodal Information Presentation for High-Load Human Computer Interaction

    NARCIS (Netherlands)

    Cao, Y.

    2011-01-01

    This dissertation addresses multimodal information presentation in human computer interaction. Information presentation refers to the manner in which computer systems/interfaces present information to human users. More specifically, the focus of our work is not on which information to present, but

  4. Computer Science in High School Graduation Requirements. ECS Education Trends

    Science.gov (United States)

    Zinth, Jennifer Dounay

    2015-01-01

    Computer science and coding skills are widely recognized as a valuable asset in the current and projected job market. The Bureau of Labor Statistics projects 37.5 percent growth from 2012 to 2022 in the "computer systems design and related services" industry--from 1,620,300 jobs in 2012 to an estimated 2,229,000 jobs in 2022. Yet some…

  5. On the impact of quantum computing technology on future developments in high-performance scientific computing

    NARCIS (Netherlands)

    Möller, M.; Vuik, C.

    2017-01-01

    Quantum computing technologies have become a hot topic in academia and industry receiving much attention and financial support from all sides. Building a quantum computer that can be used practically is in itself an outstanding challenge that has become the ‘new race to the moon’. Next to

  6. A ground-up approach to High Throughput Cloud Computing in High-Energy Physics

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00245123; Ganis, Gerardo; Bagnasco, Stefano

    The thesis explores various practical approaches to making existing high-throughput computing applications common in High Energy Physics work on cloud-provided resources, as well as opening the possibility of running new applications. The work is divided into two parts: first, we describe the work done at the computing facility hosted by INFN Torino to entirely convert former Grid resources into cloud ones, eventually running Grid use cases on top along with many others in a more flexible way. Integration and conversion problems are duly described. The second part covers the development of solutions for automating the orchestration of cloud workers based on the load of a batch queue, and the development of HEP applications based on ROOT's PROOF that can adapt at runtime to a changing number of workers.

  7. High throughput computing: a solution for scientific analysis

    Science.gov (United States)

    O'Donnell, M.

    2011-01-01

    Public land management agencies continually face resource management problems that are exacerbated by climate warming, land-use change, and other human activities. As the U.S. Geological Survey (USGS) Fort Collins Science Center (FORT) works with managers in U.S. Department of the Interior (DOI) agencies and other federal, state, and private entities, researchers are finding that the science needed to address these complex ecological questions across time and space produces substantial amounts of data. The additional data and the volume of computations needed to analyze it require expanded computing resources well beyond single- or even multiple-computer workstations. To meet this need for greater computational capacity, FORT investigated how to resolve the many computational shortfalls previously encountered when analyzing data for such projects. Our objectives included finding a solution that would:

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  9. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
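
    The quoted 10^12 advantage follows directly from the numbers above; the short check below reproduces it:

```python
# Quick check of the operations/s/W/cm^3 comparison quoted above.
brain_ops, brain_watts, brain_cm3 = 1e16, 20.0, 1200.0
super_ops, super_watts, super_cm3 = 1e15, 3e6, 1500.0 * 100**3   # 1500 m^3 in cm^3

brain_density = brain_ops / (brain_watts * brain_cm3)
super_density = super_ops / (super_watts * super_cm3)
print(f"brain advantage ~ {brain_density / super_density:.1e}")  # ~1.9e12, i.e. order 10^12
```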

  10. Path Not Found: Disparities in Access to Computer Science Courses in California High Schools

    Science.gov (United States)

    Martin, Alexis; McAlear, Frieda; Scott, Allison

    2015-01-01

    "Path Not Found: Disparities in Access to Computer Science Courses in California High Schools" exposes one of the foundational causes of underrepresentation in computing: disparities in access to computer science courses in California's public high schools. This report provides new, detailed data on these disparities by student body…

  11. Using Preservice Teachers to Improve Computer Skills of At-Risk Alternative High School Students

    Science.gov (United States)

    Ward, Martin J.; Kester, Donald; Kouzekanani, Kamiar

    2009-01-01

    This research examined the impact of preservice teachers delivering individualized instruction of basic computer skills to at-risk, ethnic minority alternative high school students in an urban school district in South Texas. The alternative high school students' achievement of computer skills, motivation to use computers, and self-efficacy as…

  12. Speeding up ecological and evolutionary computations in R; essentials of high performance computing for biologists.

    Science.gov (United States)

    Visser, Marco D; McMahon, Sean M; Merow, Cory; Dixon, Philip M; Record, Sydne; Jongejans, Eelke

    2015-03-01

    Computation has become a critical component of research in biology. A risk has emerged that computational and programming challenges may limit research scope, depth, and quality. We review various solutions to common computational efficiency problems in ecological and evolutionary research. Our review pulls together material that is currently scattered across many sources and emphasizes those techniques that are especially effective for typical ecological and environmental problems. We demonstrate how straightforward it can be to write efficient code and implement techniques such as profiling or parallel computing. We supply a newly developed R package (aprof) that helps to identify computational bottlenecks in R code and determine whether optimization can be effective. Our review is complemented by a practical set of examples and detailed Supporting Information material (S1-S3 Texts) that demonstrate large improvements in computational speed (ranging from 10.5 times to 14,000 times faster). By improving computational efficiency, biologists can feasibly solve more complex tasks, ask more ambitious questions, and include more sophisticated analyses in their research.

  13. Speeding up ecological and evolutionary computations in R; essentials of high performance computing for biologists.

    Directory of Open Access Journals (Sweden)

    Marco D Visser

    2015-03-01

    Full Text Available Computation has become a critical component of research in biology. A risk has emerged that computational and programming challenges may limit research scope, depth, and quality. We review various solutions to common computational efficiency problems in ecological and evolutionary research. Our review pulls together material that is currently scattered across many sources and emphasizes those techniques that are especially effective for typical ecological and environmental problems. We demonstrate how straightforward it can be to write efficient code and implement techniques such as profiling or parallel computing. We supply a newly developed R package (aprof) that helps to identify computational bottlenecks in R code and determine whether optimization can be effective. Our review is complemented by a practical set of examples and detailed Supporting Information material (S1-S3 Texts) that demonstrate large improvements in computational speed (ranging from 10.5 times to 14,000 times faster). By improving computational efficiency, biologists can feasibly solve more complex tasks, ask more ambitious questions, and include more sophisticated analyses in their research.

  14. High Fidelity Simulation of Liquid Jet in Cross-flow Using High Performance Computing

    Science.gov (United States)

    Soteriou, Marios; Li, Xiaoyi

    2011-11-01

    High fidelity, first-principles simulation of the atomization of a liquid jet by a fast cross-flowing gas can help reveal the controlling physics of this complicated two-phase flow of engineering interest. The turn-around execution time of such a simulation is prohibitively long using the computational resources typically available today (i.e. parallel systems with ~O(100) CPUs). This is due to the multiscale nature of the problem, which requires the use of fine grids and time steps. In this work we present results from such a simulation performed on a state-of-the-art massively parallel system available at the Oak Ridge Leadership Computing Facility (OLCF). Scalability of the computational algorithm to ~2000 CPUs is demonstrated on grids of up to 200 million nodes. As a result, a simulation at intermediate Weber number becomes possible on this system. Results are in agreement with detailed experimental measurements of liquid column trajectory, breakup location, surface wavelength, onset of surface stripping, as well as droplet size and velocity after primary breakup. Moreover, this uniform-grid simulation is used as a base case for further code enhancement by evaluating the feasibility of employing Adaptive Mesh Refinement (AMR) near the liquid-gas interface as a means of mitigating computational cost.

  15. Factors Influencing Junior High School Teachers' Computer-Based Instructional Practices Regarding Their Instructional Evolution Stages

    National Research Council Canada - National Science Library

    Ying-Shao Hsu; Hsin-Kai Wu; Fu-Kwun Hwang

    2007-01-01

    ... computer-based instructional evolution. In this study of approximately six hundred junior high school science and mathematics teachers in Taiwan who have integrated computing technology into their instruction, we correlated each teacher's...

  16. FY 1992 Blue Book: Grand Challenges: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  17. FY 1993 Blue Book: Grand Challenges 1993: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  18. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    Science.gov (United States)

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
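
    A point-source (Fresnel) CGH kernel of the kind such GPU clusters accelerate can be sketched in a few lines; the resolution, pixel pitch, wavelength and object points below are assumed for illustration, and the authors' exact kernel may differ:

```python
import numpy as np

# Point-source (Fresnel) CGH kernel; the hologram resolution, pixel pitch, wavelength and
# object points below are assumptions for illustration only.

WIDTH, HEIGHT = 256, 128            # tiny hologram for the example (the paper uses 6400 x 3072)
PITCH = 8e-6                        # pixel pitch (m), assumed
WAVELENGTH = 532e-9                 # laser wavelength (m), assumed

xa = (np.arange(WIDTH) - WIDTH / 2) * PITCH
ya = (np.arange(HEIGHT) - HEIGHT / 2) * PITCH
XA, YA = np.meshgrid(xa, ya)        # shape (HEIGHT, WIDTH)

rng = np.random.default_rng(0)
points = rng.uniform(-1e-3, 1e-3, size=(64, 2))   # (x, y) of object points (m)
depths = rng.uniform(0.1, 0.2, size=64)           # z distance of each point to the hologram (m)

hologram = np.zeros((HEIGHT, WIDTH))
for (xj, yj), zj in zip(points, depths):
    # Fresnel approximation: phase = pi/(lambda*z) * ((xa - xj)^2 + (ya - yj)^2)
    phase = np.pi / (WAVELENGTH * zj) * ((XA - xj)**2 + (YA - yj)**2)
    hologram += np.cos(phase)                     # equal point amplitudes assumed

print(hologram.shape)
```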

  19. Lower-crustal flow and detachment in the North American Cordillera: a consequence of Cordillera-wide high temperatures

    Science.gov (United States)

    Hyndman, R. D.

    2017-06-01

    In this paper, I make the case for widespread lower-crustal detachment and flow in the North American Cordillera. An indicator that geologically recent flow has occurred comes from seismic structure data showing the crust in most of the Cordillera from Mexico to Alaska is uniformly thin, 33 ± 3 km, with a remarkably flat Moho. The flat Moho is in spite of extensive normal faulting and shortening that might be expected to deform the Moho. It has been concluded previously that the high topographic elevations are due to thermal expansion from Cordillera-wide high temperatures compared to stable areas, not due to a crustal root. I argue that the constant crustal thickness and flat Moho also are a consequence of temperatures sufficiently hot for flow in the lower crust. Lower-crust detachment and flow has previously been inferred for Tibet and the high Andes where the crust is thick such that unusually high temperatures are expected. More surprising is the similar conclusion for the Basin and Range of western USA where the crust is thin, but high temperatures have been inferred to result from current extension. There are now adequate data to conclude the Basin and Range is not unique in crustal thickness or in temperature. The crust in most of the Cordillera is similarly hot in common with many other backarcs. Five thermal constraints are discussed that indicate that for most of the Cordillera, the temperature at the Moho is 800-850 °C compared to 400-450 °C in stable areas. At these temperatures, the effective viscosity is low enough for flow near the base of the crust. The backarc Moho may be viewed as a boundary between almost 'liquid' lower crust over a higher viscosity, but still weak upper mantle. The temperatures are sufficiently high for the Moho to relax to a nearly horizontal gravitational equipotential over a few tens of millions of years. The inference of a weak lower crust also suggests that topography over horizontal scales of over 100 km must be short

  20. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Full Text Available Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
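
    A crude stand-in for the hybrid message-passing/multi-threading model discussed above is sketched below: row blocks of the matrix are handed to worker processes (playing the role of MPI ranks), while the BLAS behind numpy may use several threads within each worker. It illustrates the decomposition only, not the authors' implementation:

```python
import numpy as np
from multiprocessing import Pool

# Row blocks of A are distributed to worker processes (standing in for MPI ranks); the BLAS
# behind numpy's matmul may itself use several threads inside each worker. Purely a toy
# illustration of the hybrid decomposition, not the authors' implementation.

def multiply_block(args):
    a_block, b = args
    return a_block @ b                  # each worker computes its block of C

def hybrid_matmul(a, b, n_workers=4):
    blocks = np.array_split(a, n_workers, axis=0)
    with Pool(n_workers) as pool:
        c_blocks = pool.map(multiply_block, [(blk, b) for blk in blocks])
    return np.vstack(c_blocks)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, B = rng.random((512, 256)), rng.random((256, 128))
    assert np.allclose(hybrid_matmul(A, B), A @ B)
```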

  1. Morphable Computer Architectures for Highly Energy Aware Systems

    National Research Council Canada - National Science Library

    Kogge, Peter

    2004-01-01

    To achieve a revolutionary reduction in overall power consumption, computing systems must be constructed out of both inherently low-power structures and power-aware or energy-aware hardware and software subsystems...

  2. High Interactivity Visualization Software for Large Computational Data Sets Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a collection of computer tools and libraries called SciViz that enable researchers to visualize large scale data sets on HPC resources remotely...

  3. Distributed metadata in a high performance computing environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
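
    A minimal sketch of the lookup flow described in this record is given below. It is not the patented implementation; the BurstBuffer class, the hash-based owner() routing and the example key are illustrative assumptions only.

```python
import hashlib

class BurstBuffer:
    """Illustrative burst buffer holding one shard of a distributed key-value store."""
    def __init__(self, name):
        self.name = name
        self.kv_store = {}          # local portion of the distributed KV store

    def put(self, key, value):
        self.kv_store[key] = value

    def get(self, key):
        return self.kv_store.get(key)

def owner(key, buffers):
    """Deterministically map a metadata key to one burst buffer via hashing."""
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return buffers[h % len(buffers)]

buffers = [BurstBuffer(f"bb{i}") for i in range(4)]

# Store metadata for a data block, then resolve a later request for it.
key = "block:/scratch/run42/output.0017"
owner(key, buffers).put(key, {"size": 1 << 20, "offset": 0})

requested = owner(key, buffers).get(key)   # the request is routed to the owning buffer
print(requested)
```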

  4. Hybrid Computational Model for High-Altitude Aeroassist Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed effort addresses a need for accurate computational models to support aeroassist and entry vehicle system design over a broad range of flight conditions...

  5. Hybrid Computational Model for High-Altitude Aeroassist Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — A hybrid continuum/noncontinuum computational model will be developed for analyzing the aerodynamics and heating on aeroassist vehicles. Unique features of this...

  6. Benchmark Numerical Toolkits for High Performance Computing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Computational codes in physics and engineering often use implicit solution algorithms that require linear algebra tools such as Ax=b solvers, eigenvalue,...

  7. Two-dimensional computer simulation of high intensity proton beams

    CERN Document Server

    Lapostolle, Pierre M

    1972-01-01

    A computer program has been developed which simulates the two- dimensional transverse behaviour of a proton beam in a focusing channel. The model is represented by an assembly of a few thousand 'superparticles' acted upon by their own self-consistent electric field and an external focusing force. The evolution of the system is computed stepwise in time by successively solving Poisson's equation and Newton's law of motion. Fast Fourier transform techniques are used for speed in the solution of Poisson's equation, while extensive area weighting is utilized for the accurate evaluation of electric field components. A computer experiment has been performed on the CERN CDC 6600 computer to study the nonlinear behaviour of an intense beam in phase space, showing under certain circumstances a filamentation due to space charge and an apparent emittance growth. (14 refs).
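
    The simulation loop described above (area-weighted charge deposition, an FFT-based Poisson solve, and stepwise integration of Newton's law) can be sketched compactly in NumPy. The grid size, particle count, time step and normalized units below are illustrative assumptions, not the parameters of the CERN study.

```python
import numpy as np

Nx = Ny = 64                   # grid cells (periodic box)
L = 1.0
dx = dy = L / Nx
n_part = 5000                  # "superparticles"
dt = 0.05
q = -1.0 / n_part              # superparticle charge (normalized units)
q_over_m = -1.0                # charge-to-mass ratio

rng = np.random.default_rng(7)
pos = rng.random((n_part, 2)) * L
vel = 0.01 * rng.standard_normal((n_part, 2))

def deposit(pos):
    """Area-weighted (cloud-in-cell) charge deposition onto the periodic grid."""
    gx, gy = pos[:, 0] / dx, pos[:, 1] / dy
    ix0, iy0 = np.floor(gx).astype(int) % Nx, np.floor(gy).astype(int) % Ny
    fx, fy = gx - np.floor(gx), gy - np.floor(gy)
    ix1, iy1 = (ix0 + 1) % Nx, (iy0 + 1) % Ny
    rho = np.zeros((Nx, Ny))
    np.add.at(rho, (ix0, iy0), q * (1 - fx) * (1 - fy))
    np.add.at(rho, (ix1, iy0), q * fx * (1 - fy))
    np.add.at(rho, (ix0, iy1), q * (1 - fx) * fy)
    np.add.at(rho, (ix1, iy1), q * fx * fy)
    return rho / (dx * dy)

def fields(rho):
    """Solve Poisson's equation with FFTs and return E = -grad(phi) on the grid."""
    kx = 2 * np.pi * np.fft.fftfreq(Nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(Ny, d=dy)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                                # avoid division by zero
    phi_k = np.fft.fft2(rho) / k2                 # eps0 = 1 in these units
    phi_k[0, 0] = 0.0                             # neutralizing background
    Ex = np.fft.ifft2(-1j * KX * phi_k).real
    Ey = np.fft.ifft2(-1j * KY * phi_k).real
    return Ex, Ey

def gather(Ex, Ey, pos):
    """Interpolate the grid field back to the particles with the same area weights."""
    gx, gy = pos[:, 0] / dx, pos[:, 1] / dy
    ix0, iy0 = np.floor(gx).astype(int) % Nx, np.floor(gy).astype(int) % Ny
    fx, fy = gx - np.floor(gx), gy - np.floor(gy)
    ix1, iy1 = (ix0 + 1) % Nx, (iy0 + 1) % Ny
    w00, w10, w01, w11 = (1 - fx) * (1 - fy), fx * (1 - fy), (1 - fx) * fy, fx * fy
    Epx = w00 * Ex[ix0, iy0] + w10 * Ex[ix1, iy0] + w01 * Ex[ix0, iy1] + w11 * Ex[ix1, iy1]
    Epy = w00 * Ey[ix0, iy0] + w10 * Ey[ix1, iy0] + w01 * Ey[ix0, iy1] + w11 * Ey[ix1, iy1]
    return Epx, Epy

for step in range(100):                           # stepwise time integration
    rho = deposit(pos)
    Ex, Ey = fields(rho)
    Epx, Epy = gather(Ex, Ey, pos)
    vel[:, 0] += q_over_m * Epx * dt              # Newton's law of motion
    vel[:, 1] += q_over_m * Epy * dt
    pos = (pos + vel * dt) % L                    # periodic boundaries

print("mean kinetic energy:", 0.5 * np.mean(vel**2))
```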

  8. Scalability of DL_POLY on High Performance Computing Platform

    CSIR Research Space (South Africa)

    Mabakane, Mabule S

    2017-12-01

    Full Text Available running on up to 100 processors (cores) (W. Smith & Todorov, 2006). DL_POLY_3 (including 3.09) utilises a static/equi-spacial Domain Decomposition parallelisation strategy in which the simulation cell (comprising the atoms, ions or molecules) is divided..., 1997; Lange et al., 2011). Traditionally, it is expected that codes should scale linearly when one increases computational resources such as compute nodes or servers (Chamberlain, Chace, & Patil, 1998; Gropp & Snir, 2009). However, several studies...

  9. High Performance Computing Innovation Service Portal Study (HPC-ISP)

    Science.gov (United States)

    2009-04-01

    based electronic commerce interface for the goods and services available through the brokerage service. This infrastructure will also support the... electronic commerce backend functionality for third parties that want to sell custom computing services. • Tailored Industry Portals are web portals for...broker shown in Figure 8 is essentially a web server that provides remote access to computing and software resources through an electronic commerce

  10. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures.

    Science.gov (United States)

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M

    2016-01-01

    The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us to discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers in order to predict the three-dimensional arrangement of the atoms of proteins from their sequences. However, the computational complexity of this problem makes mandatory the search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame. In this review we present past and recent trends regarding protein folding simulations from both perspectives, hardware and software. Of particular interest to us are both the use of inexact solutions to this computationally hard problem as well as the hardware platforms that have been used for running this kind of Soft Computing technique.

  11. Computational Modeling and High Performance Computing in Advanced Materials Processing, Synthesis, and Design

    Science.gov (United States)

    2014-12-07

    ...thriving game industry [1, 2, 7]. The differences in CPU and GPU computational power can be traced back to the designs upon which the two are predicated

  12. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  13. HIGH-PERFORMANCE COMPUTING FOR THE STUDY OF EARTH AND ENVIRONMENTAL SCIENCE MATERIALS USING SYNCHROTRON X-RAY COMPUTED MICROTOMOGRAPHY.

    Energy Technology Data Exchange (ETDEWEB)

    FENG,H.; JONES,K.W.; MCGUIGAN,M.; SMITH,G.J.; SPILETIC,J.

    2001-10-12

    Synchrotron x-ray computed microtomography (CMT) is a non-destructive method for examination of rock, soil, and other types of samples studied in the earth and environmental sciences. The high x-ray intensities of the synchrotron source make possible the acquisition of tomographic volumes at a high rate that requires the application of high-performance computing techniques for data reconstruction to produce the three-dimensional volumes, for their visualization, and for data analysis. These problems are exacerbated by the need to share information between collaborators at widely separated locations over both local and wide-area networks. A summary of the CMT technique and examples of applications are given here together with a discussion of the applications of high-performance computing methods to improve the experimental techniques and analysis of the data.
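
    The per-slice reconstruction step at the heart of such CMT processing can be illustrated with filtered back-projection on a synthetic phantom. This sketch assumes scikit-image is available and stands in for, rather than reproduces, the high-performance reconstruction pipeline described above; in practice each slice of the volume is reconstructed independently, which is why the workload parallelizes well.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

slice_img = rescale(shepp_logan_phantom(), 0.5)        # synthetic 2D slice stand-in
theta = np.linspace(0.0, 180.0, max(slice_img.shape), endpoint=False)

sinogram = radon(slice_img, theta=theta)               # simulated projection data
reconstruction = iradon(sinogram, theta=theta)         # filtered back-projection

error = np.sqrt(np.mean((reconstruction - slice_img) ** 2))
print(f"RMS reconstruction error: {error:.4f}")
```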

  14. Issues in undergraduate education in computational science and high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Marchioro, T.L. II; Martin, D. [Ames Lab., IA (United States)

    1994-12-31

    The ever increasing need for mathematical and computational literacy within their society and among members of the work force has generated enormous pressure to revise and improve the teaching of related subjects throughout the curriculum, particularly at the undergraduate level. The Calculus Reform movement is perhaps the best known example of an organized initiative in this regard. The UCES (Undergraduate Computational Engineering and Science) project, an effort funded by the Department of Energy and administered through the Ames Laboratory, is sponsoring an informal and open discussion of the salient issues confronting efforts to improve and expand the teaching of computational science as a problem oriented, interdisciplinary approach to scientific investigation. Although the format is open, the authors hope to consider pertinent questions such as: (1) How can faculty and research scientists obtain the recognition necessary to further excellence in teaching the mathematical and computational sciences? (2) What sort of educational resources--both hardware and software--are needed to teach computational science at the undergraduate level? Are traditional procedural languages sufficient? Are PCs enough? Are massively parallel platforms needed? (3) How can electronic educational materials be distributed in an efficient way? Can they be made interactive in nature? How should such materials be tied to the World Wide Web and the growing ``Information Superhighway``?

  15. Performance Measurements in a High Throughput Computing Environment

    CERN Document Server

    AUTHOR|(CDS)2145966; Gribaudo, Marco

    The IT infrastructures of companies and research centres are implementing new technologies to satisfy the increasing need of computing resources for big data analysis. In this context, resource profiling plays a crucial role in identifying areas where the improvement of the utilisation efficiency is needed. In order to deal with the profiling and optimisation of computing resources, two complementary approaches can be adopted: the measurement-based approach and the model-based approach. The measurement-based approach gathers and analyses performance metrics executing benchmark applications on computing resources. Instead, the model-based approach implies the design and implementation of a model as an abstraction of the real system, selecting only those aspects relevant to the study. This Thesis originates from a project carried out by the author within the CERN IT department. CERN is an international scientific laboratory that conducts fundamental research in the domain of elementary particle physics. The p...

  16. High-pressure fluid phase equilibria phenomenology and computation

    CERN Document Server

    Deiters, Ulrich K

    2012-01-01

    The book begins with an overview of the phase diagrams of fluid mixtures (fluid = liquid, gas, or supercritical state), which can show an astonishing variety when elevated pressures are taken into account; phenomena like retrograde condensation (single and double) and azeotropy (normal and double) are discussed. It then gives an introduction into the relevant thermodynamic equations for fluid mixtures, including some that are rarely found in modern textbooks, and shows how they can be used to compute phase diagrams and related properties. This chapter gives a consistent and axiomatic approach to fluid thermodynamics; it avoids using activity coefficients. Further chapters are dedicated to solid-fluid phase equilibria and global phase diagrams (systematic search for phase diagram classes). The appendix contains numerical algorithms needed for the computations. The book thus enables the reader to create or improve computer programs for the calculation of fluid phase diagrams. introduces phase diagram class...

  17. DIRAC: A Scalable Lightweight Architecture for High Throughput Computing

    CERN Document Server

    Garonne, V; Stokes-Rees, I

    2004-01-01

    DIRAC (Distributed Infrastructure with Remote Agent Control) has been developed by the CERN LHCb physics experiment to facilitate large scale simulation and user analysis tasks spread across both grid and non-grid computing resources. It consists of a small set of distributed stateless Core Services, which are centrally managed, and Agents which are managed by each computing site. DIRAC utilizes concepts from existing distributed computing models to provide a lightweight, robust, and flexible system. This paper will discuss the architecture, performance, and implementation of the DIRAC system which has recently been used for an intensive physics simulation involving more than forty sites, 90 TB of data, and in excess of one thousand 1 GHz processor-years.

  18. A PROFICIENT MODEL FOR HIGH END SECURITY IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    R. Bala Chandar

    2014-01-01

    Full Text Available Cloud computing is an appealing technology due to its abilities such as ensuring scalable services and reducing the burden of local hardware and software management associated with computing, while increasing flexibility and scalability. A key trait of cloud services is the remote processing of data. Even though this technology offers a lot of services, there are a few concerns, such as misbehavior of data stored on the server side, the data owner's loss of control over their own data, and the lack of control over access to outsourced data as desired by the data owner. To handle these issues, we propose a new model to ensure data correctness for assurance of stored data, distributed accountability for authentication, and efficient access control of outsourced data for authorization. This model strengthens the correctness of data and helps to achieve cloud data integrity, supports the data owner in keeping control of their own data through tracking, and improves the access control of outsourced data.

  19. Short-term effects of implemented high intensity shoulder elevation during computer work

    Directory of Open Access Journals (Sweden)

    Madeleine Pascal

    2009-08-01

    Full Text Available Abstract Background Work-site strength training sessions are shown effective to prevent and reduce neck-shoulder pain in computer workers, but difficult to integrate in normal working routines. A solution for avoiding neck-shoulder pain during computer work may be to implement high intensity voluntary contractions during the computer work. However, it is unknown how this may influence productivity, rate of perceived exertion (RPE as well as activity and rest of neck-shoulder muscles during computer work. The aim of this study was to investigate short-term effects of a high intensity contraction on productivity, RPE and upper trapezius activity and rest during computer work and a subsequent pause from computer work. Methods 18 female computer workers performed 2 sessions of 15 min standardized computer mouse work preceded by 1 min pause with and without prior high intensity contraction of shoulder elevation. RPE was reported, productivity (drawings per min measured, and bipolar surface electromyography (EMG recorded from the dominant upper trapezius during pauses and sessions of computer work. Repeated measure ANOVA with Bonferroni corrected post-hoc tests was applied for the statistical analyses. Results The main findings were that a high intensity shoulder elevation did not modify RPE, productivity or EMG activity of the upper trapezius during the subsequent pause and computer work. However, the high intensity contraction reduced the relative rest time of the uppermost (clavicular trapezius part during the subsequent pause from computer work (p Conclusion Since a preceding high intensity shoulder elevation did not impose a negative impact on perceived effort, productivity or upper trapezius activity during computer work, implementation of high intensity contraction during computer work to prevent neck-shoulder pain may be possible without affecting the working routines. However, the unexpected reduction in clavicular trapezius rest during a

  20. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
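
    The idea can be sketched as follows: every node iterates the same chaotic (logistic) map from a shared seed, and a comparator flags any node whose trajectory diverges, since chaotic sensitivity amplifies even a tiny corrupted operation into a large difference. This is an illustrative sketch, not the patented method; the map parameter, the injected fault size and the detection threshold are assumptions.

```python
import numpy as np

def logistic_trajectory(x0, steps, r=3.9999, fault_at=None):
    """Iterate x -> r*x*(1-x); optionally inject a tiny fault at one step."""
    x = x0
    traj = np.empty(steps)
    for i in range(steps):
        x = r * x * (1.0 - x)
        if fault_at is not None and i == fault_at:
            x += 1e-12                      # simulated silent hardware error
        traj[i] = x
    return traj

seed, steps = 0.123456789, 200
reference = logistic_trajectory(seed, steps)

# "node2" suffers a simulated single-bit-scale error halfway through its run.
nodes = {
    "node0": logistic_trajectory(seed, steps),
    "node1": logistic_trajectory(seed, steps),
    "node2": logistic_trajectory(seed, steps, fault_at=100),
}

for name, traj in nodes.items():
    deviation = np.max(np.abs(traj - reference))
    status = "FAILED" if deviation > 1e-6 else "ok"
    print(f"{name}: max deviation {deviation:.3e} -> {status}")
```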

  1. Short-term effects of implemented high intensity shoulder elevation during computer work

    DEFF Research Database (Denmark)

    Larsen, Mette K; Samani, Afshin; Madeleine, Pascal

    2009-01-01

    contractions during the computer work. However, it is unknown how this may influence productivity, rate of perceived exertion (RPE) as well as activity and rest of neck-shoulder muscles during computer work. The aim of this study was to investigate short-term effects of a high intensity contraction...... on productivity, RPE and upper trapezius activity and rest during computer work and a subsequent pause from computer work. METHODS: 18 female computer workers performed 2 sessions of 15 min standardized computer mouse work preceded by 1 min pause with and without prior high intensity contraction of shoulder....... RESULTS: The main findings were that a high intensity shoulder elevation did not modify RPE, productivity or EMG activity of the upper trapezius during the subsequent pause and computer work. However, the high intensity contraction reduced the relative rest time of the uppermost (clavicular) trapezius...

  2. High performance computing system for flight simulation at NASA Langley

    Science.gov (United States)

    Cleveland, Jeff I., II; Sudik, Steven J.; Grove, Randall D.

    1991-01-01

    The computer architecture and components used in the NASA Langley Advanced Real-Time Simulation System (ARTSS) are briefly described and illustrated with diagrams and graphs. Particular attention is given to the advanced Convex C220 processing units, the UNIX-based operating system, the software interface to the fiber-optic-linked Computer Automated Measurement and Control system, configuration-management and real-time supervisor software, ARTSS hardware modifications, and the current implementation status. Simulation applications considered include the Transport Systems Research Vehicle, the Differential Maneuvering Simulator, the General Aviation Simulator, and the Visual Motion Simulator.

  3. High school computer science education paves the way for higher education: the Israeli case

    Science.gov (United States)

    Armoni, Michal; Gal-Ezer, Judith

    2014-07-01

    The gap between enrollments in higher education computing programs and the high-tech industry's demands is widely reported, and is especially prominent for women. Increasing the availability of computer science education in high school is one of the strategies suggested in order to address this gap. We look at the connection between exposure to computer science in high school and pursuing computing in higher education. We also examine the gender gap, in the context of high school computer science education. We show that in Israel, students who took the high-level computer science matriculation exam were more likely to pursue computing in higher education. Regarding the issue of gender, we will show that, in general, in Israel the difference between males and females who take computer science in high school is relatively small, and a larger, though still not very large difference exists only for the highest exam level. In addition, exposing females to high-level computer science in high school has more relative impact on pursuing higher education in computing.

  4. High-Throughput Computing on High-Performance Platforms: A Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Matteo, Turilli [Rutgers University; Angius, Alessio [Rutgers University; Oral, H Sarp [ORNL; De, K [University of Texas at Arlington; Klimentov, A [Brookhaven National Laboratory (BNL); Wells, Jack C. [ORNL; Jha, S [Rutgers University

    2017-10-01

    The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size resources. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan -- a DOE leadership facility -- in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.

  5. Antecedents and consequences of drug abuse in rats selectively bred for high and low response to novelty.

    Science.gov (United States)

    Flagel, Shelly B; Waselus, Maria; Clinton, Sarah M; Watson, Stanley J; Akil, Huda

    2014-01-01

    Human genetic and epidemiological studies provide evidence that only a subset of individuals who experiment with potentially addictive drugs become addicts. What renders some individuals susceptible to addiction remains to be determined, but most would agree that there is no single trait underlying the disorder. However, there is evidence in humans that addiction liability has a genetic component, and that certain personality characteristics related to temperament (e.g. the sensation-seeking trait) are associated with individual differences in addiction liability. Consequently, we have used a selective breeding strategy based on locomotor response to a novel environment to generate two lines of rats with distinct behavioral characteristics. We have found that the resulting phenotypes differ on a number of neurobehavioral dimensions relevant to addiction. Relative to bred low-responder (bLR) rats, bred high-responder (bHR) rats exhibit increased exploratory behavior, are more impulsive, more aggressive, seek stimuli associated with rewards, and show a greater tendency to relapse. We therefore utilize this unique animal model to parse the genetic, neural and environmental factors that contribute to addiction liability. Our work shows that the glucocorticoid receptor (GR), dopaminergic molecules, and members of the fibroblast growth factor family are among the neurotransmitters and neuromodulators that play a role in both the initial susceptibility to addiction as well as the altered neural responses that follow chronic drug exposure. Moreover, our findings suggest that the hippocampus plays a major role in mediating vulnerability to addiction. It is hoped that this work will emphasize the importance of personalized treatment strategies and identify novel therapeutic targets for humans suffering from addictive disorders. This article is part of a Special Issue entitled 'NIDA 40th Anniversary Issue'. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Short-term effects of implemented high intensity shoulder elevation during computer work.

    Science.gov (United States)

    Larsen, Mette K; Samani, Afshin; Madeleine, Pascal; Olsen, Henrik B; Søgaard, Karen; Holtermann, Andreas

    2009-08-10

    Work-site strength training sessions are shown effective to prevent and reduce neck-shoulder pain in computer workers, but difficult to integrate in normal working routines. A solution for avoiding neck-shoulder pain during computer work may be to implement high intensity voluntary contractions during the computer work. However, it is unknown how this may influence productivity, rate of perceived exertion (RPE) as well as activity and rest of neck-shoulder muscles during computer work. The aim of this study was to investigate short-term effects of a high intensity contraction on productivity, RPE and upper trapezius activity and rest during computer work and a subsequent pause from computer work. 18 female computer workers performed 2 sessions of 15 min standardized computer mouse work preceded by 1 min pause with and without prior high intensity contraction of shoulder elevation. RPE was reported, productivity (drawings per min) measured, and bipolar surface electromyography (EMG) recorded from the dominant upper trapezius during pauses and sessions of computer work. Repeated measure ANOVA with Bonferroni corrected post-hoc tests was applied for the statistical analyses. The main findings were that a high intensity shoulder elevation did not modify RPE, productivity or EMG activity of the upper trapezius during the subsequent pause and computer work. However, the high intensity contraction reduced the relative rest time of the uppermost (clavicular) trapezius part during the subsequent pause from computer work (p shoulder elevation did not impose a negative impact on perceived effort, productivity or upper trapezius activity during computer work, implementation of high intensity contraction during computer work to prevent neck-shoulder pain may be possible without affecting the working routines. However, the unexpected reduction in clavicular trapezius rest during a pause with preceding high intensity contraction requires further investigation before high
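
    The "relative rest time" measure reported in these records can be approximated as the fraction of recording time a rectified, smoothed EMG envelope stays below a small threshold. The sketch below uses a 0.5 %EMGmax threshold and a 0.125 s smoothing window as assumed, literature-typical values, and synthetic data rather than the study's recordings.

```python
import numpy as np

def relative_rest_time(emg, fs, emg_max, threshold_pct=0.5, window_s=0.125):
    """Percentage of samples whose smoothed, normalized EMG is below the rest threshold."""
    rectified = np.abs(emg)
    win = max(1, int(window_s * fs))
    kernel = np.ones(win) / win
    smoothed = np.convolve(rectified, kernel, mode="same")   # moving-average envelope
    normalized = 100.0 * smoothed / emg_max                  # %EMGmax
    return 100.0 * np.mean(normalized < threshold_pct)

# Synthetic example: 60 s of low-level activity with one contraction burst.
fs = 1000
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
emg = 0.002 * rng.standard_normal(t.size)
emg[10000:15000] += 0.05 * rng.standard_normal(5000)
print(f"Relative rest time: {relative_rest_time(emg, fs, emg_max=1.0):.1f}%")
```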

  7. High Performance Computing Assets for Ocean Acoustics Research

    Science.gov (United States)

    2016-11-18

    can be tested against theory, practicality of computational methods can be determined, and studies of underwater acoustics phenomena can be...our models. APPROACH As outlined above, each processor in the acoustic calculation must access a large amount of memory at this time. This is

  8. Are certain college students prone to experiencing excessive alcohol-related consequences? Predicting membership in a high-risk subgroup using pre-college profiles.

    Science.gov (United States)

    Varvil-Weld, Lindsey; Mallett, Kimberly A; Turrisi, Rob; Cleveland, Michael J; Abar, Caitlin C

    2013-07-01

    Previous research identified a high-risk subgroup of students who experience high levels of multiple and repeated alcohol-related consequences (MRC group). Although they consist of 20% of the population and account for nearly 50% of the consequences, the MRC group has not been the focus of etiological or prevention research. The present study identified pre-college profiles of psychosocial and behavioral characteristics and examined the association between these profiles and membership in the MRC group. The sample consisted of 370 first-year college students (57% female) recruited in the summer before college. Participants reported on typical drinking, alcohol-related risky and protective drinking behaviors, alcohol beliefs, descriptive and injunctive norms, and alcohol-related consequences at three time points over 15 months. Latent profile analysis identified four baseline student profiles: extreme-consequence drinkers, high-risk drinkers, protective drinkers, and nondrinkers. Logistic regression revealed that, when the high-risk drinkers were used as the reference group, both the protective drinkers and the nondrinkers were significantly less likely to be members of the MRC group, whereas the extreme-consequence drinkers were at increased odds of being in the MRC group, even after first-year drinking was controlled for. Student profiles and previously identified parental profiles both had unique main effects on MRC group membership, but there was no significant interaction between parental and student profiles. Findings suggest ways that brief interventions can be tailored for students and parents in relation to the MRC group.

  9. Session on High Speed Civil Transport Design Capability Using MDO and High Performance Computing

    Science.gov (United States)

    Rehder, Joe

    2000-01-01

    Since the inception of CAS in 1992, NASA Langley has been conducting research into applying multidisciplinary optimization (MDO) and high performance computing toward reducing aircraft design cycle time. The focus of this research has been the development of a series of computational frameworks and associated applications that increased in capability, complexity, and performance over time. The culmination of this effort is an automated high-fidelity analysis capability for a high speed civil transport (HSCT) vehicle installed on a network of heterogeneous computers with a computational framework built using Common Object Request Broker Architecture (CORBA) and Java. The main focus of the research in the early years was the development of the Framework for Interdisciplinary Design Optimization (FIDO) and associated HSCT applications. While the FIDO effort was eventually halted, work continued on HSCT applications of ever increasing complexity. The current application, HSCT4.0, employs high fidelity CFD and FEM analysis codes. For each analysis cycle, the vehicle geometry and computational grids are updated using new values for design variables. Processes for aeroelastic trim, loads convergence, displacement transfer, stress and buckling, and performance have been developed. In all, a total of 70 processes are integrated in the analysis framework. Many of the key processes include automatic differentiation capabilities to provide sensitivity information that can be used in optimization. A software engineering process was developed to manage this large project. Defining the interactions among 70 processes turned out to be an enormous, but essential, task. A formal requirements document was prepared that defined data flow among processes and subprocesses. A design document was then developed that translated the requirements into actual software design. A validation program was defined and implemented to ensure that codes integrated into the framework produced the same

  10. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  11. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  12. A REST Model for High Throughput Scheduling in Computational Grids

    CERN Document Server

    Stokes-Rees, Ian James; McKeever, S

    2006-01-01

    Current grid computing architectures have been based on cluster management and batch queuing systems, extended to a distributed, federated domain. These have shown shortcomings in terms of scalability, stability, and modularity. To address these problems, this dissertation applies architectural styles from the Internet and Web to the domain of generic computational grids. Using the REST style, a flexible model for grid resource interaction is developed which removes the need for any centralised services or specific protocols, thereby allowing a range of implementations and layering of further functionality. The context for resource interaction is a generalisation and formalisation of the Condor ClassAd match-making mechanism. This set theoretic model is described in depth, including the advantages and features which it realises. This RESTful style is also motivated by operational experience with existing grid infrastructures, and the design, operation, and performance of a proto-RESTful grid middleware packag...
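
    The generalized ClassAd-style match-making that the dissertation formalizes can be illustrated in a few lines: a job advertises predicates over resource attributes, and a match is any resource satisfying all of them. The attribute names and predicate format below are assumptions for illustration, not the dissertation's formal set-theoretic model.

```python
def matches(requirements, resource):
    """Every requirement is a predicate over the resource's attribute dictionary."""
    return all(pred(resource.get(attr)) for attr, pred in requirements.items())

resources = [
    {"name": "siteA", "cpus": 64, "memory_gb": 256, "os": "linux", "gpu": False},
    {"name": "siteB", "cpus": 8, "memory_gb": 32, "os": "linux", "gpu": True},
    {"name": "siteC", "cpus": 128, "memory_gb": 512, "os": "windows", "gpu": False},
]

job_requirements = {
    "cpus": lambda v: v is not None and v >= 16,
    "memory_gb": lambda v: v is not None and v >= 64,
    "os": lambda v: v == "linux",
}

candidates = [r["name"] for r in resources if matches(job_requirements, r)]
print("Matching resources:", candidates)   # -> ['siteA']
```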

  13. Reconfigurable computer array: The bridge between high speed sensors and low speed computing

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, S.H.; Caffrey, M.P.; Dunham, M.E.

    1998-06-16

    A universal limitation of RF and imaging front-end sensors is that they easily produce data at a higher rate than any general-purpose computer can continuously handle. Therefore, Los Alamos National Laboratory has developed a custom Reconfigurable Computing Array board to support a large variety of processing applications including wideband RF signals, LIDAR and multi-dimensional imaging. The board's design exploits three key features to achieve its performance. First, there are large banks of fast memory dedicated to each reconfigurable processor and also shared between pairs of processors. Second, there are dedicated data paths between processors, and from a processor to flexible I/O interfaces. Third, the design provides the ability to link multiple boards into a serial and/or parallel structure.

  14. High Performance Computing and Visualization Infrastructure for Simultaneous Parallel Computing and Parallel Visualization Research

    Science.gov (United States)

    2016-11-09

    ...compute nodes with 340 cores, 20 NVIDIA GPGPUs, and 14 Intel Co-Phi processors. The visualization infrastructure is a next generation tiled Mini CAVE for semi-immersive visualization...gas engines, long-range acoustic propagation simulations, numerical modeling of nonlinear nanophotonic devices, and molecular dynamics simulations

  15. Positron Emission Tomography/Computed Tomography Imaging of Residual Skull Base Chordoma Before Radiotherapy Using Fluoromisonidazole and Fluorodeoxyglucose: Potential Consequences for Dose Painting

    Energy Technology Data Exchange (ETDEWEB)

    Mammar, Hamid, E-mail: hamid.mammar@unice.fr [Radiation Oncology Department, Antoine Lacassagne Center, Nice (France); CNRS-UMR 6543, Institute of Developmental Biology and Cancer, University of Nice Sophia Antipolis, Nice (France); Kerrou, Khaldoun; Nataf, Valerie [Department of Nuclear Medicine and Radiopharmacy, Tenon Hospital, and University Pierre et Marie Curie, Paris (France); Pontvert, Dominique [Proton Therapy Center of Orsay, Curie Institute, Paris (France); Clemenceau, Stephane [Department of Neurosurgery, Pitie-Salpetriere Hospital, Paris (France); Lot, Guillaume [Department of Neurosurgery, Adolph De Rothschild Foundation, Paris (France); George, Bernard [Department of Neurosurgery, Lariboisiere Hospital, Paris (France); Polivka, Marc [Department of Pathology, Lariboisiere Hospital, Paris (France); Mokhtari, Karima [Department of Pathology, Pitie-Salpetriere Hospital, Paris (France); Ferrand, Regis; Feuvret, Loïc; Habrand, Jean-Louis [Proton Therapy Center of Orsay, Curie Institute, Paris (France); Pouyssegur, Jacques; Mazure, Nathalie [CNRS-UMR 6543, Institute of Developmental Biology and Cancer, University of Nice Sophia Antipolis, Nice (France); Talbot, Jean-Noël [Department of Nuclear Medicine and Radiopharmacy, Tenon Hospital, and University Pierre et Marie Curie, Paris (France)

    2012-11-01

    Purpose: To detect the presence of hypoxic tissue, which is known to increase the radioresistant phenotype, by its uptake of fluoromisonidazole (18F) (FMISO) using hybrid positron emission tomography/computed tomography (PET/CT) imaging, and to compare it with the glucose-avid tumor tissue imaged with fluorodeoxyglucose (18F) (FDG), in residual postsurgical skull base chordoma scheduled for radiotherapy. Patients and Methods: Seven patients with incompletely resected skull base chordomas were planned for high-dose radiotherapy (dose ≥70 Gy). All 7 patients underwent FDG and FMISO PET/CT. Images were analyzed qualitatively by visual examination and semiquantitatively by computing the ratio of the maximal standardized uptake value (SUVmax) of the tumor and cerebellum (T/C R), with delineation of lesions on conventional imaging. Results: Of the eight lesion sites imaged with FDG PET/CT, only one was visible, whereas seven of nine lesions were visible on FMISO PET/CT. The median SUVmax in the tumor area was 2.8 g/mL (minimum 2.1; maximum 3.5) for FDG and 0.83 g/mL (minimum 0.3; maximum 1.2) for FMISO. The T/C R values ranged between 0.30 and 0.63 for FDG (median, 0.41) and between 0.75 and 2.20 for FMISO (median, 1.59). FMISO T/C R >1 in six lesions suggested the presence of hypoxic tissue. There was no correlation between FMISO and FDG uptake in individual chordomas (r = 0.18, p = 0.7). Conclusion: FMISO PET/CT enables imaging of the hypoxic component in residual chordomas. In the future, it could help to better define boosted volumes for irradiation and to overcome the radioresistance of these lesions. No relationship was found between hypoxia and glucose metabolism in these tumors after initial surgery.
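
    The semiquantitative measure used above, the tumor-to-cerebellum ratio (T/C R), is simply the SUVmax of the lesion volume of interest divided by the SUVmax of the cerebellum reference region. The sketch below computes it on synthetic stand-in voxel values, not patient data.

```python
import numpy as np

def tc_ratio(tumor_suv, cerebellum_suv):
    """Tumor-to-cerebellum ratio of maximal standardized uptake values."""
    return float(np.max(tumor_suv) / np.max(cerebellum_suv))

rng = np.random.default_rng(2)
tumor_fmiso = rng.uniform(0.2, 1.2, size=500)        # SUV values in the lesion VOI
cerebellum_fmiso = rng.uniform(0.3, 0.7, size=500)   # SUV values in the reference VOI

ratio = tc_ratio(tumor_fmiso, cerebellum_fmiso)
label = "hypoxic component suggested" if ratio > 1 else "no hypoxia signal"
print(f"FMISO T/C R = {ratio:.2f} -> {label}")
```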

  16. Metabolic consequences of sepsis-induced acute lung injury revealed by plasma 1H-nuclear magnetic resonance quantitative metabolomics and computational analysis

    Science.gov (United States)

    Serkova, Natalie J.; Karnovsky, Alla; Guire, Kenneth; Paine, Robert; Standiford, Theodore J.

    2011-01-01

    Metabolomics is an emerging component of systems biology that may be a viable strategy for the identification and validation of physiologically relevant biomarkers. Nuclear magnetic resonance (NMR) spectroscopy allows for establishing quantitative data sets for multiple endogenous metabolites without preconception. Sepsis-induced acute lung injury (ALI) is a complex and serious illness associated with high morbidity and mortality for which there is presently no effective pharmacotherapy. The goal of this study was to apply 1H-NMR based quantitative metabolomics with subsequent computational analysis to begin working towards elucidating the plasma metabolic changes associated with sepsis-induced ALI. To this end, this pilot study generated quantitative data sets that revealed differences between patients with ALI and healthy subjects in the level of the following metabolites: total glutathione, adenosine, phosphatidylserine, and sphingomyelin. Moreover, myoinositol levels were associated with acute physiology scores (APS) (ρ = −0.53, P = 0.05, q = 0.25) and ventilator-free days (ρ = −0.73, P = 0.005, q = 0.01). There was also an association between total glutathione and APS (ρ = 0.56, P = 0.04, q = 0.25). Computational network analysis revealed a distinct metabolic pathway for each metabolite. In summary, this pilot study demonstrated the feasibility of plasma 1H-NMR quantitative metabolomics because it yielded a physiologically relevant metabolite data set that distinguished sepsis-induced ALI from health. In addition, it justifies the continued study of this approach to determine whether sepsis-induced ALI has a distinct metabolic phenotype and whether there are predictive biomarkers of severity and outcome in these patients. PMID:20889676

  17. Metabolic consequences of sepsis-induced acute lung injury revealed by plasma ¹H-nuclear magnetic resonance quantitative metabolomics and computational analysis.

    Science.gov (United States)

    Stringer, Kathleen A; Serkova, Natalie J; Karnovsky, Alla; Guire, Kenneth; Paine, Robert; Standiford, Theodore J

    2011-01-01

    Metabolomics is an emerging component of systems biology that may be a viable strategy for the identification and validation of physiologically relevant biomarkers. Nuclear magnetic resonance (NMR) spectroscopy allows for establishing quantitative data sets for multiple endogenous metabolites without preconception. Sepsis-induced acute lung injury (ALI) is a complex and serious illness associated with high morbidity and mortality for which there is presently no effective pharmacotherapy. The goal of this study was to apply ¹H-NMR based quantitative metabolomics with subsequent computational analysis to begin working towards elucidating the plasma metabolic changes associated with sepsis-induced ALI. To this end, this pilot study generated quantitative data sets that revealed differences between patients with ALI and healthy subjects in the level of the following metabolites: total glutathione, adenosine, phosphatidylserine, and sphingomyelin. Moreover, myoinositol levels were associated with acute physiology scores (APS) (ρ = -0.53, P = 0.05, q = 0.25) and ventilator-free days (ρ = -0.73, P = 0.005, q = 0.01). There was also an association between total glutathione and APS (ρ = 0.56, P = 0.04, q = 0.25). Computational network analysis revealed a distinct metabolic pathway for each metabolite. In summary, this pilot study demonstrated the feasibility of plasma ¹H-NMR quantitative metabolomics because it yielded a physiologically relevant metabolite data set that distinguished sepsis-induced ALI from health. In addition, it justifies the continued study of this approach to determine whether sepsis-induced ALI has a distinct metabolic phenotype and whether there are predictive biomarkers of severity and outcome in these patients.
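
    The reported associations are Spearman rank correlations with false-discovery-rate q-values. A hedged sketch of that analysis pattern is shown below using SciPy and statsmodels on synthetic placeholder data, not the study's measurements.

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)
n_patients = 13
myoinositol = rng.normal(50, 10, n_patients)
clinical = {
    "acute_physiology_score": 0.8 * myoinositol + rng.normal(0, 8, n_patients),
    "ventilator_free_days": -0.5 * myoinositol + rng.normal(0, 6, n_patients),
}

labels, p_values = [], []
for name, values in clinical.items():
    rho, p = spearmanr(myoinositol, values)       # rank correlation per outcome
    labels.append(name)
    p_values.append(p)
    print(f"{name}: rho = {rho:+.2f}, P = {p:.3f}")

# q-values via Benjamini-Hochberg false-discovery-rate correction
_, q_values, _, _ = multipletests(p_values, method="fdr_bh")
for name, q in zip(labels, q_values):
    print(f"{name}: q = {q:.3f}")
```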

  18. An Embedded System for applying High Performance Computing in Educational Learning Activity

    Directory of Open Access Journals (Sweden)

    Irene Erlyn Wina Rachmawan

    2016-08-01

    Full Text Available HPC (High Performance Computing) has become more popular in the last few years. With its high computational power, HPC has an impact on industry, scientific research and educational activities. Implementing HPC in a university curriculum can consume a lot of resources, because well-known HPC systems are built from personal computers or servers, and using PCs as the practical modules requires considerable resources and space. This paper presents an innovative high performance computing cluster system to support educational learning activities in an HPC course that is small, low cost, and yet powerful enough. High performance computing is usually implemented as cluster computing and requires high-specification, expensive computers, which is not efficient for educational activities such as learning in class. Therefore, our proposed system is built from inexpensive embedded components to make high performance computing applicable to learning in the classroom. Students are involved in the construction of the embedded system: they build clusters from basic embedded and network components, benchmark their performance, and implement a simple parallel case using the cluster. In this research we evaluated the embedded system against an i5 PC; the NAS benchmark performance of our embedded system is similar to that of i5 PCs. We also conducted surveys about student learning satisfaction, which show that with the embedded system students are able to learn about HPC from building the system through to writing an application that uses it.

  19. Perceived vulnerability moderates the relations between the use of protective behavioral strategies and alcohol use and consequences among high-risk young adults.

    Science.gov (United States)

    Garcia, Tracey A; Fairlie, Anne M; Litt, Dana M; Waldron, Katja A; Lewis, Melissa A

    2018-02-15

    Drinking protective behavioral strategies (PBS) have been associated with reductions in alcohol use and alcohol-related consequences in young adults. PBS subscales, Limiting/Stopping (LS), Manner of Drinking (MOD), and Serious Harm Reduction (SHR), have been examined in the literature; LS, MOD, and SHR have mixed support as protective factors. Understanding moderators between PBS and alcohol use and related consequences is an important development in PBS research in order to delineate when and for whom PBS use is effective in reducing harm from alcohol use. Perceptions of vulnerability to negative consequences, included in health-risk models, may be one such moderator. The current study examined whether two types of perceived vulnerability (perceived vulnerability when drinking; perceived vulnerability in uncomfortable/unfamiliar situations) moderated the relations between LS, MOD, SHR strategies and alcohol use and related negative consequences. High-risk young adults (N = 400; 53.75% female) recruited nationally completed measures of PBS, alcohol use and related consequences, and measures of perceived vulnerability. Findings demonstrated that perceived vulnerability when drinking moderated the relations between MOD strategies and alcohol use. The interactions between perceived vulnerability when drinking and PBS did not predict alcohol-related consequences. Perceived vulnerability in unfamiliar/uncomfortable social situations moderated relations between MOD strategies and both alcohol use and related negative consequences; no other significant interactions emerged. Across both perceived vulnerability types and MOD strategies, those with the highest levels of perceived vulnerability and who used MOD strategies the most had the greatest decrements in alcohol use and related negative consequences. Prevention and intervention implications are discussed. Copyright © 2018. Published by Elsevier Ltd.
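
    Moderation of this kind is typically tested with an interaction term in a regression model. The sketch below, using statsmodels with simulated data and assumed variable names, shows the pattern (a protective-strategies by perceived-vulnerability interaction predicting consequences) rather than the study's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 400
df = pd.DataFrame({
    "mod_strategies": rng.normal(0, 1, n),           # standardized MOD PBS use
    "perceived_vulnerability": rng.normal(0, 1, n),  # standardized vulnerability
})
# Simulate an outcome whose slope for MOD strategies depends on vulnerability.
df["consequences"] = (
    5 - 1.0 * df["mod_strategies"]
    - 0.8 * df["mod_strategies"] * df["perceived_vulnerability"]
    + rng.normal(0, 1, n)
)

# The '*' in the formula expands to both main effects plus their interaction.
model = smf.ols("consequences ~ mod_strategies * perceived_vulnerability", data=df).fit()
print(model.summary().tables[1])   # the interaction row is the moderation test
```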

  20. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    Energy Technology Data Exchange (ETDEWEB)

    Potok, Thomas E [ORNL; Schuman, Catherine D [ORNL; Young, Steven R [ORNL; Patton, Robert M [ORNL; Spedalieri, Federico [University of Southern California, Information Sciences Institute; Liu, Jeremy [University of Southern California, Information Sciences Institute; Yao, Ke-Thia [University of Southern California, Information Sciences Institute; Rose, Garrett [University of Tennessee (UT); Chakma, Gangotree [University of Tennessee (UT)

    2016-01-01

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  1. High performance computing applied to simulation of the flow in pipes; Computacao de alto desempenho aplicada a simulacao de escoamento em dutos

    Energy Technology Data Exchange (ETDEWEB)

    Cozin, Cristiane; Lueders, Ricardo; Morales, Rigoberto E.M. [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil). Dept. de Engenharia Mecanica

    2008-07-01

    In recent years, computer clusters have emerged as a real alternative for solving problems that require high performance computing, and the development of new applications has been driven by this. Among them, flow simulation represents a real computational burden, especially for large systems. This work presents a study of using parallel computing for numerical fluid flow simulation in pipelines. A mathematical flow model is numerically solved. In general, this procedure leads to a tridiagonal system of equations suitable to be solved by a parallel algorithm. In this work, this is accomplished by a parallel odd-even reduction method found in the literature, implemented in the Fortran programming language. A computational platform composed of twelve processors was used. Many measures of CPU times for different tridiagonal system sizes and numbers of processors were obtained, highlighting the communication time between processors as an important issue to be considered when evaluating the performance of parallel applications. (author)
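
    One level of the odd-even (cyclic) reduction the authors parallelized can be sketched serially: even-indexed equations are combined with their odd neighbours so the odd unknowns drop out, halving the tridiagonal system, which is then solved directly and the odd unknowns recovered by back substitution. The sketch assumes NumPy and SciPy and illustrates the transformation, not the paper's Fortran/MPI code.

```python
import numpy as np
from scipy.linalg import solve_banded

def odd_even_reduction_step(a, b, c, d):
    """One reduction level. Row i reads a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]."""
    n = len(b)
    ar, br, cr, dr = [], [], [], []
    for i in range(0, n, 2):
        alpha = -a[i] / b[i - 1] if i - 1 >= 0 else 0.0
        gamma = -c[i] / b[i + 1] if i + 1 < n else 0.0
        ar.append(alpha * a[i - 1] if i - 1 >= 0 else 0.0)
        cr.append(gamma * c[i + 1] if i + 1 < n else 0.0)
        br.append(b[i]
                  + (alpha * c[i - 1] if i - 1 >= 0 else 0.0)
                  + (gamma * a[i + 1] if i + 1 < n else 0.0))
        dr.append(d[i]
                  + (alpha * d[i - 1] if i - 1 >= 0 else 0.0)
                  + (gamma * d[i + 1] if i + 1 < n else 0.0))
    return np.array(ar), np.array(br), np.array(cr), np.array(dr)

def back_substitute(a, b, c, d, x_even, n):
    """Recover the eliminated odd unknowns from the even ones."""
    x = np.zeros(n)
    x[0::2] = x_even
    for i in range(1, n, 2):
        right = c[i] * x[i + 1] if i + 1 < n else 0.0
        x[i] = (d[i] - a[i] * x[i - 1] - right) / b[i]
    return x

# Build a random diagonally dominant tridiagonal system A x = d.
n = 9
rng = np.random.default_rng(5)
a = np.r_[0.0, rng.uniform(-1, 1, n - 1)]      # sub-diagonal (a[0] unused)
c = np.r_[rng.uniform(-1, 1, n - 1), 0.0]      # super-diagonal (c[-1] unused)
b = 4.0 + rng.uniform(0, 1, n)                 # main diagonal
d = rng.uniform(-1, 1, n)

ar, br, cr, dr = odd_even_reduction_step(a, b, c, d)

# Solve the half-size reduced system with a banded direct solver.
m = len(br)
ab = np.zeros((3, m))
ab[0, 1:] = cr[:-1]     # super-diagonal
ab[1, :] = br           # main diagonal
ab[2, :-1] = ar[1:]     # sub-diagonal
x_even = solve_banded((1, 1), ab, dr)

x = back_substitute(a, b, c, d, x_even, n)

# Verify against a dense solve.
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print("max error:", np.max(np.abs(x - np.linalg.solve(A, d))))
```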

  2. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    [ABSTRACT] Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chip in a high-performance workstation. In terms of high-performance computation capabilities, graphics processing units (GPUs) deliver much more powerful performance than conventional CPUs by means of parallel processing. In 2007, the birth of Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in the general purpose GPU a...

  3. Modern teaching methods in computer science courses at professional high schools

    OpenAIRE

    Mede, Gregor

    2012-01-01

    The purpose of the thesis is to demonstrate efficiency of collaborative and constructivistic approach to teaching computer science on the high school level of education. This approach is introduced in field of computer networking, regarding topic of wireless networks. At the beginning of theoretical part we present computer science teaching methods used today in high schools and e-learning materials already created for this purpose. Next we describe some other methods of teaching with col...

  4. The application of a computer data acquisition system to a new high temperature tribometer

    Science.gov (United States)

    Bonham, Charles D.; Dellacorte, Christopher

    1991-01-01

    The two data acquisition computer programs are described which were developed for a high temperature friction and wear test apparatus, a tribometer. The raw data produced by the tribometer and the methods used to sample that data are explained. In addition, the instrumentation and computer hardware and software are presented. Also shown is how computer data acquisition was applied to increase convenience and productivity on a high temperature tribometer.

  5. High-Performance Computer Modeling of the Cosmos-Iridium Collision

    Energy Technology Data Exchange (ETDEWEB)

    Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W

    2009-08-28

    This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites, (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems, (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We will also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.

  6. Using High Performance Computing to Support Water Resource Planning

    Energy Technology Data Exchange (ETDEWEB)

    Groves, David G. [RAND Corporation, Santa Monica, CA (United States); Lembert, Robert J. [RAND Corporation, Santa Monica, CA (United States); May, Deborah W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leek, James R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Syme, James [RAND Corporation, Santa Monica, CA (United States)

    2015-10-22

    In recent years, decision support modeling has embraced deliberation-with-analysis, an iterative process in which decisionmakers come together with experts to evaluate a complex problem and alternative solutions in a scientifically rigorous and transparent manner. Simulation modeling supports decisionmaking throughout this process; visualizations enable decisionmakers to assess how proposed strategies stand up over time in uncertain conditions. But running these simulation models on standard computers can be slow. This, in turn, can slow the entire decisionmaking process, interrupting valuable interaction between decisionmakers and analytics.

  7. Energy-efficient high performance computing measurement and tuning

    CERN Document Server

    Laros, James H., III; Kelly, Sue

    2012-01-01

    In this work, the unique power measurement capabilities of the Cray XT architecture were exploited to gain an understanding of power and energy use, and the effects of tuning both CPU and network bandwidth. Modifications were made to deterministically halt cores when idle. Additionally, capabilities were added to alter operating P-state. At the application level, an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale is gained by simultaneously collecting current and voltage measurements on the hosting nod

  8. High-performance computational solutions in protein bioinformatics

    CERN Document Server

    Mrozek, Dariusz

    2014-01-01

    Recent developments in computer science enable algorithms previously perceived as too time-consuming to now be efficiently used for applications in bioinformatics and life sciences. This work focuses on proteins and their structures, protein structure similarity searching at main representation levels and various techniques that can be used to accelerate similarity searches. Divided into four parts, the first part provides a formal model of 3D protein structures for functional genomics, comparative bioinformatics and molecular modeling. The second part focuses on the use of multithreading for

  9. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these we report the 300 in this review that are consistent with guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  10. The Relationship between Utilization of Computer Games and Spatial Abilities among High School Students

    Science.gov (United States)

    Motamedi, Vahid; Yaghoubi, Razeyah Mohagheghyan

    2015-01-01

    This study aimed at investigating the relationship between computer game use and spatial abilities among high school students. The sample consisted of 300 high school male students selected through multi-stage cluster sampling. Data gathering tools consisted of a researcher-made questionnaire (to collect information on computer game usage) and the…

  11. Computer Science in High School Graduation Requirements. ECS Education Trends (Updated)

    Science.gov (United States)

    Zinth, Jennifer

    2016-01-01

    Allowing high school students to fulfill a math or science high school graduation requirement via a computer science credit may encourage more students to pursue computer science coursework. This Education Trends report is an update to the original report released in April 2015 and explores state policies that allow or require districts to apply…

  12. High School Computer Science Education Paves the Way for Higher Education: The Israeli Case

    Science.gov (United States)

    Armoni, Michal; Gal-Ezer, Judith

    2014-01-01

    The gap between enrollments in higher education computing programs and the high-tech industry's demands is widely reported, and is especially prominent for women. Increasing the availability of computer science education in high school is one of the strategies suggested in order to address this gap. We look at the connection between exposure to…

  13. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    Science.gov (United States)

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  14. A high-performance brain-computer interface

    Science.gov (United States)

    Santhanam, Gopal; Ryu, Stephen I.; Yu, Byron M.; Afshar, Afsheen; Shenoy, Krishna V.

    2006-07-01

    Recent studies have demonstrated that monkeys and humans can use signals from the brain to guide computer cursors. Brain-computer interfaces (BCIs) may one day assist patients suffering from neurological injury or disease, but relatively low system performance remains a major obstacle. In fact, the speed and accuracy with which keys can be selected using BCIs is still far lower than for systems relying on eye movements. This is true whether BCIs use recordings from populations of individual neurons using invasive electrode techniques or electroencephalogram recordings using less- or non-invasive techniques. Here we present the design and demonstration, using electrode arrays implanted in monkey dorsal premotor cortex, of a manyfold higher performance BCI than previously reported. These results indicate that a fast and accurate key selection system, capable of operating with a range of keyboard sizes, is possible (up to 6.5 bits per second, or ~15 words per minute, with 96 electrodes). The highest information throughput is achieved with unprecedentedly brief neural recordings, even as recording quality degrades over time. These performance results and their implications for system design should substantially increase the clinical viability of BCIs in humans.
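
    One plausible way to connect the two headline figures above (6.5 bits per second and roughly 15 words per minute) is to assume a keyboard of about 36 keys, so that each selection carries log2(36) ≈ 5.2 bits, and an average word length of five characters. The back-of-envelope sketch below is only an illustration; the key count and word length are assumptions not stated in the record.

        import math

        bits_per_second = 6.5     # throughput reported in the abstract
        keys_on_keyboard = 36     # assumed keyboard size (letters plus digits), not from the record
        chars_per_word = 5        # common rule of thumb for English text

        bits_per_key = math.log2(keys_on_keyboard)             # about 5.17 bits per selection
        keys_per_minute = bits_per_second / bits_per_key * 60
        words_per_minute = keys_per_minute / chars_per_word

        print(f"{bits_per_key:.2f} bits/key -> {words_per_minute:.1f} words per minute")
        # prints roughly 15 words per minute, consistent with the figure quoted above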

  15. Computer Self-Efficacy among Senior High School Teachers in Ghana and the Functionality of Demographic Variables on Their Computer Self-Efficacy

    Science.gov (United States)

    Sarfo, Frederick Kwaku; Amankwah, Francis; Konin, Daniel

    2017-01-01

    The study is aimed at investigating 1) the level of computer self-efficacy among public senior high school (SHS) teachers in Ghana and 2) the functionality of teachers' age, gender, and computer experiences on their computer self-efficacy. Four hundred and seven (407) SHS teachers were used for the study. The "Computer Self-Efficacy"…

  16. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  17. Current developments in and importance of high-performance computing in drug discovery.

    Science.gov (United States)

    Pitera, Jed W

    2009-05-01

    A number of current trends are reshaping the field of high-performance computing, including multi-core systems, accelerators, and software frameworks for large-scale, intrinsically parallel applications. These trends intersect with recent developments in computational chemistry to provide new capabilities for computer-aided drug discovery. Although this review focuses primarily on the application domains of molecular modeling and biomolecular simulation, these computing changes are relevant for other computationally intensive tasks, such as instrument data processing and chemoinformatics.

  18. Mixed-Language High-Performance Computing for Plasma Simulations

    Directory of Open Access Journals (Sweden)

    Quanming Lu

    2003-01-01

    Java is receiving increasing attention as the most popular platform for distributed computing. However, programmers are still reluctant to embrace Java as a tool for writing scientific and engineering applications due to its still noticeable performance drawbacks compared with other programming languages such as Fortran or C. In this paper, we present a hybrid Java/Fortran implementation of a parallel particle-in-cell (PIC) algorithm for plasma simulations. In our approach, the time-consuming components of this application are designed and implemented as Fortran subroutines, while less calculation-intensive components usually involved in building the user interface are written in Java. The two types of software modules have been glued together using the Java native interface (JNI). Our mixed-language PIC code was tested and its performance compared with pure Java and Fortran versions of the same algorithm on a Sun E6500 SMP system and a Linux cluster of Pentium III machines.
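
    The design point described above is to keep the time-critical particle kernels in compiled Fortran while the orchestration lives in a managed language, glued together with JNI. As a language-neutral sketch of the same split (not the authors' JNI code), the fragment below drives a hypothetical compiled kernel library, libpic_kernel.so, exposing a C-compatible push_particles routine, from Python via ctypes; the library name and signature are invented for the illustration.

        import ctypes
        import numpy as np

        # Hypothetical compiled kernel (e.g. Fortran with bind(c), or C); the library name
        # and routine signature are assumptions for this sketch, not the authors' code.
        try:
            lib = ctypes.CDLL("./libpic_kernel.so")
        except OSError:
            lib = None    # the sketch still imports cleanly if the kernel is not built

        if lib is not None:
            lib.push_particles.argtypes = [
                ctypes.POINTER(ctypes.c_double),  # particle positions
                ctypes.POINTER(ctypes.c_double),  # particle velocities
                ctypes.c_int,                     # number of particles
                ctypes.c_double,                  # time step
            ]
            lib.push_particles.restype = None

        def push(positions, velocities, dt):
            """Delegate the time-consuming loop to the compiled kernel; keep the driver high level."""
            lib.push_particles(
                positions.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
                velocities.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
                int(positions.size),
                ctypes.c_double(dt),
            )

        pos = np.zeros(100_000)
        vel = np.ones(100_000)
        if lib is not None:
            push(pos, vel, dt=1e-3)   # the less calculation-intensive driver stays in the high-level language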

  19. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  1. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  2. High performance computing and communications grand challenges program

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, J.E.; Barr, A.; Chandy, K.M.; Goddard, W.A., III; Kesselman, C.

    1994-10-01

    The so-called protein folding problem has numerous aspects; however, it is principally concerned with the de novo prediction of three-dimensional (3D) structure from the protein primary amino acid sequence, and with the kinetics of the protein folding process. Our current project focuses on the 3D structure prediction problem which has proved to be an elusive goal of molecular biology and biochemistry. The number of local energy minima is exponential in the number of amino acids in the protein. All current methods of 3D structure prediction attempt to alleviate this problem by imposing various constraints that effectively limit the volume of conformational space which must be searched. Our Grand Challenge project consists of two elements: (1) a hierarchical methodology for 3D protein structure prediction; and (2) development of a parallel computing environment, the Protein Folding Workbench, for carrying out a variety of protein structure prediction/modeling computations. During the first three years of this project, we are focusing on the use of two proteins selected from the Brookhaven Protein Data Base (PDB) of known structure to provide validation of our prediction algorithms and their software implementation, both serial and parallel. Both proteins, protein L from Peptostreptococcus magnus, and streptococcal protein G, are known to bind to IgG, and both have an alpha + beta sandwich conformation. Although both proteins bind to IgG, they do so at different sites on the immunoglobin and it is of considerable biological interest to understand structurally why this is so. 12 refs., 1 fig.

  3. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    Science.gov (United States)

    Blocksome, Michael A

    2013-07-02

    Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, an RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core.
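
    The transfer described above splits one buffer between two mechanisms: an eager memory-FIFO stream that starts from one end of the buffer while the RTS acknowledgement is still pending, and a direct put of whatever remains, issued from the other end once the acknowledgement arrives. The sketch below mimics only that bookkeeping in plain Python (no DMA engine, no network); the chunk size and helper names are invented for the illustration.

        import random

        def simulate_ack_arrival():
            return random.random() < 0.3          # stand-in for the asynchronous RTS acknowledgement

        def transfer(buffer, eager_chunk=4096):
            """Schematic two-phase transfer of a single buffer."""
            target = bytearray(len(buffer))

            # Phase 1: while the acknowledgement is outstanding, stream chunks from the
            # front of the buffer (the "memory FIFO" portion in the record).
            front = 0
            ack_received = False
            while not ack_received and front < len(buffer):
                chunk = buffer[front:front + eager_chunk]
                target[front:front + len(chunk)] = chunk
                front += len(chunk)
                ack_received = simulate_ack_arrival()

            # Phase 2: once the acknowledgement is in, put the remaining data starting from
            # the opposite end of the buffer (the "direct put" portion in the record).
            back = len(buffer)
            while back > front:
                start = max(front, back - eager_chunk)
                target[start:back] = buffer[start:back]
                back = start

            return bytes(target)

        data = bytes(range(256)) * 64
        assert transfer(data) == data             # every byte arrives exactly once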

  4. BigData and computing challenges in high energy and nuclear physics

    Science.gov (United States)

    Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.

    2017-06-01

    In this contribution we discuss various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. These needs will evolve in the future when moving from LHC to HL-LHC in ten years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success, and the inclusion of new super-computing facilities, cloud computing and volunteer computing in the future is a big challenge, which we are successfully mastering with a considerable contribution from many super-computing centres around the world, academic and commercial cloud providers. We also discuss R&D computing projects started recently at the National Research Center "Kurchatov Institute".

  5. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration

    2017-01-01

    Higgs boson physics is one of the most important and promising fields of study in modern High Energy Physics. To perform precision measurements of the Higgs boson properties, the use of fast and efficient instruments of Monte Carlo event simulation is required. Due to the increasing amount of data and to the growing complexity of the simulation software tools, the computing resources currently available for Monte Carlo simulation on the LHC GRID are not sufficient. One of the possibilities to address this shortfall of computing resources is the usage of institutes' computer clusters, commercial computing resources and supercomputers. In this paper, a brief description of the Higgs boson physics, the Monte Carlo generation and event simulation techniques are presented. A description of modern high performance computing systems and tests of their performance are also discussed. These studies have been performed on the Worldwide LHC Computing Grid and the Kurchatov Institute Data Processing Center, including Tier...

  6. Visualization of RNA secondary structures using highly parallel computers.

    Science.gov (United States)

    Nakaya, A; Taura, K; Yamamoto, K; Yonezawa, A

    1996-06-01

    Results of RNA secondary structure prediction algorithms are usually given as a set of hydrogen bonds between bases. However, we cannot know the precise structure of an RNA molecule by only knowing which bases form hydrogen bonds. One way to understand the structure of an RNA molecule is to visualize it using a planar graph so that we can easily see the geometric relations among the substructures such as stacking regions and loops. To do this, we consider bases to be particles on a plane, introduce a repulsive force and an attractive force among these particles, and determine their positions according to these forces. A naive algorithm requires O(N^2) time, but we can reduce this to O(N log N) with an approximation algorithm often used in the area of N-body simulation. Our program is written in the recently developed parallel object-oriented language 'Schematic'. The efficiency of our implementation on a parallel computer and results of visualization of secondary structures are presented using cadang-cadang coconut viroid as an example.
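
    The layout idea here is a standard force-directed scheme: treat bases as particles in the plane, attract particles joined by backbone or hydrogen-bond edges, repel every pair, and relax the positions iteratively. The numpy sketch below implements only the naive O(N^2)-per-iteration version; the Barnes-Hut-style O(N log N) approximation and the parallel 'Schematic' implementation from the record are not reproduced, and the force constants are arbitrary.

        import numpy as np

        def layout(n_bases, edges, iters=500, k_attract=0.05, k_repel=0.5, step=0.02):
            """Naive O(N^2) force-directed layout of bases on a plane."""
            rng = np.random.default_rng(0)
            pos = rng.standard_normal((n_bases, 2))

            for _ in range(iters):
                # Repulsive force between every pair of bases.
                diff = pos[:, None, :] - pos[None, :, :]        # shape (N, N, 2)
                dist2 = (diff ** 2).sum(axis=-1) + 1e-9
                np.fill_diagonal(dist2, np.inf)                 # no self-repulsion
                force = k_repel * (diff / dist2[..., None]).sum(axis=1)

                # Attractive (spring-like) force along backbone and hydrogen-bond edges.
                for i, j in edges:
                    d = pos[j] - pos[i]
                    force[i] += k_attract * d
                    force[j] -= k_attract * d

                pos += step * force
            return pos

        # Toy example: a 10-base backbone with one hydrogen bond closing a hairpin-like loop.
        backbone = [(i, i + 1) for i in range(9)]
        hydrogen_bonds = [(0, 9)]
        coords = layout(10, backbone + hydrogen_bonds)
        print(coords.round(2))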

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team has successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which was accessible, to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  8. Possibility of high performance quantum computation by superluminal evanescent photons in living systems.

    Science.gov (United States)

    Musha, Takaaki

    2009-06-01

    Penrose and Hameroff have suggested that microtubules in living systems function as quantum computers by utilizing evanescent photons. On the basis of the theorem that the evanescent photon is a superluminal particle, the possibility of high performance computation in living systems has been studied. The theoretical analysis shows that the biological brain could achieve computation with a large number of quantum bits compared with conventional processors at room temperature.

  9. Short-term effects of implemented high intensity shoulder elevation during computer work

    OpenAIRE

    Larsen, Mette K; Samani, Afshin; Madeleine, Pascal; Olsen, Henrik B; Søgaard, Karen; Holtermann, Andreas

    2009-01-01

    Background: Work-site strength training sessions have been shown to be effective in preventing and reducing neck-shoulder pain in computer workers, but are difficult to integrate into normal working routines. A solution for avoiding neck-shoulder pain during computer work may be to implement high intensity voluntary contractions during the computer work. However, it is unknown how this may influence productivity, rate of perceived exertion (RPE) as well as activity and rest of neck-shoulder muscles during c...

  10. Preliminary evaluation of ultra-high pitch computed tomography enterography

    Energy Technology Data Exchange (ETDEWEB)

    Hardie, Andrew D.; Horst, Nicole D.; Mayes, Nicholas [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston (United States)], E-mail: andrewdhardie@gmail.com

    2012-12-15

    Background. CT enterography (CTE) is a valuable tool in the management of patients with inflammatory bowel disease. Reduced imaging time, reduced motion artifacts, and decreased radiation exposure are important goals for optimizing CTE examinations. Purpose. To assess the potential impact of new CT technology (ultra-high pitch CTE) on the ability to reduce scan time and also potentially reduce radiation exposure while maintaining image quality. Material and Methods. This retrospective study compared 13 patients who underwent ultra-high pitch CTE with 25 patients who underwent routine CTE on the same CT scanner with identical radiation emission settings. Total scan time and radiation exposure were recorded for each patient. Image quality was assessed by measurement of image noise and also qualitatively by two independent observers. Results. Total scan time was significantly lower for patients who underwent ultra-high pitch CTE (2.1 s ± 0.2) than routine CTE (18.6 s ± 0.9) (P < 0.0001). The mean radiation exposure for ultra-high pitch CTE was also significantly lower (10.1 mGy ± 1.0) than routine CTE (15.8 mGy ± 4.5) (P < 0.0001). No significant difference in image noise was found between ultra-high pitch CTE (16.0 HU ± 2.5) and routine CTE (15.5 HU ± 3.7) (P > 0.74). There was also no significant difference in image quality noted by either of the two readers. Conclusion. Ultra-high pitch CTE can be performed more rapidly than standard CTE and offers the potential for radiation exposure reduction while maintaining image quality.

  11. The consequences of "Culture's consequences"

    DEFF Research Database (Denmark)

    Knudsen, Fabienne; Froholdt, Lisa Loloma

    2009-01-01

    ...but it may also have unintentional outcomes. It may lead to a deterministic view of other cultures, thereby reinforcing prejudices and underestimating other forms of differences; it risks blinding the participants to the specific context of a given communicative situation. The article opens with a critical review of the theory of Geert Hofstede, the most renowned representative of this theoretical approach. The practical consequences of using such a concept of culture are then analysed by means of a critical review of an article applying Hofstede to cross-cultural crews in seafaring. Finally, alternative views on culture are presented. The aim of the article is, rather than to promote any specific theory, to reflect on diverse perspectives of cultural sense-making in cross-cultural encounters.

  12. Development of a Computational Steering Framework for High Performance Computing Environments on Blue Gene/P Systems

    KAUST Repository

    Danani, Bob K.

    2012-07-01

    Computational steering has revolutionized the traditional workflow in high performance computing (HPC) applications. The standard workflow that consists of preparation of an application’s input, running of a simulation, and visualization of simulation results in a post-processing step is now transformed into a real-time interactive workflow that significantly reduces development and testing time. Computational steering provides the capability to direct or re-direct the progress of a simulation application at run-time. It allows modification of application-defined control parameters at run-time using various user-steering applications. In this project, we propose a computational steering framework for HPC environments that provides an innovative solution and easy-to-use platform, which allows users to connect and interact with running application(s) in real-time. This framework uses RealityGrid as the underlying steering library and adds several enhancements to the library to enable steering support for Blue Gene systems. Included in the scope of this project is the development of a scalable and efficient steering relay server that supports many-to-many connectivity between multiple steered applications and multiple steering clients. Steered applications can range from intermediate simulation and physical modeling applications to complex computational fluid dynamics (CFD) applications or advanced visualization applications. The Blue Gene supercomputer presents special challenges for remote access because the compute nodes reside on private networks. This thesis presents an implemented solution and demonstrates it on representative applications. Thorough implementation details and application enablement steps are also presented in this thesis to encourage direct usage of this framework.
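
    The relay server described above routes steering messages between many running applications and many steering clients. The fragment below is only an in-process sketch of that many-to-many routing, using plain Python queues (no RealityGrid, no sockets, no Blue Gene specifics); the class and method names are invented for the illustration.

        import queue
        from collections import defaultdict

        class Relay:
            """Minimal many-to-many message router: clients steer apps, apps report status."""

            def __init__(self):
                self.app_inboxes = {}                    # app name -> queue of steering commands
                self.subscribers = defaultdict(list)     # app name -> queues of listening clients

            def register_app(self, app_name):
                self.app_inboxes[app_name] = queue.Queue()
                return self.app_inboxes[app_name]

            def subscribe(self, app_name):
                q = queue.Queue()
                self.subscribers[app_name].append(q)
                return q

            def steer(self, app_name, parameter, value):
                """A steering client changes an application-defined control parameter."""
                self.app_inboxes[app_name].put((parameter, value))

            def report(self, app_name, status):
                """A running application publishes status to every subscribed client."""
                for q in self.subscribers[app_name]:
                    q.put(status)

        relay = Relay()
        inbox = relay.register_app("cfd_run_42")
        monitor = relay.subscribe("cfd_run_42")

        relay.steer("cfd_run_42", "time_step", 5e-4)     # client -> application
        print(inbox.get_nowait())                        # the application picks up the new parameter
        relay.report("cfd_run_42", {"iteration": 100})   # application -> all subscribed clients
        print(monitor.get_nowait())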

  13. The comparison of high and standard definition computed ...

    African Journals Online (AJOL)

    Objective: The aim was to compare coronary high-definition CT (HDCT) with standard-definition CT (SDCT) angiography with respect to radiation dose, image quality and accuracy. Material and methods: 28 patients with a history of coronary artery disease were scanned by HDCT (Discovery CT750 HD) and SDCT (Somatom Definition AS).

  14. The comparison of high and standard definition computed ...

    African Journals Online (AJOL)

    Abstract. Objective: The aim was to compare coronary high-definition CT (HDCT) with standard-definition CT (SDCT) angiography with respect to radiation dose, image quality and accuracy. Material and methods: 28 patients with a history of coronary artery disease were scanned by HDCT (Discovery CT750 HD) and SDCT (Somatom ...

  15. Computational analysis of high-throughput flow cytometry data.

    Science.gov (United States)

    Robinson, J Paul; Rajwa, Bartek; Patsekin, Valery; Davisson, Vincent Jo

    2012-08-01

    Flow cytometry has been around for over 40 years, but only recently has the opportunity arisen to move into the high-throughput domain. The technology is now available and is highly competitive with imaging tools under the right conditions. Flow cytometry has, however, been a technology that has focused on its unique ability to study single cells, and appropriate analytical tools are readily available to handle this traditional role of the technology. Expansion of flow cytometry to a high-throughput (HT) and high-content technology requires both advances in hardware and analytical tools. The historical perspective of flow cytometry operation, how the field has changed, and what the key changes have been are discussed. The authors provide a background and compelling arguments for moving toward HT flow, where there are many innovative opportunities. With alternative approaches now available for flow cytometry, there will be a considerable number of new applications. These opportunities show strong capability for drug screening and functional studies with cells in suspension. There is no doubt that HT flow is a rich technology awaiting acceptance by the pharmaceutical community. It can provide a powerful phenotypic analytical toolset that has the capacity to change many current approaches to HT screening. The previous restrictions on the technology, based on its reduced capacity for sample throughput, are no longer a major issue. Overcoming this barrier has transformed a mature technology into one that can focus on systems biology questions not previously considered possible.

  16. High-throughput screening, predictive modeling and computational embryology - Abstract

    Science.gov (United States)

    High-throughput screening (HTS) studies are providing a rich source of data that can be applied to chemical profiling to address sensitivity and specificity of molecular targets, biological pathways, cellular and developmental processes. EPA’s ToxCast project is testing 960 uniq...

  17. High-throughput screening, predictive modeling and computational embryology

    Science.gov (United States)

    High-throughput screening (HTS) studies are providing a rich source of data that can be applied to profile thousands of chemical compounds for biological activity and potential toxicity. EPA’s ToxCast™ project, and the broader Tox21 consortium, in addition to projects worldwide,...

  18. Computational analysis of high-throughput flow cytometry data

    Science.gov (United States)

    Robinson, J Paul; Rajwa, Bartek; Patsekin, Valery; Davisson, Vincent Jo

    2015-01-01

    Introduction Flow cytometry has been around for over 40 years, but only recently has the opportunity arisen to move into the high-throughput domain. The technology is now available and is highly competitive with imaging tools under the right conditions. Flow cytometry has, however, been a technology that has focused on its unique ability to study single cells, and appropriate analytical tools are readily available to handle this traditional role of the technology. Areas covered Expansion of flow cytometry to a high-throughput (HT) and high-content technology requires both advances in hardware and analytical tools. The historical perspective of flow cytometry operation, how the field has changed, and what the key changes have been are discussed. The authors provide a background and compelling arguments for moving toward HT flow, where there are many innovative opportunities. With alternative approaches now available for flow cytometry, there will be a considerable number of new applications. These opportunities show strong capability for drug screening and functional studies with cells in suspension. Expert opinion There is no doubt that HT flow is a rich technology awaiting acceptance by the pharmaceutical community. It can provide a powerful phenotypic analytical toolset that has the capacity to change many current approaches to HT screening. The previous restrictions on the technology, based on its reduced capacity for sample throughput, are no longer a major issue. Overcoming this barrier has transformed a mature technology into one that can focus on systems biology questions not previously considered possible. PMID:22708834

  19. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique

    Science.gov (United States)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin

    2018-03-01

    Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including depth perception. However, it often takes a long computation time to produce traditional computer-generated holograms (CGHs), even without complex and photorealistic rendering. The backward ray-tracing technique is able to render photorealistic, high-quality images, and its high degree of parallelism noticeably reduces the computation time. Here, a high-efficiency photorealistic computer-generated hologram method is presented based on the backward ray-tracing technique. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.
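
    For context, the 'traditional point cloud CGH' used as the baseline above accumulates, at every hologram pixel, a spherical-wave contribution from every object point, so its cost grows with the product of object points and hologram pixels. The numpy sketch below implements only that baseline accumulation; the backward ray-tracing pipeline of the record is not reproduced, and the wavelength, pixel pitch, and toy object are arbitrary.

        import numpy as np

        wavelength = 532e-9                 # arbitrary illustration values
        k = 2 * np.pi / wavelength
        pitch = 8e-6                        # hologram pixel pitch
        nx = ny = 256                       # hologram resolution

        # Hologram pixel coordinates on the z = 0 plane.
        xs = (np.arange(nx) - nx / 2) * pitch
        ys = (np.arange(ny) - ny / 2) * pitch
        X, Y = np.meshgrid(xs, ys)

        # A toy point cloud: (x, y, z, amplitude) for each object point.
        points = np.array([
            [0.0,    0.0,   0.10, 1.0],
            [2e-4,   1e-4,  0.12, 0.8],
            [-1e-4, -2e-4,  0.11, 0.6],
        ])

        field = np.zeros((ny, nx), dtype=complex)
        for x0, y0, z0, amp in points:
            r = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + z0 ** 2)
            field += amp * np.exp(1j * k * r) / r      # spherical wave from one object point

        phase_hologram = np.angle(field)               # kinoform: keep the phase only
        print(phase_hologram.shape, phase_hologram.min(), phase_hologram.max())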

  20. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  1. Designing high power targets with computational fluid dynamics (CFD)

    Energy Technology Data Exchange (ETDEWEB)

    Covrig, Silviu D. [JLAB]

    2013-11-01

    High power liquid hydrogen (LH2) targets, up to 850 W, have been widely used at Jefferson Lab for the 6 GeV physics program. The typical luminosity loss of a 20 cm long LH2 target was 20% for a beam current of 100 μA rastered on a square of side 2 mm on the target. The 35 cm long, 2500 W LH2 target for the Qweak experiment had a luminosity loss of 0.8% at 180 μA beam rastered on a square of side 4 mm at the target. The Qweak target was the highest power liquid hydrogen target in the world and with the lowest noise figure. The Qweak target was the first one designed with CFD at Jefferson Lab. A CFD facility is being established at Jefferson Lab to design, build and test a new generation of low noise high power targets.

  2. Designing high power targets with computational fluid dynamics (CFD)

    Energy Technology Data Exchange (ETDEWEB)

    Covrig, S. D. [Thomas Jefferson National Laboratory, Newport News, VA 23606 (United States)

    2013-11-07

    High power liquid hydrogen (LH2) targets, up to 850 W, have been widely used at Jefferson Lab for the 6 GeV physics program. The typical luminosity loss of a 20 cm long LH2 target was 20% for a beam current of 100 μA rastered on a square of side 2 mm on the target. The 35 cm long, 2500 W LH2 target for the Qweak experiment had a luminosity loss of 0.8% at 180 μA beam rastered on a square of side 4 mm at the target. The Qweak target was the highest power liquid hydrogen target in the world and with the lowest noise figure. The Qweak target was the first one designed with CFD at Jefferson Lab. A CFD facility is being established at Jefferson Lab to design, build and test a new generation of low noise high power targets.

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  4. Process for selecting NEAMS applications for access to Idaho National Laboratory high performance computing resources

    Energy Technology Data Exchange (ETDEWEB)

    Michael Pernice

    2010-09-01

    INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.

  5. Comparing Computer Game and Traditional Lecture Using Experience Ratings from High and Low Achieving Students

    Science.gov (United States)

    Grimley, Michael; Green, Richard; Nilsen, Trond; Thompson, David

    2012-01-01

    Computer games are purported to be effective instructional tools that enhance motivation and improve engagement. The aim of this study was to investigate how tertiary student experiences change when instruction was computer game based compared to lecture based, and whether experiences differed between high and low achieving students. Participants…

  6. Cognitive Correlates of Performance in Algorithms in a Computer Science Course for High School

    Science.gov (United States)

    Avancena, Aimee Theresa; Nishihara, Akinori

    2014-01-01

    Computer science for high school faces many challenging issues. One of these is whether the students possess the appropriate cognitive ability for learning the fundamentals of computer science. Online tests were created based on known cognitive factors and fundamental algorithms and were implemented among the second grade students in the…

  7. The DOE Program in HPCC: High-Performance Computing and Communications.

    Science.gov (United States)

    Department of Energy, Washington, DC. Office of Energy Research.

    This document reports to Congress on the progress that the Department of Energy has made in 1992 toward achieving the goals of the High Performance Computing and Communications (HPCC) program. Its second purpose is to provide a picture of the many programs administered by the Office of Scientific Computing under the auspices of the HPCC program.…

  8. Computational Fluency Performance Profile of High School Students with Mathematics Disabilities

    Science.gov (United States)

    Calhoon, Mary Beth; Emerson, Robert Wall; Flores, Margaret; Houchins, David E.

    2007-01-01

    The purpose of this descriptive study was to develop a computational fluency performance profile of 224 high school (Grades 9-12) students with mathematics disabilities (MD). Computational fluency performance was examined by grade-level expectancy (Grades 2-6) and skill area (whole numbers: addition, subtraction, multiplication, division;…

  9. A Survey of High-Quality Computational Libraries and their Impact in Science and Engineering Applications

    Energy Technology Data Exchange (ETDEWEB)

    Drummond, L.A.; Hernandez, V.; Marques, O.; Roman, J.E.; Vidal, V.

    2004-09-20

    Recently, a number of important scientific and engineering problems have been successfully studied and solved by means of computational modeling and simulation. Many of these computational models and simulations benefited from the use of available software tools and libraries to achieve high performance and portability. In this article, we present a reference matrix of the performance of robust, reliable and widely used tools mapped to scientific and engineering applications that use them. We aim at regularly maintaining and disseminating this matrix to the computational science community. This matrix will contain information on state-of-the-art computational tools, their applications and their use.

  10. 14th annual Results and Review Workshop on High Performance Computing in Science and Engineering

    CERN Document Server

    Nagel, Wolfgang E.; Resch, Michael M. (eds.); Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2011: High Performance Computing in Science and Engineering '11

    2012-01-01

    This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2011. The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and chemistry, to computer science, with a special emphasis on industrially relevant applications. Presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of various architectures. As HLRS

  11. Client/Server data serving for high performance computing

    Science.gov (United States)

    Wood, Chris

    1994-01-01

    This paper will attempt to examine the industry requirements for shared network data storage and sustained high speed (10's to 100's to thousands of megabytes per second) network data serving via the NFS and FTP protocol suite. It will discuss the current structural and architectural impediments to achieving these sorts of data rates cost effectively today on many general purpose servers, and will describe an architecture and resulting product family that addresses these problems. The sustained performance levels that were achieved in the lab will be shown, together with a discussion of early customer experiences utilizing both the HIPPI-IP and ATM OC3-IP network interfaces.

  12. High performance computing in power and energy systems

    CERN Document Server

    Khaitan, Siddhartha Kumar

    2012-01-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations and restricting greenhouse gas emissions is one of the most daunting ones that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We would need to develop capabilities to handle large volumes of data generated by power system components like PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, casc

  13. A Review of Lightweight Thread Approaches for High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Castello, Adrian; Pena, Antonio J.; Seo, Sangmin; Mayo, Rafael; Balaji, Pavan; Quintana-Orti, Enrique S.

    2016-09-12

    High-level, directive-based solutions are becoming the programming models (PMs) of the multi/many-core architectures. Several solutions relying on operating system (OS) threads work perfectly with a moderate number of cores. However, exascale systems will spawn hundreds of thousands of threads in order to exploit their massive parallel architectures, and thus conventional OS threads are too heavy for that purpose. Several lightweight thread (LWT) libraries have recently appeared, offering lighter mechanisms to tackle massive concurrency. In order to examine the suitability of LWTs in high-level runtimes, we develop a set of microbenchmarks consisting of commonly found patterns in current parallel codes. Moreover, we study the semantics offered by some LWT libraries in order to expose the similarities between different LWT application programming interfaces. This study reveals that a reduced set of LWT functions can be sufficient to cover the common parallel code patterns and that those LWT libraries perform better than OS threads-based solutions in cases where task and nested parallelism are becoming more popular with new architectures.
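
    The motivation above is that creating one OS thread per fine-grained task becomes prohibitively expensive at extreme concurrency, which is exactly what lightweight-thread runtimes avoid. The toy timing below contrasts spawning a fresh OS thread per trivial task with reusing a small pool of workers; it is a Python analogy for the overhead being discussed, not one of the paper's microbenchmarks, and the absolute numbers depend on the machine.

        import threading
        import time
        from concurrent.futures import ThreadPoolExecutor

        N_TASKS = 2000

        def tiny_task():
            pass          # deliberately trivial: we want to measure scheduling overhead only

        # One fresh OS thread per task (the heavyweight pattern).
        t0 = time.perf_counter()
        threads = [threading.Thread(target=tiny_task) for _ in range(N_TASKS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        per_thread = time.perf_counter() - t0

        # The same tasks submitted to a small reusable pool (closer in spirit to LWT runtimes).
        t0 = time.perf_counter()
        with ThreadPoolExecutor(max_workers=8) as pool:
            list(pool.map(lambda _: tiny_task(), range(N_TASKS)))
        pooled = time.perf_counter() - t0

        print(f"one thread per task: {per_thread:.3f} s, pooled workers: {pooled:.3f} s")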

  14. and consequences

    Directory of Open Access Journals (Sweden)

    P. Athanasopoulou

    2011-01-01

    (a) Purpose: The purpose of this research is to identify the types of CSR initiatives employed by sports organisations, their antecedents, and their consequences for the company and society. (b) Design/methodology/approach: This study is exploratory in nature. Two detailed case studies were conducted involving the football team and the basketball team of one professional, premier league club in Greece and their CSR initiatives. Both teams have the same name; they belong to one of the most popular clubs in Greece with a large fan population; they have both competed in international competitions (UEFA’s Champions League; Final Four of the European Tournament); and they have realised many CSR initiatives in the past. The case studies involved in-depth, personal interviews of the managers responsible for CSR in each team. Case study data was triangulated with documentation and a search of published material concerning CSR actions. Data was analysed with content analysis. (c) Findings: Both teams investigated have undertaken various CSR activities in the last 5 years, the football team significantly more than the basketball team. Major factors that affect CSR activity include pressure from leagues, sponsors, the local community, and global organisations; an orientation towards fulfilling their duty to society; and team CSR strategy. Major benefits from CSR include relief of vulnerable groups and philanthropy, as well as a better reputation for the firm, an increase in fan base, and finding sponsors more easily due to the social profile of the team. However, those benefits are not measured in any way, although both teams observe increases in tickets sold, web site traffic and TV viewing statistics after CSR activities. Finally, promotion of CSR is mainly done through web sites, press releases, newspapers, and word-of-mouth communications. (d) Research limitations/implications: This study involves only two case studies and has limited generalisability. Future research can extend the

  15. FY 1996 Blue Book: High Performance Computing and Communications: Foundations for America`s Information Future

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of...

  16. High speed television camera system processes photographic film data for digital computer analysis

    Science.gov (United States)

    Habbal, N. A.

    1970-01-01

    Data acquisition system translates and processes graphical information recorded on high speed photographic film. It automatically scans the film and stores the information with a minimal use of the computer memory.

  17. Workshop on programming languages for high performance computing (HPCWPL): final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2007-05-01

    This report summarizes the deliberations and conclusions of the Workshop on Programming Languages for High Performance Computing (HPCWPL) held at the Sandia CSRI facility in Albuquerque, NM on December 12-13, 2006.

  18. Comprehensive Simulation Lifecycle Management for High Performance Computing Modeling and Simulation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — There are significant logistical barriers to entry-level high performance computing (HPC) modeling and simulation (M&S) users. Performing large-scale, massively...

  19. FY 1997 Blue Book: High Performance Computing and Communications: Advancing the Frontiers of Information Technology

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of...

  20. Optical high-performance computing: introduction to the JOSA A and Applied Optics feature.

    Science.gov (United States)

    Caulfield, H John; Dolev, Shlomi; Green, William M J

    2009-08-01

    The feature issues in both Applied Optics and the Journal of the Optical Society of America A focus on topics of immediate relevance to the community working in the area of optical high-performance computing.

  1. High-speed linear optics quantum computing using active feed-forward.

    Science.gov (United States)

    Prevedel, Robert; Walther, Philip; Tiefenbacher, Felix; Böhi, Pascal; Kaltenbaek, Rainer; Jennewein, Thomas; Zeilinger, Anton

    2007-01-04

    As information carriers in quantum computing, photonic qubits have the advantage of undergoing negligible decoherence. However, the absence of any significant photon-photon interaction is problematic for the realization of non-trivial two-qubit gates. One solution is to introduce an effective nonlinearity by measurements resulting in probabilistic gate operations. In one-way quantum computation, the random quantum measurement error can be overcome by applying a feed-forward technique, such that the future measurement basis depends on earlier measurement results. This technique is crucial for achieving deterministic quantum computation once a cluster state (the highly entangled multiparticle state on which one-way quantum computation is based) is prepared. Here we realize a concatenated scheme of measurement and active feed-forward in a one-way quantum computing experiment. We demonstrate that, for a perfect cluster state and no photon loss, our quantum computation scheme would operate with good fidelity and that our feed-forward components function with very high speed and low error for detected photons. With present technology, the individual computational step (in our case the individual feed-forward cycle) can be operated in less than 150 ns using electro-optical modulators. This is an important result for the future development of one-way quantum computers, whose large-scale implementation will depend on advances in the production and detection of the required highly entangled cluster states.
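
    The feed-forward step described above can be made concrete with the smallest measurement-based example: one input qubit entangled by a controlled-Z with a |+> ancilla, a measurement of the input in a rotated basis, and a Pauli-X correction on the ancilla chosen according to the random outcome. The numpy simulation below checks that, with the correction applied, the surviving qubit always carries H·Rz(alpha)|psi>; it is a textbook-style sketch, not the photonic cluster-state experiment reported in the record.

        import numpy as np

        rng = np.random.default_rng(1)

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        X = np.array([[0, 1], [1, 0]])
        def Rz(a):
            return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

        # Random input state |psi> on qubit 0, ancilla |+> on qubit 1, entangled by a CZ gate.
        psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
        psi /= np.linalg.norm(psi)
        alpha = 0.7
        plus = np.array([1, 1]) / np.sqrt(2)
        CZ = np.diag([1, 1, 1, -1])
        state = CZ @ np.kron(psi, plus)

        def project(basis_vec, two_qubit_state):
            """Project qubit 0 onto basis_vec; return (probability, normalized state of qubit 1)."""
            M = np.kron(basis_vec.conj(), np.eye(2))
            out = M @ two_qubit_state
            p = np.linalg.norm(out) ** 2
            return p, out / np.sqrt(p)

        # Measurement basis (|0> +/- e^{-i alpha}|1>)/sqrt(2) with outcomes m = 0, 1.
        b0 = np.array([1, np.exp(-1j * alpha)]) / np.sqrt(2)
        b1 = np.array([1, -np.exp(-1j * alpha)]) / np.sqrt(2)
        p0, out0 = project(b0, state)
        p1, out1 = project(b1, state)
        m = 0 if rng.random() < p0 else 1          # random outcome with the Born-rule probability
        out = out0 if m == 0 else out1

        # Feed-forward: the later correction depends on the earlier measurement outcome.
        if m == 1:
            out = X @ out

        target = H @ Rz(alpha) @ psi               # expected logical output, up to a global phase
        fidelity = abs(np.vdot(target, out)) ** 2
        print(f"outcome m={m}, fidelity = {fidelity:.6f}")   # always ~1.000000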

  2. Elementary EFL Teachers' Computer Phobia and Computer Self-Efficacy in Taiwan

    Science.gov (United States)

    Chen, Kate Tzuching

    2012-01-01

    The advent and application of computer and information technology have increased the overall success of EFL teaching; however, such success is hard to assess, and teachers prone to computer avoidance face negative consequences. Two major obstacles are high computer phobia and low computer self-efficacy. However, little research has been carried out…

  3. The Consequences of Easy Credit Policy, High Gearing, and Firms’ Profitability in Pakistan’s Textile Sector: A Panel Data Analysis

    OpenAIRE

    Ijaz Hussain

    2012-01-01

    This study uses panel data on 75 textile firms for the period 2000–09 to examine the consequences of an easy credit policy followed by high gearing, increased financing costs, and other determinants of corporate profitability. Five out of nine explanatory variables—including gearing, financing costs, inflation, tax provisions, and the industry’s capacity utilization ratio—have a negative impact, while the remaining four variables—working capital management, asset turnover, exports, competitiv...
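
    As an illustration of the kind of estimation this record describes (firm-level panel data, with profitability regressed on gearing, financing costs, and other determinants), the sketch below fits a fixed-effects regression with statsmodels' formula interface. The column names and the generated data are placeholders, not the study's dataset, and the study's exact specification is not reproduced here.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Placeholder panel: 75 firms observed 2000-2009 (names and values are illustrative only).
        rng = np.random.default_rng(0)
        firms, years = 75, range(2000, 2010)
        df = pd.DataFrame(
            [(f, y) for f in range(firms) for y in years], columns=["firm", "year"]
        )
        df["gearing"] = rng.uniform(0.1, 0.9, len(df))
        df["financing_cost"] = rng.uniform(0.02, 0.15, len(df))
        df["asset_turnover"] = rng.uniform(0.5, 2.0, len(df))
        df["profitability"] = (
            0.2 - 0.15 * df["gearing"] - 0.3 * df["financing_cost"]
            + 0.05 * df["asset_turnover"] + rng.normal(0, 0.02, len(df))
        )

        # Firm and year fixed effects via dummy variables (one common panel specification).
        model = smf.ols(
            "profitability ~ gearing + financing_cost + asset_turnover + C(firm) + C(year)",
            data=df,
        ).fit()
        print(model.params[["gearing", "financing_cost", "asset_turnover"]])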

  4. Introduction to massively-parallel computing in high-energy physics

    CERN Document Server

    AUTHOR|(CDS)2083520

    1993-01-01

    Ever since computers were first used for scientific and numerical work, there has existed an "arms race" between the technical development of faster computing hardware, and the desires of scientists to solve larger problems in shorter time-scales. However, the vast leaps in processor performance achieved through advances in semi-conductor science have reached a hiatus as the technology comes up against the physical limits of the speed of light and quantum effects. This has led all high performance computer manufacturers to turn towards a parallel architecture for their new machines. In these lectures we will introduce the history and concepts behind parallel computing, and review the various parallel architectures and software environments currently available. We will then introduce programming methodologies that allow efficient exploitation of parallel machines, and present case studies of the parallelization of typical High Energy Physics codes for the two main classes of parallel computing architecture (S...

  5. High performance computing in science and engineering Garching/Munich 2016

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Siegfried; Bode, Arndt; Bruechle, Helmut; Brehm, Matthias (eds.)

    2016-11-01

    Computer simulations are the well-established third pillar of natural sciences along with theory and experimentation. Particularly high performance computing is growing fast and constantly demands more and more powerful machines. To keep pace with this development, in spring 2015, the Leibniz Supercomputing Centre installed the high performance computing system SuperMUC Phase 2, only three years after the inauguration of its sibling SuperMUC Phase 1. Thereby, the compute capabilities were more than doubled. This book covers the time-frame June 2014 until June 2016. Readers will find many examples of outstanding research in the more than 130 projects that are covered in this book, with each one of these projects using at least 4 million core-hours on SuperMUC. The largest scientific communities using SuperMUC in the last two years were computational fluid dynamics simulations, chemistry and material sciences, astrophysics, and life sciences.

  6. Linear static structural and vibration analysis on high-performance computers

    Science.gov (United States)

    Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.

    1993-01-01

    Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively-parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e. models for High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
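
    As a rough illustration of the final task named in this abstract (calculation of eigenvalues and eigenvectors for vibration analysis), the short Python sketch below solves a small generalized eigenproblem K x = lambda M x with SciPy on a single CPU. The matrices, sizes, and library calls are illustrative assumptions for this summary only and are not the paper's SHPC implementation.

      # Serial illustration (not the paper's SHPC code) of the vibration-analysis
      # step: solve K x = lambda M x for the natural frequencies of a structure.
      import numpy as np
      from scipy.linalg import eigh

      n = 50                                  # illustrative model size (assumed)
      rng = np.random.default_rng(0)

      # Stand-in symmetric positive-definite stiffness and mass matrices.
      A = rng.standard_normal((n, n))
      K = A @ A.T + n * np.eye(n)             # "stiffness" matrix
      M = np.diag(rng.uniform(1.0, 2.0, n))   # lumped (diagonal) "mass" matrix

      # Generalized symmetric eigenproblem; eigenvalues are squared angular frequencies.
      eigvals, _ = eigh(K, M)
      frequencies_hz = np.sqrt(eigvals) / (2.0 * np.pi)
      print("lowest natural frequencies [Hz]:", frequencies_hz[:5])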

  7. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  10. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  11. A High-Speed KDL-RAM File System for Parallel Computers

    Science.gov (United States)

    1990-06-22

    By C. Sverace, T. J. Rosenau, and S. Pramanik. A High-Speed KDL-RAM File System for Parallel Computers. 1.0 Introduction: A multiprocessor, main-memory ... The first problem with high-speed reading and writing in shared-memory parallel computers is the memory-access bottleneck caused because several

  12. High-Performance Computing for the Electromagnetic Modeling and Simulation of Interconnects

    Science.gov (United States)

    Schutt-Aine, Jose E.

    1996-01-01

    The electromagnetic modeling of packages and interconnects plays a very important role in the design of high-speed digital circuits, and is most efficiently performed by using computer-aided design algorithms. In recent years, packaging has become a critical area in the design of high-speed communication systems and fast computers, and the importance of the software support for their development has increased accordingly. Throughout this project, our efforts have focused on the development of modeling and simulation techniques and algorithms that permit the fast computation of the electrical parameters of interconnects and the efficient simulation of their electrical performance.

  13. Addressing capability computing challenges of high-resolution global climate modelling at the Oak Ridge Leadership Computing Facility

    Science.gov (United States)

    Anantharaj, Valentine; Norman, Matthew; Evans, Katherine; Taylor, Mark; Worley, Patrick; Hack, James; Mayer, Benjamin

    2014-05-01

    During 2013, high-resolution climate model simulations accounted for over 100 million "core hours" using Titan at the Oak Ridge Leadership Computing Facility (OLCF). The suite of climate modeling experiments, primarily using the Community Earth System Model (CESM) at nearly 0.25 degree horizontal resolution, generated over a petabyte of data and nearly 100,000 files, ranging in sizes from 20 MB to over 100 GB. Effective utilization of leadership class resources requires careful planning and preparation. The application software, such as CESM, needs to be ported, optimized and benchmarked for the target platform in order to meet the computational readiness requirements. The model configuration needs to be "tuned and balanced" for the experiments. This can be a complicated and resource-intensive process, especially for high-resolution configurations using complex physics. The volume of I/O also increases with resolution, and new strategies may be required to manage I/O, especially for large checkpoint and restart files that may require more frequent output for resiliency. It is also essential to monitor the application performance during the course of the simulation exercises. Finally, the large volume of data needs to be analyzed to derive the scientific results, and appropriate data and information delivered to the stakeholders. Titan is currently the largest supercomputer available for open science. The computational resources, in terms of "titan core hours", are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs, both sponsored by the U.S. Department of Energy (DOE) Office of Science. Titan is a Cray XK7 system capable of a theoretical peak performance of over 27 PFlop/s; it consists of 18,688 compute nodes, each with an NVIDIA Kepler K20 GPU and a 16-core AMD Opteron CPU, for a total of 299,008 Opteron cores and 18,688 GPUs offering a cumulative 560

  14. Low-Probability High-Consequence (LPHC) Failure Events in Geologic Carbon Sequestration Pipelines and Wells: Framework for LPHC Risk Assessment Incorporating Spatial Variability of Risk

    Energy Technology Data Exchange (ETDEWEB)

    Oldenburg, Curtis M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Budnitz, Robert J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-08-31

    If Carbon dioxide Capture and Storage (CCS) is to be effective in mitigating climate change, it will need to be carried out on a very large scale. This will involve many thousands of miles of dedicated high-pressure pipelines in order to transport many millions of tonnes of CO2 annually, with the CO2 delivered to many thousands of wells that will inject the CO2 underground. The new CCS infrastructure could rival in size the current U.S. upstream natural gas pipeline and well infrastructure. This new infrastructure entails hazards for life, health, animals, the environment, and natural resources. Pipelines are known to rupture due to corrosion, from external forces such as impacts by vehicles or digging equipment, by defects in construction, or from the failure of valves and seals. Similarly, wells are vulnerable to catastrophic failure due to corrosion, cement degradation, or operational mistakes. While most accidents involving pipelines and wells will be minor, there is the inevitable possibility of accidents with very high consequences, especially to public health. The most important consequence of concern is CO2 release to the environment in concentrations sufficient to cause death by asphyxiation to nearby populations. Such accidents are thought to be very unlikely, but of course they cannot be excluded, even if major engineering effort is devoted (as it will be) to keeping their probability low and their consequences minimized. This project has developed a methodology for analyzing the risks of these rare but high-consequence accidents, using a step-by-step probabilistic methodology. A key difference between risks for pipelines and wells is that the former are spatially distributed along the pipe whereas the latter are confined to the vicinity of the well. Otherwise, the methodology we develop for risk assessment of pipeline and well failures is similar and provides an analysis both of the annual probabilities of

  15. DOE Greenbook - Needs and Directions in High-Performance Computing for the Office of Science

    Energy Technology Data Exchange (ETDEWEB)

    Rotman, D; Harding, P

    2002-04-01

    researchers. (1) High-Performance Computing Technology; (2) Advanced Software Technology and Algorithms; (3) Energy Sciences Network; and (4) Basic Research and Human Resources. In addition to the availability from the vendor community, these components determine the implementation and direction of the development of the supercomputing resources for the OS community. In this document we will identify scientific and computational needs from across the five Office of Science organizations: High Energy and Nuclear Physics, Basic Energy Sciences, Fusion Energy Science, Biological and Environmental Research, and Advanced Scientific Computing Research. We will also delineate the current suite of NERSC computational and human resources. Finally, we will provide a set of recommendations that will guide the utilization of current and future computational resources at the DOE NERSC.

  16. Developing a High Performance Software Library with MPI and CUDA for Matrix Computations

    Directory of Open Access Journals (Sweden)

    Bogdan Oancea

    2014-04-01

    Full Text Available Nowadays, the paradigm of parallel computing is changing. CUDA is now a popular programming model for general purpose computations on GPUs and a great number of applications have been ported to CUDA, obtaining speedups of orders of magnitude compared to optimized CPU implementations. Hybrid approaches that combine the message passing model with the shared memory model for parallel computing are a solution for very large applications. We considered a heterogeneous cluster that combines CPU and GPU computations using MPI and CUDA for developing a high performance linear algebra library. Our library deals with large linear system solvers because they are a common problem in the fields of science and engineering. Direct methods for computing the solution of such systems can be very expensive due to high memory requirements and computational cost. An efficient alternative is iterative methods, which compute only an approximation of the solution. In this paper we present an implementation of a library that uses a hybrid MPI/CUDA model of computation, implementing both direct and iterative linear system solvers. Our library implements LU and Cholesky factorization based solvers and some of the non-stationary iterative methods using the MPI/CUDA combination. We compared the performance of our MPI/CUDA implementation with classic programs written to be run on a single CPU.
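
    As a concrete illustration of the "non-stationary iterative methods" mentioned in this abstract, the sketch below implements a plain conjugate-gradient solver in Python/NumPy. It is a serial reference version only; the library described above distributes the same kind of work over MPI ranks and offloads kernels to CUDA GPUs, and none of the names below come from that library.

      # Serial NumPy sketch of one non-stationary iterative solver (conjugate
      # gradient) for A x = b with A symmetric positive definite.
      import numpy as np

      def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
          x = np.zeros_like(b)
          r = b - A @ x                 # residual
          p = r.copy()                  # search direction
          rs_old = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs_old / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs_old) * p
              rs_old = rs_new
          return x

      # Small self-test on a random SPD system.
      n = 200
      rng = np.random.default_rng(1)
      A = rng.standard_normal((n, n))
      A = A @ A.T + n * np.eye(n)
      b = rng.standard_normal(n)
      x = conjugate_gradient(A, b)
      print("residual norm:", np.linalg.norm(A @ x - b))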

  17. Can everyone become highly intelligent? Cultural differences in and societal consequences of beliefs about the universal potential for intelligence.

    Science.gov (United States)

    Rattan, Aneeta; Savani, Krishna; Naidu, N V R; Dweck, Carol S

    2012-11-01

    We identify a novel dimension of people's beliefs about intelligence: beliefs about the potential to become highly intelligent. Studies 1-3 found that in U.S. American contexts, people tend to believe that only some people have the potential to become highly intelligent. In contrast, in South Asian Indian contexts, people tend to believe that most people have the potential to become highly intelligent. To examine the implications of these beliefs, Studies 4-6 measured and manipulated Americans' beliefs about the potential for intelligence and found that the belief that everyone can become highly intelligent predicted increased support for policies that distribute resources more equally across advantaged and disadvantaged social groups. These findings suggest that the belief that only some people have the potential to become highly intelligent is a culturally shaped belief, and one that can lead people to oppose policies aimed at redressing social inequality. (c) 2012 APA, all rights reserved.

  18. Girls in computer science: A female only introduction class in high school

    Science.gov (United States)

    Drobnis, Ann W.

    This study examined the impact of an all girls' classroom environment in a high school introductory computer science class on the student's attitudes towards computer science and their thoughts on future involvement with computer science. It was determined that an all girls' introductory class could impact the declining female enrollment and female students' efficacy towards computer science. This research was conducted in a summer school program through a regional magnet school for science and technology which these students attend during the school year. Three different groupings of students were examined for the research: female students in an all girls' class, female students in mixed-gender classes and male students in mixed-gender classes. A survey, Attitudes about Computers and Computer Science (ACCS), was designed to obtain an understanding of the students' thoughts, preconceptions, attitude, knowledge of computer science, and future intentions around computer science, both in education and career. Students in all three groups were administered the ACCS prior to taking the class and upon completion of the class. In addition, students in the all girls' class wrote in a journal throughout the course, and some of those students were also interviewed upon completion of the course. The data was analyzed using quantitative and qualitative techniques. While there were no major differences found in the quantitative data, it was determined that girls in the all girls' class were truly excited by what they had learned and were more open to the idea of computer science being a part of their future.

  19. High-performance computing on GPUs for resistivity logging of oil and gas wells

    Science.gov (United States)

    Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.

    2017-10-01

    We developed and implemented into software an algorithm for high-performance simulation of electrical logs from oil and gas wells using high-performance heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving a system of linear algebraic equations (SLAE). Software implementations of the algorithm used the NVIDIA CUDA technology and computing libraries are made, allowing us to perform decomposition of SLAE and find its solution on central processor unit (CPU) and graphics processor unit (GPU). The calculation time is analyzed depending on the matrix size and number of its non-zero elements. We estimated the computing speed on CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
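
    The linear-algebra step described above (Cholesky decomposition of the finite-element system) can be sketched on the CPU with SciPy as shown below. This is only a stand-in for the paper's CUDA implementation; the matrix is a random SPD placeholder rather than an actual finite-element discretization.

      # CPU-side sketch of the SLAE step: factor the system matrix by Cholesky
      # decomposition and solve by forward/backward substitution.
      import numpy as np
      from scipy.linalg import cho_factor, cho_solve

      n = 500                              # placeholder system size
      rng = np.random.default_rng(42)
      A = rng.standard_normal((n, n))
      A = A @ A.T + n * np.eye(n)          # SPD stand-in for the FEM matrix
      b = rng.standard_normal(n)           # right-hand side (source term)

      c, low = cho_factor(A)               # A = L L^T factorization
      x = cho_solve((c, low), b)           # triangular solves
      print("max residual:", np.max(np.abs(A @ x - b)))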

  20. MOTIVATION AND CONSEQUENCE OF INDIVIDUAL’S INVOLVEMENT IN SOCIAL NETWORK SITES: A STUDY OF SOCIAL COMPUTING OF INTER COLLECTIVIST-INDIVIDUALIST CULTURAL VALUE

    OpenAIRE

    Abdillah, Willy; HM, Jogiyanto; Handoko, Hani

    2015-01-01

    This research aims to examine the empirical model of social computing. The research model is developed upon social influence factors, the technology acceptance model, psycho-social wellbeing, and cultural value. The research design employed an online survey questionnaire. Data from 433 samples were analyzed using the Partial Least Squares (PLS) technique. Results suggest that the proposed model has met the goodness-of-fit criteria and indicate that Identification and Compliance are the motivation factors of de...

  1. Proximity to a high traffic road: glucocorticoid and life history consequences for nestling white-crowned sparrows.

    Science.gov (United States)

    Crino, O L; Van Oorschot, B Klaassen; Johnson, E E; Malisch, J L; Breuner, C W

    2011-09-01

    Roads have been associated with decreased reproductive success and biodiversity in avian communities and increased physiological stress in adult birds. Alternatively, roads may also increase food availability and reduce predator pressure. Previous studies have focused on adult birds, but nestlings may also be susceptible to the detrimental impacts of roads. We examined the effects of proximity to a road on nestling glucocorticoid activity and growth in the mountain white-crowned sparrow (Zonotrichia leucophrys oriantha). Additionally, we examined several possible indirect factors that may influence nestling corticosterone (CORT) activity secretion in relation to roads. These indirect effects include parental CORT activity, nest-site characteristics, and parental provisioning. And finally, we assessed possible fitness consequences of roads through measures of fledging success. Nestlings near roads had increased CORT activity, elevated at both baseline and stress-induced levels. Surprisingly, these nestlings were also bigger. Generally, greater corticosterone activity is associated with reduced growth. However, the hypothalamic-pituitary-adrenal axis matures through the nestling period (as nestlings get larger, HPA-activation is greater). Although much of the variance in CORT responses was explained by body size, nestling CORT responses were higher close to roads after controlling for developmental differences. Indirect effects of roads may be mediated through paternal care. Nestling CORT responses were correlated with paternal CORT responses and paternal provisioning increased near roads. Hence, nestlings near roads may be larger due to increased paternal attentiveness. And finally, nest predation was higher for nests close to the road. Roads have apparent costs for white-crowned sparrow nestlings--increased predation, and apparent benefits--increased size. The elevation in CORT activity seems to reflect both increased size (benefit) and elevation due to road

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  3. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  4. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns lead by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  5. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  6. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  7. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.

    2013-01-01

    As our understanding of the world around us increases it becomes more challenging to make use of what we already know, and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers and even small clusters to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for a range of high performance applications which can utilize parallel compute systems effectively, which have efficient data handling strategies and which have the capacity to utilise current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements. © 2013 Springer-Verlag.

  8. Increasing high school girls' exposure to computing activities with e-textiles: challenges and lessons learned

    DEFF Research Database (Denmark)

    Borsotti, Valeria

    2017-01-01

    The number of female students in computer science degrees has been rapidly declining in Denmark in the past 40 years, as in many other European and North-American countries. The main reasons behind this phenomenon are widespread gender stereotypes about who is best suited to pursue a career in CS; stereotypes about computing as a ‘male’ domain; widespread lack of pre-college CS education and perceptions of computing as not socially relevant. STEAM activities have often been used to bridge the gender gap and to broaden the appeal of computing among children and youth. This contribution examines a STEAM pilot workshop organized by the IT University of Copenhagen which targeted high school girls. The workshop aimed to introduce the girls to coding and computing through hands-on e-textiles activities realized with the Protosnap Lilypad Development board. This contribution discusses the advantages...

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, are provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each others time zones by monitoring/debugging pilot jobs sent from the facto...

  10. Fragmentation of urban forms and the environmental consequences: results from a high-spatial resolution model system

    Science.gov (United States)

    Tang, U. W.; Wang, Z. S.

    2008-10-01

    Each city has its unique urban form. The importance of urban form for sustainable development has been recognized in recent years. Traditionally, air quality modelling in a city is done at the mesoscale, with a grid resolution of kilometers, regardless of its urban form. This paper introduces a GIS-based air quality and noise model system developed to study the built environment of highly compact urban forms. Compared with traditional mesoscale air quality model systems, the present model system has a higher spatial resolution, down to individual buildings along both sides of the street. Applying the developed model system to the Macao Peninsula, with its highly compact urban forms, the average spatial resolution of input and output data is as high as 174 receptor points per km2. Based on this high-spatial-resolution input/output dataset, this study shows that even highly compact urban forms can be fragmented into a very small geographic scale of less than 3 km2. This is due to the significant temporal variation of urban development. The variation of urban form in each fragment in turn affects air dispersion, traffic conditions, and thus air quality and noise on a measurable scale.

  11. P2P Technology for High-Performance Computing: An Overview

    Science.gov (United States)

    Follen, Gregory J. (Technical Monitor); Berry, Jason

    2003-01-01

    The transition from cluster computing to peer-to-peer (P2P) high-performance computing has recently attracted the attention of the computer science community. It has been recognized that existing local networks and dedicated clusters of headless workstations can serve as inexpensive yet powerful virtual supercomputers. It has also been recognized that the vast number of lower-end computers connected to the Internet stay idle for as long as 90% of the time. The growing speed of Internet connections and the high availability of free CPU time encourage exploration of the possibility to use the whole Internet rather than local clusters as a massively parallel yet almost freely available P2P supercomputer. As a part of a larger project on P2P high-performance computing, it has been my goal to compile an overview of the P2P paradigm. I have studied various P2P platforms and I have compiled systematic brief descriptions of their most important characteristics. I have also experimented and obtained hands-on experience with selected P2P platforms, focusing on those that seem promising with respect to P2P high-performance computing. I have also compiled relevant literature and web references. I have prepared a draft technical report and I have summarized my findings in a poster paper.

  12. Grand Challenges: High Performance Computing and Communications. The FY 1992 U.S. Research and Development Program.

    Science.gov (United States)

    Federal Coordinating Council for Science, Engineering and Technology, Washington, DC.

    This report presents a review of the High Performance Computing and Communications (HPCC) Program, which has as its goal the acceleration of the commercial availability and utilization of the next generation of high performance computers and networks in order to: (1) extend U.S. technological leadership in high performance computing and computer…

  13. RAPPORT: running scientific high-performance computing applications on the cloud.

    Science.gov (United States)

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  14. Distributed Computation of the knn Graph for Large High-Dimensional Point Sets.

    Science.gov (United States)

    Plaku, Erion; Kavraki, Lydia E

    2007-03-01

    High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors.
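
    A minimal single-machine sketch of the decomposition idea described above is given below: the query points are split into blocks and each worker computes the k nearest neighbours of its block against the full data set. The real framework uses message passing (MPI) across a cluster of processors; Python's multiprocessing and brute-force distances stand in here purely for illustration.

      # Toy sketch of a block-distributed knn-graph computation.
      import numpy as np
      from multiprocessing import Pool

      K = 5
      DATA = None                     # full point set, set per worker below

      def _init(points):
          global DATA
          DATA = points

      def knn_block(block):
          """Indices of the K nearest neighbours for each query point in block."""
          dists = np.linalg.norm(DATA[None, :, :] - block[:, None, :], axis=2)
          # Column 0 of the sort is the point itself (distance 0), so skip it.
          return np.argsort(dists, axis=1)[:, 1:K + 1]

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          points = rng.standard_normal((1000, 8))      # toy "high-dimensional" set
          blocks = np.array_split(points, 4)           # one block per worker
          with Pool(4, initializer=_init, initargs=(points,)) as pool:
              knn_graph = np.vstack(pool.map(knn_block, blocks))
          print("knn graph shape:", knn_graph.shape)   # (1000, K)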

  15. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration; Velikhov, Vasily; Konoplich, Rostislav

    2017-01-01

    The Higgs boson physics is one of the most important and promising fields of study in modern high energy physics. It is important to note that GRID computing resources are becoming strictly limited due to the increasing amount of statistics required for physics analyses and the unprecedented LHC performance. One of the possibilities to address the shortfall of computing resources is the usage of computer institutes' clusters, commercial computing resources and supercomputers. To perform precision measurements of the Higgs boson properties under these constraints, effective instruments to simulate kinematic distributions of signal events are also highly required. In this talk we give a brief description of the modern distribution reconstruction method called Morphing and perform a few efficiency tests to demonstrate its potential. These studies have been performed on the WLCG and the Kurchatov Institute’s Data Processing Center, including a Tier-1 GRID site as well as a supercomputer. We also analyze the CPU efficienc...

  16. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing.

    Science.gov (United States)

    Shatil, Anwar S; Younas, Sohail; Pourreza, Hossein; Figley, Chase R

    2015-01-01

    With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications.

  17. The Relationship Between Utilization of Computer Games and Spatial Abilities Among High School Students

    Directory of Open Access Journals (Sweden)

    Vahid Motamedi

    2015-07-01

    Full Text Available This study aimed at investigating the relationship between computer game use and spatial abilities among high school students. The sample consisted of 300 high school male students selected through multi-stage cluster sampling. Data gathering tools consisted of a researcher-made questionnaire (to collect information on computer game usage) and the Newton and Bristol spatial ability questionnaire, with a reliability value of .85. Data were analyzed using Pearson’s correlation coefficient. Results showed that there was a meaningful relationship between the use of computer games and spatial ability (r = .59 and p = .00), there was a meaningful relationship between the use of computer games and perceived spatial ability (r = .60 and p = .00), there was a meaningful relationship between the use of computer games and mental rotation ability (r = .48 and p = .00), and there was a meaningful relationship between computer game use and spatial visualization ability (r = .48 and p = .00). In general, the findings showed there was a positive and significant relationship between the use of computer games and spatial abilities in students.
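
    The analysis reported above is an ordinary Pearson correlation; the toy snippet below shows the computation on made-up numbers (not the study's data).

      # Pearson's r between hypothetical game-use hours and spatial-test scores.
      from scipy.stats import pearsonr

      game_hours   = [2, 5, 1, 8, 3, 6, 7, 4, 9, 2]           # invented values
      spatial_test = [55, 62, 50, 75, 58, 68, 70, 60, 80, 53]  # invented values

      r, p = pearsonr(game_hours, spatial_test)
      print(f"r = {r:.2f}, p = {p:.3f}")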

  18. Thinking processes used by high-performing students in a computer programming task

    Directory of Open Access Journals (Sweden)

    Marietjie Havenga

    2011-07-01

    Full Text Available Computer programmers must be able to understand programming source code and write programs that execute complex tasks to solve real-world problems. This article is a transdisciplinary study at the intersection of computer programming, education and psychology. It outlines the role of mental processes in the process of programming and indicates how successful thinking processes can support computer science students in writing correct and well-defined programs. A mixed methods approach was used to better understand the thinking activities and programming processes of participating students. Data collection involved both computer programs and students’ reflective thinking processes recorded in their journals. This enabled analysis of psychological dimensions of participants’ thinking processes and their problem-solving activities as they considered a programming problem. Findings indicate that the cognitive, reflective and psychological processes used by high-performing programmers contributed to their success in solving a complex programming problem. Based on the thinking processes of high performers, we propose a model of integrated thinking processes, which can support computer programming students. Keywords: Computer programming, education, mixed methods research, thinking processes. Disciplines: Computer programming, education, psychology

  19. SCEAPI: A unified Restful Web API for High-Performance Computing

    Science.gov (United States)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need a high-quality programming interface for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with the HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer, and job management for creating, submitting and monitoring jobs, and show how to use SCEAPI in a straightforward way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution to extend opportunistic HPC resources.
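
    A RESTful job-submission workflow of the kind SCEAPI exposes can be sketched as below. The base URL, endpoint paths, token header, and payload fields are hypothetical placeholders invented for this illustration; they are not taken from the SCEAPI specification.

      # Hypothetical sketch of authenticating, submitting and polling a job over HTTP.
      import requests

      BASE = "https://sceapi.example.org/api/v1"     # placeholder base URL

      # 1. Authenticate and obtain a token (token scheme assumed, not documented here).
      token = requests.post(f"{BASE}/auth",
                            json={"user": "alice", "password": "***"},
                            timeout=30).json()["token"]
      headers = {"Authorization": f"Bearer {token}"}

      # 2. Upload an input file and submit a batch job to an HPC cluster.
      with open("input.dat", "rb") as f:
          requests.put(f"{BASE}/files/input.dat", data=f, headers=headers, timeout=300)

      job_spec = {"cluster": "example-cluster", "script": "run_job.sh", "cores": 128}
      job_id = requests.post(f"{BASE}/jobs", json=job_spec,
                             headers=headers, timeout=30).json()["id"]

      # 3. Poll the job state (single check shown).
      state = requests.get(f"{BASE}/jobs/{job_id}", headers=headers,
                           timeout=30).json()["state"]
      print("job", job_id, "is", state)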

  20. REEFER: a digital computer program for the simulation of high energy electron tubes. [Reefer

    Energy Technology Data Exchange (ETDEWEB)

    Boers, J.E.

    1976-11-01

    A digital computer program for the simulation of very high-energy electron and ion beams is described. The program includes space-charge effects through the solution of Poisson's equation and magnetic effects (both induced and applied) through the relativistic trajectory equations. Relaxation techniques are employed while alternately computing electric fields and trajectories. Execution time is generally less than 15 minutes on a CDC 6600 digital computer. Either space-charge-limited or field-emission sources may be simulated. The input data is described in detail and an example data set is included.
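
    Only the field-relaxation idea mentioned above is illustrated below: a Jacobi relaxation sweep for Poisson's equation on a small 2D grid. The grid size, units, and source term are arbitrary assumptions, and the coupling to relativistic trajectory integration that REEFER performs is not reproduced here.

      # Jacobi relaxation for Poisson's equation (potential phi, charge density rho,
      # units chosen so that the permittivity is 1); boundaries held at zero.
      import numpy as np

      n, h = 64, 1.0 / 63          # grid points and spacing
      phi = np.zeros((n, n))
      rho = np.zeros((n, n))
      rho[n // 2, n // 2] = 1.0    # point-like space charge in the middle

      for _ in range(5000):        # relaxation sweeps
          phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                    phi[1:-1, :-2] + phi[1:-1, 2:] +
                                    h * h * rho[1:-1, 1:-1])
      print("potential at the source node:", phi[n // 2, n // 2])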

  1. A Queue Simulation Tool for a High Performance Scientific Computing Center

    Science.gov (United States)

    Spear, Carrie; McGalliard, James

    2007-01-01

    The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high-performance, highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long-running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
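
    A discrete event simulation of a batch queue can be boiled down to a few lines; the sketch below (a toy FIFO scheduler over a fixed CPU pool, not the NCCS tool itself) shows the basic event loop, with all job sizes and runtimes invented for illustration.

      # Toy discrete-event model of a FIFO batch queue with a fixed CPU pool.
      import heapq

      TOTAL_CPUS = 512
      # (submit_time, cpus_requested, runtime) for a few made-up jobs
      jobs = [(0, 256, 10.0), (1, 256, 5.0), (2, 128, 2.0), (3, 512, 8.0)]

      free, clock = TOTAL_CPUS, 0.0
      running = []                 # min-heap of (finish_time, cpus) for running jobs

      for submit, cpus, runtime in jobs:
          clock = max(clock, submit)
          # Reclaim CPUs from jobs that have already finished, or that must finish
          # before this job can fit; the clock jumps to each completion event.
          while running and (free < cpus or running[0][0] <= clock):
              finish, c = heapq.heappop(running)
              clock = max(clock, finish)
              free += c
          free -= cpus
          heapq.heappush(running, (clock + runtime, cpus))
          print(f"job({cpus:3d} cpus) waited {clock - submit:4.1f} time units")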

  2. Building professional identity as computer science teachers: Supporting high school computer science teachers through reflection and community building

    Science.gov (United States)

    Ni, Lijun

    Computing education requires qualified computing teachers. The reality is that too few high schools in the U.S. have computing/computer science teachers with formal computer science (CS) training, and many schools do not have a CS teacher at all. Moreover, the teacher retention rate is often low, and the attrition rate for beginning teachers is particularly high in secondary education. Therefore, in addition to the need for preparing new CS teachers, we also need to support those teachers we have recruited and trained to become better teachers and continue to teach CS. Teacher education literature, especially teacher identity theory, suggests that a strong sense of teacher identity is a major indicator or feature of committed, qualified teachers. However, under the current educational system in the U.S., it could be challenging to establish teacher identity for high school (HS) CS teachers, e.g., due to a lack of teacher certification for CS. This thesis work centers upon understanding the sense of identity HS CS teachers hold and exploring ways of supporting their identity development through a professional development program: the Disciplinary Commons for Computing Educators (DCCE). DCCE has a major focus on promoting reflection on teaching practice and community building. With scaffolded activities such as course portfolio creation, peer review and peer observation among a group of HS CS teachers, it offers opportunities for CS teachers to explicitly reflect on and narrate their teaching, which is a central process of identity building through their participation within the community. In this thesis research, I explore the development of CS teacher identity through professional development programs. I first conducted an interview study with local HS CS teachers to understand their sense of identity and factors influencing their identity formation. I designed and enacted the professional development program (DCCE) and conducted case studies with DCCE participants to understand how their

  3. Virtualization in High-Performance Computing: An Analysis of Physical and Virtual Node Performance

    OpenAIRE

    Jungels, Glendon M

    2012-01-01

    The process of virtualizing computing resources allows an organization to make more efficient use of its resources. In addition, this process enables flexibility that deployment on raw hardware does not. Virtualization, however, comes with a performance penalty. This study examines the performance of utilizing virtualization technology for use in high performance computing to determine the suitability of using this technology. It makes use of a small (4 node) virtual cluster as well as a ...

  4. A Distributed Multi Agents Based Platform for High Performance Computing Infrastructures

    OpenAIRE

    Kiourt, Chairi; Kalles, Dimitris

    2016-01-01

    This work introduces a novel, modular, layered web based platform for managing machine learning experiments on grid-based High Performance Computing infrastructures. The coupling of the communication services offered by the grid, with an administration layer and conventional web server programming, via a data synchronization utility, leads to the straightforward development of a web-based user interface that allows the monitoring and managing of diverse online distributed computing applicatio...

  5. Challenges and Opportunities for Security in High-Performance Computing Environments

    OpenAIRE

    Peisert, S

    2017-01-01

    High-performance computing (HPC) environments have numerous distinctive elements that make securing them different than securing traditional computing systems. In some cases this is due to the way that HPC systems are implemented. In other cases, it is due to the way that HPC systems are used, or a combination of both issues. In this article, we discuss these distinctions and also discuss which security procedures and mechanisms are and are not appropriate in HPC environments, and where gaps ...

  6. Business Models of High Performance Computing Centres in Higher Education in Europe

    Science.gov (United States)

    Eurich, Markus; Calleja, Paul; Boutellier, Roman

    2013-01-01

    High performance computing (HPC) service centres are a vital part of the academic infrastructure of higher education organisations. However, despite their importance for research and the necessary high capital expenditures, business research on HPC service centres is mostly missing. From a business perspective, it is important to find an answer to…

  7. The Relationship between Internet and Computer Game Addiction Level and Shyness among High School Students

    Science.gov (United States)

    Ayas, Tuncay

    2012-01-01

    This study is conducted to determine the relationship between the internet and computer games addiction level and the shyness among high school students. The participants of the study consist of 365 students attending high schools in Giresun city centre during 2009-2010 academic year. As a result of the study a positive, meaningful, and high…

  8. Comparison of Online Game Addiction in High School Students with Habitual Computer Use and Online Gaming

    Science.gov (United States)

    Müezzin, Emre

    2015-01-01

    The aim of this study is to compare online game addiction in high school students with habitual computer use and online gaming. The sample, selected through the criterion sampling method, consists of 131 high school students: 61.8% (n = 81) female and 38.2% (n = 50) male. The "Online Game Addiction Scale" developed by Kaya and Basol…

  9. Memory as a Factor in the Computational Efficiency of Dyslexic Children with High Abstract Reasoning Ability.

    Science.gov (United States)

    Steeves, K. Joyce

    1983-01-01

    A study involving dyslexic children (10-14 years old) with average and high reasoning ability and nondyslexic children with and without superior mathematical ability suggested that the high reasoning dyslexic Ss had similar abstract reasoning ability but lower computation and memory skills than mathematically gifted nondyslexic Ss. (CL)

  10. High-quality forage can replace concentrate when cows enter the deposition phase without negative consequences for milk production

    DEFF Research Database (Denmark)

    Hymøller, Lone; Alstrup, Lene; Larsen, Mette Krogh

    2014-01-01

    Mobilization and deposition in cows are different strategies of metabolism; hence, the aim was to study the possibility of reducing the crude protein (CP) supply during deposition to limit the use of protein supplements and minimize the environmental impact. A total of 61 Jersey and 107 Holstein ......, concentrate in the mixed ration can be substituted with high-quality forage during deposition without negative effects on milk yield and composition when a sufficient CP level is ensured....

  11. High-resolution computed tomography and histopathological findings in hypersensitivity pneumonitis: a pictorial essay

    Energy Technology Data Exchange (ETDEWEB)

    Torres, Pedro Paulo Teixeira e Silva; Moreira, Marise Amaral Reboucas; Silva, Daniela Graner Schuwartz Tannus; Moreira, Maria Auxiliadora do Carmo [Universidade Federal de Goias (UFG), Goiania, GO (Brazil); Gama, Roberta Rodrigues Monteiro da [Hospital do Cancer de Barretos, Barretos, SP (Brazil); Sugita, Denis Masashi, E-mail: pedroptstorres@yahoo.com.br [Anapolis Unievangelica, Anapolis, GO (Brazil)

    2016-03-15

    Hypersensitivity pneumonitis is a diffuse interstitial and granulomatous lung disease caused by the inhalation of any one of a number of antigens. The objective of this study was to illustrate the spectrum of abnormalities in high-resolution computed tomography and histopathological findings related to hypersensitivity pneumonitis. We retrospectively evaluated patients who had been diagnosed with hypersensitivity pneumonitis (on the basis of clinical-radiological or clinical-radiological-pathological correlations) and had undergone lung biopsy. Hypersensitivity pneumonitis is clinically divided into acute, subacute, and chronic forms; high-resolution computed tomography findings correlate with the time of exposure; and the two occasionally overlap. In the subacute form, centrilobular micronodules, ground glass opacities, and air trapping are characteristic high-resolution computed tomography findings, whereas histopathology shows lymphocytic inflammatory infiltrates, bronchiolitis, variable degrees of organizing pneumonia, and giant cells. In the chronic form, high-resolution computed tomography shows traction bronchiectasis, honeycombing, and lung fibrosis, the last also being seen in the biopsy sample. A definitive diagnosis of hypersensitivity pneumonitis can be made only through a multidisciplinary approach, by correlating clinical findings, exposure history, high-resolution computed tomography findings, and lung biopsy findings. (author)

  12. A new dynamic model for highly efficient mass transfer in aerated bioreactors and consequences for kLa identification.

    Science.gov (United States)

    Müller, Stefan; Murray, Douglas B; Machne, Rainer

    2012-12-01

    Gas-liquid mass transfer is often rate-limiting in laboratory and industrial cultures of aerobic or autotrophic organisms. The volumetric mass transfer coefficient kLa is a crucial characteristic for comparing, optimizing, and upscaling mass transfer efficiency of bioreactors. Reliable dynamic models and resulting methods for parameter identification are needed for quantitative modeling of microbial growth dynamics. We describe a laboratory-scale stirred tank reactor (STR) with a highly efficient aeration system (kLa ≈ 570 h-1). The reactor can sustain yeast culture with high cell density and high oxygen uptake rate, leading to a significant drop in gas concentration from inflow to outflow (by 21%). Standard models fail to predict the observed mass transfer dynamics and to identify kLa correctly. In order to capture the concentration gradient in the gas phase, we refine a standard ordinary differential equation (ODE) model and obtain a system of partial integro-differential equations (PIDE), for which we derive an approximate analytical solution. Specific reactor configurations, in particular a relatively short bubble residence time, allow a quasi steady-state approximation of the PIDE system by a simpler ODE model which still accounts for the concentration gradient. Moreover, we perform an appropriate scaling of all variables and parameters. In particular, we introduce the dimensionless overall efficiency κ, which is more informative than kLa since it combines the effects of gas inflow, exchange, and solution. Current standard models of mass transfer in laboratory-scale aerated STRs neglect the gradient in the gas concentration, which arises from highly efficient bubbling systems and high cellular exchange rates. The resulting error in the identification of κ (and hence kLa) increases dramatically with increasing mass transfer efficiency. Notably, the error differs between cell-free and culture-based methods of parameter identification
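
    For orientation, the standard dissolved-oxygen balance that the refined models in this abstract improve upon is dC/dt = kLa (C* - C) - OUR, i.e. gas-liquid transfer minus the culture's oxygen uptake rate. The sketch below integrates this simple model with assumed parameter values; it deliberately ignores the gas-phase gradient that the paper's PIDE/ODE refinements account for.

      # Standard (gradient-free) kLa model: dC/dt = kLa*(C_star - C) - OUR.
      from scipy.integrate import solve_ivp

      kLa    = 570.0     # volumetric mass transfer coefficient [1/h] (from the abstract)
      C_star = 0.25      # assumed O2 saturation concentration [mmol/L]
      OUR    = 60.0      # assumed constant oxygen uptake rate [mmol/(L*h)]

      def dissolved_oxygen(t, C):
          return kLa * (C_star - C) - OUR

      sol = solve_ivp(dissolved_oxygen, (0.0, 0.1), [C_star], max_step=1e-3)
      print("simulated steady-state O2:", sol.y[0, -1], "mmol/L")
      print("analytic steady state    :", C_star - OUR / kLa)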

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned, it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhE-DEx topology. Since mid-February, a transfer volume of about 12 P...

  14. An open, parallel I/O computer as the platform for high-performance, high-capacity mass storage systems

    Science.gov (United States)

    Abineri, Adrian; Chen, Y. P.

    1992-01-01

    APTEC Computer Systems is a Portland, Oregon-based manufacturer of I/O computers. APTEC's work in the context of high-density storage media is on programs requiring real-time data capture with low-latency processing and storage requirements. An example of APTEC's work in this area is the Loral/Space Telescope Data Archival and Distribution System, an existing system designed by Loral AeroSys that utilizes an APTEC I/O computer. The key attributes of a system architecture that is suitable for this environment are as follows: (1) data acquisition alternatives; (2) a wide range of supported mass storage devices; (3) data processing options; (4) data availability through standard network connections; and (5) an overall system architecture (hardware and software) designed for high bandwidth and low latency. APTEC's approach is outlined in this document.

  15. Myeloid-specific deletion of NOX2 prevents the metabolic and neurologic consequences of high fat diet.

    Directory of Open Access Journals (Sweden)

    Jennifer K Pepping

    High fat diet-induced obesity is associated with inflammatory and oxidative signaling in macrophages that likely participates in metabolic and physiologic impairment. One key factor that could drive pathologic changes in macrophages is the pro-inflammatory, pro-oxidant enzyme NADPH oxidase. However, NADPH oxidase is a pleiotropic enzyme with both pathologic and physiologic functions, ruling out indiscriminate NADPH oxidase inhibition as a viable therapy. To determine if targeted inhibition of monocyte/macrophage NADPH oxidase could mitigate obesity pathology, we generated mice that lack the NADPH oxidase catalytic subunit NOX2 in myeloid lineage cells. C57BL/6 control (NOX2-FL) and myeloid-deficient NOX2 (mNOX2-KO) mice were given a high fat diet for 16 weeks and subjected to comprehensive metabolic, behavioral, and biochemical analyses. Data show that mNOX2-KO mice had lower body weight, delayed adiposity, attenuated visceral inflammation, and decreased macrophage infiltration and cell injury in visceral adipose tissue relative to control NOX2-FL mice. Moreover, the effects of high fat diet on glucose regulation and circulating lipids were attenuated in mNOX2-KO mice. Finally, memory was impaired and markers of brain injury increased in NOX2-FL, but not mNOX2-KO, mice. Collectively, these data indicate that NOX2 signaling in macrophages participates in the pathogenesis of obesity, and reinforce a key role for macrophage inflammation in diet-induced metabolic and neurologic decline. Development of macrophage/immune-specific NOX-based therapies could thus potentially be used to preserve metabolic and neurologic function in the context of obesity.

  16. On the counterintuitive consequences of high-performance work practices in cross-border post-merger human integration

    DEFF Research Database (Denmark)

    Vasilaki, A.; Smith, Pernille; Giangreco, A.

    2012-01-01

    This article investigates the impact of systemic and integrated human resource practices [i.e., high-performance work practices (HPWPs)] on human integration and how their implementation affects employees' behaviours and attitudes towards post-merger human integration. We find that the implementation of HPWPs, such as communication, employee involvement, and team building, may not always produce the expected effects on human integration; rather, it can have the opposite effects if top management does not closely monitor the immediate results of deploying such practices. Implications for managers dealing with post...

  17. The health of homeless people in high-income countries: descriptive epidemiology, health consequences, and clinical and policy recommendations

    Science.gov (United States)

    Fazel, Seena; Geddes, John R; Kushel, Margot

    2015-01-01

    In the European Union, more than 400 000 individuals are homeless on any one night, and more than 600 000 are homeless in the USA. The causes of homelessness are an interaction between individual and structural factors. Individual factors include poverty, family problems, and mental health and substance misuse problems. The availability of low-cost housing is thought to be the most important structural determinant of homelessness. Homeless people have higher rates of premature mortality than the rest of the population, especially from suicide and unintentional injuries, and an increased prevalence of a range of infectious diseases, mental disorders, and substance misuse. High rates of non-communicable diseases have also been described, with evidence of accelerated ageing. Although engagement with health services and adherence to treatments are often compromised, homeless people typically attend the emergency department more often than non-homeless people. We discuss several recommendations to improve the surveillance of morbidity and mortality in homeless people. Programmes focused on high-risk groups, such as individuals leaving prisons, psychiatric hospitals, and the child welfare system, and the introduction of national and state-wide plans that target homeless people are likely to improve outcomes. PMID:25390578

  18. The behavioral and health consequences of sleep deprivation among U.S. high school students: relative deprivation matters.

    Science.gov (United States)

    Meldrum, Ryan Charles; Restivo, Emily

    2014-06-01

    The objective was to evaluate whether the strength of the association between sleep deprivation and negative behavioral and health outcomes varies according to the relative amount of sleep deprivation experienced by adolescents. Data from the 2011 Youth Risk Behavior Survey of high school students (N=15,364) were analyzed. Associations were examined on weighted data using logistic regression. Twelve outcomes were examined, ranging from weapon carrying to obesity. The primary independent variable was a self-reported measure of the average number of hours slept on school nights. Participants who reported deprivations in sleep were at increased risk of a number of negative outcomes. However, this varied considerably across different degrees of sleep deprivation. For each of the outcomes considered, those who slept less than 5 h were more likely to report negative outcomes (adjusted odds ratios ranging from 1.38 to 2.72; p < .05) than those sleeping 8 or more hours. However, less extreme forms of sleep deprivation were, in many instances, unrelated to the outcomes considered. Among U.S. high school students, deficits in sleep are significantly and substantively associated with a variety of negative outcomes, and this association is particularly pronounced for students achieving fewer than 5 h of sleep at night. Copyright © 2014 Elsevier Inc. All rights reserved.
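
    As a rough illustration of the analysis described above (survey-weighted logistic regression producing adjusted odds ratios for a sleep-deprivation indicator), a minimal sketch follows. It is not the authors' code: the data are synthetic stand-ins for the YRBS file, the weights are simple frequency weights, and the single exposure variable and all names are assumed.

      # Hypothetical sketch of weighted logistic regression yielding an odds ratio,
      # in the spirit of the analysis described above (synthetic data; the real
      # study used 2011 YRBS survey data and multiple outcomes and covariates).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 15364
      hours_slept = rng.integers(3, 10, size=n)        # self-reported school-night sleep
      short_sleep = (hours_slept < 5).astype(float)    # exposure: fewer than 5 h
      weights = rng.integers(1, 4, size=n)             # stand-in frequency weights (assumed)

      # Synthetic binary outcome with elevated risk for short sleepers.
      p = 1.0 / (1.0 + np.exp(-(-1.5 + 0.7 * short_sleep)))
      outcome = rng.binomial(1, p)

      X = sm.add_constant(short_sleep)                 # intercept + exposure indicator
      fit = sm.GLM(outcome, X, family=sm.families.Binomial(),
                   freq_weights=weights).fit()
      print("adjusted odds ratio for <5 h sleep:", np.exp(fit.params[1]))
      print("p-value:", fit.pvalues[1])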

  19. High-quality forage can replace concentrate when cows enter the deposition phase without negative consequences for milk production.

    Science.gov (United States)

    Hymøller, L; Alstrup, L; Larsen, M K; Lund, P; Weisbjerg, M R

    2014-07-01

    Mobilization and deposition in cows are different strategies of metabolism; hence, the aim was to study the possibility of reducing the crude protein (CP) supply during deposition to limit the use of protein supplements and minimize the environmental impact. A total of 61 Jersey and 107 Holstein cows were assigned to 4 mixed rations in a 2 × 2 factorial design with 2 concentrate to forage ratios (CFR) and 2 CP levels: high CFR (40:60) and recommended CP [16% of dry matter (DM); HCFR-RP], high CFR (40:60) and low CP (14% of DM; HCFR-LP), low CFR (30:70) and recommended CP (16% of DM; LCFR-RP), and low CFR (30:70) and low CP (14% of DM; LCFR-LP), where RP met the Danish recommendations. Cows were fed concentrate in an automatic milking unit. After calving, cows were fed HCFR-RP until entering deposition, defined as 11 kg (Jersey) or 15 kg (Holstein) of weight gain from the lowest weight after calving. Subsequently, cows either remained on HCFR-RP or changed to one of the other mixed rations. Comparing strategies during wk 9 to 30 of lactation showed higher dry matter intake (DMI) of mixed ration on HCFR compared with LCFR and on RP compared with LP. The DMI of the concentrate was higher on LCFR than on HCFR and higher on LP than on RP, resulting in overall higher DMI on HCFR and RP than on LCFR and LP. Crude protein intakes were higher on RP than on LP and starch intakes were higher on HCFR than on LCFR. Intakes of neutral detergent fiber tended to be higher on LCFR than on HCFR. Intakes of net energy for lactation were affected by CFR and CP level, with a higher intake on HCFR and RP than on LCFR and LP. No interactions were found between CFR and CP level for any feed intake variables. Yields of milk and energy-corrected milk were higher on RP than on LP, with no difference in yield persistency after the ration change. Milk composition did not differ among strategies but the protein to fat ratio was higher on HCFR than on LCFR and tended to be lower on RP than on

  20. High-performance computing for structural mechanics and earthquake/tsunami engineering

    CERN Document Server

    Hori, Muneo; Ohsaki, Makoto

    2016-01-01

    Huge earthquakes and tsunamis have caused serious damage to important structures such as civil infrastructure elements, buildings, and power plants around the globe. To quantitatively evaluate such damage processes and to design effective prevention and mitigation measures, the latest high-performance computational mechanics technologies, which include terascale to petascale computers, can offer powerful tools. The phenomena covered in this book include seismic wave propagation in the crust and soil, seismic response of infrastructure elements such as tunnels considering soil-structure interactions, seismic response of high-rise buildings, seismic response of nuclear power plants, tsunami run-up over coastal towns and tsunami inundation considering fluid-structure interactions. The book provides all necessary information for addressing these phenomena, ranging from the fundamentals of high-performance computing for finite element methods, key algorithms of accurate dynamic structural analysis, fluid flows ...
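
    The computational core of the dynamic structural analyses mentioned above is time integration of the semi-discrete equation of motion M a(t) + C v(t) + K u(t) = f(t) obtained from a finite element discretization (mass, damping, and stiffness matrices acting on acceleration, velocity, and displacement). The sketch below shows a generic explicit central-difference integrator on a tiny, made-up two-degree-of-freedom system; it illustrates the kind of algorithm the book covers rather than code from it, and all matrices, damping coefficients, and loading are assumed.

      # Generic central-difference (explicit) integration of M*a + C*v + K*u = f(t),
      # the kernel of dynamic structural analysis after finite element discretization.
      # The 2-DOF matrices, damping, and loading are assumed for illustration only.
      import numpy as np

      M = np.diag([2.0, 1.0])                    # lumped mass matrix
      K = np.array([[400.0, -200.0],
                    [-200.0,  200.0]])           # stiffness matrix
      C = 0.05 * M + 0.002 * K                   # Rayleigh damping (assumed coefficients)

      dt, n_steps = 1.0e-3, 5000                 # time step chosen below the stability limit

      def load(t):
          # Assumed harmonic forcing on the first degree of freedom.
          return np.array([10.0 * np.sin(2.0 * np.pi * 2.0 * t), 0.0])

      u = np.zeros(2)
      v = np.zeros(2)
      a = np.linalg.solve(M, load(0.0) - C @ v - K @ u)   # initial acceleration
      u_prev = u - dt * v + 0.5 * dt**2 * a               # fictitious previous step u_{-1}

      lhs = M / dt**2 + C / (2.0 * dt)                    # effective "mass" matrix
      for step in range(n_steps):
          t = step * dt
          rhs = load(t) - (K - 2.0 * M / dt**2) @ u - (M / dt**2 - C / (2.0 * dt)) @ u_prev
          u_next = np.linalg.solve(lhs, rhs)
          u_prev, u = u, u_next

      print("displacements after", n_steps * dt, "s:", u)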